SPRINGER BRIEFS IN ETHICS
Perihan Elif Ekmekci Berna Arda
Artificial Intelligence and Bioethics
SpringerBriefs in Ethics
Springer Briefs in Ethics envisions a series of short publications in areas such as business ethics, bioethics, science and engineering ethics, food and agricultural ethics, environmental ethics, human rights and the like. The intention is to present concise summaries of cutting-edge research and practical applications across a wide spectrum. Springer Briefs in Ethics are seen as complementing monographs and journal articles with compact volumes of 50 to 125 pages, covering a wide range of content from professional to academic. Typical topics might include:
• Timely reports on state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• In-depth case studies or clinical examples
• Presentations of core concepts that students must understand in order to make independent contributions
More information about this series at http://www.springer.com/series/10184
Perihan Elif Ekmekci · Berna Arda
Artificial Intelligence and Bioethics
Perihan Elif Ekmekci TOBB University of Economics and Technology Ankara, Turkey
Berna Arda Ankara University Medical School Ankara, Turkey
ISSN 2211-8101 ISSN 2211-811X (electronic) SpringerBriefs in Ethics ISBN 978-3-030-52447-0 ISBN 978-3-030-52448-7 (eBook) https://doi.org/10.1007/978-3-030-52448-7 © The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
When the effort to understand and explain the universe is combined with human creativity and future design, we see works of science fiction. One of the pioneers of this combination was undoubtedly Jules Verne with his brilliant works. His book “From the Earth to the Moon,” published in 1865, was one of the first examples of the science fiction novel. Science fiction, however, has always been ahead of its era: a real journey to the moon was accomplished only in 1969, about a century after the publication of the book. The twentieth century was a period in which scientific developments progressed with giant steps. In the twenty-first century, we are now in an age in which scientific knowledge increases exponentially and the half-life of knowledge is very short. The amount of information to be mastered has grown and diversified so much that it has reached a level well above human control. The book you are holding is about Artificial Intelligence (AI) and bioethics. It is intended to draw attention to the value problems of an enormous phenomenon of human creativity with uncertain limits. The book consists of the following chapters. The first section is History of Artificial Intelligence. In this section, a historical perspective on the development of technology and AI is presented. We take a quick tour from the ancient philosophers to Rene Descartes, and from Lady Ada to Alan Turing: the prominent pioneers of the AI concept who contributed to the philosophy and the actual creation of this new phenomenon, some of whom suffered many injustices during their lifetimes. This section ends with a short description of the state of the art of AI. The second section is Definitions. This section aims to explain some of the key terms used in the book to familiarize readers with them. We also aim to seek an answer to the following question: “what makes an entity–human or machine–intelligent?” In the face of this fundamental question, relevant concepts such as strong AI, weak AI, heuristics, and the Turing test, which have become increasingly clear over time, are discussed separately in the light of the literature. The Personhood and Artificial Intelligence section addresses a fundamental question about the ethical agency of AI. Personhood is an important philosophical, psychological, and legal concept for AI because of its implications for moral responsibility. Didn’t the law build all punishment on the admission that individuals with
personhood should at the same time be responsible for what they do (and sometimes do not do)? Until recently, we all lived in a world dominated by an anthropocentric approach. Human beings have always stood at the top of the hierarchy among all living and non-living entities and have always been regarded as the most valuable. However, the emergence of AI and its potential to develop into human-level or above-human-level intelligence challenges this superior position by claiming that AI, too, should be acknowledged as having personhood. Many examples from daily life, such as autonomous vehicles, military drones, and early warning systems, are discussed in this section. The following section is on bioethical inquiries about AI. The first question concerns the main differences between conventional technology and AI and whether the current ethics of technology can be applied to the ethical issues of AI. After discussing the differences between conventional technology and AI and justifying our arguments about the need for a new ethical frame, we highlight the bioethical problems arising from current and future AI technologies. We address the Asilomar Principles, the Montreal Declaration, and the Ethics Guidelines for Trustworthy AI of the European Commission as examples of suggested frameworks for the ethics of AI, and we discuss the strengths and weaknesses of these documents. This section ends with our suggestion for the fundamentals of a new bioethical frame for AI. The final section focuses on the ethical implications of AI in health care. Medicine, one of the oldest professions in the world, is and will continue to be affected by AI. The moral atmosphere shaped by the Hippocratic tradition of the Western world was initially sufficient, since it required only that the physician be virtuous. However, by the mid-twentieth century, the impact of technology on medicine had become very evident. On the one hand, physicians were starting to lose their ancient techne-oriented professional identity. On the other hand, a new type of patient emerged: a figure demanding her or his rights and beginning to question the physician’s paternalism. In this situation, it was inevitable that different approaches would replace traditional medical ethics. Currently, these new approaches are challenged once more by the emergence of AI in health care. The main question is how patients and caregivers will be situated within the medical ethics framework in the AI world. This section, AI in health care and medical ethics, suggests answers and sheds light on possible areas of ethical concern. Chess grandmaster Kasparov, one of the most brilliant minds of the twentieth century, tried to defeat Deep Blue, but his defeat was inevitable. Lee Sedol, a young Go master, decided to retire in November 2019 after being defeated by the AI system AlphaGo. Of course, AlphaGo represented a much more advanced level of AI than Deep Blue. Lee Sedol’s reasoning for this early decision was that even if he were number one, AI was invincible and would always be at the top. We now accept that Lee Sedol’s decision was based on a realistic prediction. The conclusion presents projections about AI in light of all the previous chapters and solutions for the new situations that the scientific world will face.
This book contains what two female researchers can see and respond to regarding AI from their specialty field, bioethics. We want to continue asking questions and looking for answers together. We wish you an enjoyable reading experience. Ankara, Turkey
Perihan Elif Ekmekci Berna Arda
Contents
1 History of Artificial Intelligence
  1.1 What Makes an Entity–Human or Machine–Intelligent?
  1.2 First Steps to Artificial Intelligence
  1.3 The Dartmouth Summer Research Project on Artificial Intelligence
  1.4 A New Partner in Professional Life: Expert Systems
  1.5 Novel Approaches: Neural Networks
  References

2 Definitions
  2.1 What is Artificial Intelligence?
    2.1.1 Good Old-Fashioned Artificial Intelligence
    2.1.2 Weak Artificial Intelligence
    2.1.3 Strong Artificial Intelligence
    2.1.4 Heuristics
    2.1.5 Turing Test
    2.1.6 Chinese Room Test
    2.1.7 Artificial Neural Network
    2.1.8 Machine Learning
    2.1.9 Deep Learning
  2.2 State of the Art
  References

3 Personhood and Artificial Intelligence
  3.1 What Makes Humans More Valuable Than Other Living or Non-living Entities?
  3.2 Can Machines Think?
  3.3 Moral Status
  3.4 Non-discrimination Principles
  3.5 Moral Status and Ethical Value: A Novel Perspective Needed for Artificial Intelligence Technology
  References

4 Bioethical Inquiries About Artificial Intelligence
  4.1 Ethics of Technology
  4.2 Does Present Ethics of Technology Apply to Artificial Intelligence?
  4.3 Looking for a New Frame of Ethics for Artificial Intelligence
  4.4 Common Bioethical Issues Arising from Current Use of Artificial Intelligence
    4.4.1 Vanity of the Human Workforce, Handing the Task Over to Expert Systems
    4.4.2 Annihilation of Real Interpersonal Interaction
    4.4.3 Depletion of Human Intelligence and Survival Ability
    4.4.4 Abolishing Privacy and Confidentiality
  4.5 Bioethical Issues on Strong Artificial Intelligence
    4.5.1 The Power and Responsibility of Acting
    4.5.2 Issues About Equity, Fairness, and Equality
    4.5.3 Changing Human Nature Irreversibly
  4.6 The Enhanced/New Ethical Framework for Artificial Intelligence Technology
    4.6.1 Two Main Aspects of the New Ethical Frame for Artificial Intelligence
    4.6.2 The Ethical Norms and Principles That Would Guide the Development and Production of Artificial Intelligence Technology
    4.6.3 Evolution of Ethical Guidelines and Declarations
    4.6.4 Who is the Interlocutor?
  4.7 The Ethical Frame for Utilization and Functioning of Artificial Intelligence Technology
    4.7.1 How to Specify and Balance Ethical Principles in Actual Cases in a Domain?
    4.7.2 Should We Consider Ethical Issues of Artificial Intelligence in the Ethical Realm of the Domain They Operate?
    4.7.3 Would Inserting Algorithms for Ethical Decision Making in Artificial Intelligence Entities Be a Solution?
  References

5 Artificial Intelligence in Healthcare and Medical Ethics
  5.1 Non-maleficence
  5.2 Change of Paradigm in Health Service
    5.2.1 Abolition of the Consultation Process
    5.2.2 Loss of Human Capacity
  5.3 Privacy and Confidentiality
  5.4 Using Human Beings as a Means to an End
  5.5 Data Bias, Risk of Harm and Justice
  5.6 Lack of Legislative Regulations
  References

6 Conclusion
Chapter 1
History of Artificial Intelligence
1.1 What Makes an Entity–Human or Machine–Intelligent?

When we discuss the ethical issues about artificial intelligence (AI), we focus on its human-like abilities. These human-like abilities fall under two main headings: doing and thinking. An entity with the capability to think, to understand the setting, to consider the options, consequences, and implications, to reason, and finally to decide what to do and physically act accordingly in a given circumstance may be considered intelligent. An entity is therefore intelligent if it can think and do. However, neither ability is easy to define. While looking for definitions, we trace back to the fourth century BC to Aristotle, who laid out the foundations of epistemology and of the first formal deductive reasoning. For centuries, philosophical inquiries about body and mind and how the brain works accompanied advancements in human efforts to build autonomous machines. The seventeenth century hosted two significant figures in this respect: Blaise Pascal, who invented the mechanical calculator, the Pascaline, and Rene Descartes, who codified the body-mind dichotomy in his book “A Treatise on Man”. The body-mind dichotomy, known as the Cartesian system, holds that the mind is an entity separate from the body. According to this perspective, the mind is intangible, and the way it works is so unique and metaphysical that it cannot be duplicated in an inorganic, human-made artifact. Descartes argued that the body, on the other hand, was an automatic machine, like the irrigation taps in the elegant gardens of French chateaus or the clocks on church and municipal towers, which were popular in European towns at the time. It was the era when anatomic dissections of the human body became more frequent in Europe. The rising knowledge of human anatomy revealed facts about the heart as a pumping engine and about blood circulating in tubes, the vessels, and it enabled analogies between the human body and a working machine. Descartes stated that the pineal gland was the place where the integration between the mind and the material body occurred. The idea of the body-mind dichotomy was developed further. It survived to our day and, as discussed in forthcoming chapters,
and still constitutes one of the main arguments against the personification of AI entities. The efforts of philosophers to formulate thought, ethical reasoning, the ontology of humanness, and the nature of epistemology continued in the seventeenth and eighteenth centuries with Gottfried Wilhelm Leibniz, Baruch Spinoza, Thomas Hobbes, John Locke, Immanuel Kant, and David Hume. Spinoza, a contemporary of Descartes, studied the Cartesian system and disagreed with it. His rejection was based on his pantheist perspective, which viewed mind and body as two different aspects of the human being that were merely representations of God. Another highly influential philosopher and mathematician of the time, Gottfried Wilhelm Leibniz, imagined mind and body as two different monads that matched each other exactly to form a corresponding system, similar to the cogwheels of a clock. In an era when communication among contemporaries was limited to direct contact or access to one of the few published books, Leibniz travelled all through Europe and talked to other scientists and philosophers, which enabled him to comprehend the existing state of the art both in theory and in implementation. What he inferred was the need for a common language of science so that thoughts and ideas could be discussed in the same terms. This common language required the development of an algorithm in which human thoughts are represented by symbols so that humans and machines could communicate. Leibniz’s calculus ratiocinator, a calculus for reasoning, could not accomplish the task of expressing logical terms symbolically and reasoning on them, but it was undoubtedly an inspiration for the “Principia Mathematica” of Alfred North Whitehead and Bertrand Russell in the early twentieth century. While these philosophers shaped contemporary philosophy, efforts to produce autonomous artifacts proceeded. Jacques de Vaucanson’s mechanical duck symbolized the state of the art of automata and AI in the 18th century. This automatic duck could beat its wings, drink, eat, and even digest what it ate, almost like a living being. In the 19th century, artifacts and humanoids appeared in literature. The best known examples were Ernst Theodor Wilhelm Hoffmann’s “The Sandman”, Johann Wolfgang von Goethe’s “Faust” (part II), and Mary Wollstonecraft Shelley’s “Frankenstein”. Hoffmann’s work was an inspiration for the famous ballet “Coppelia”, which featured a doll that comes to life and becomes an actual human being. These works may be considered the part played by art and literature in elaborating the idea of artifacts becoming more human-like and the potential of AI to have human attributes. L. Frank Baum’s mechanical man “Tiktok” was an excellent example of the intelligent non-human beings of the time, and Jules Verne and Isaac Asimov also deserve mentioning as pioneering writers who had AI in their works (Buchanan 2005). These pieces of art helped prepare the minds of ordinary people for the possibility of the existence of intelligent beings other than our human species.
1.2 First Steps to Artificial Intelligence

In the 1840s, Lady Ada Lovelace envisaged the “analytical engine,” which was designed by Charles Babbage. Babbage was already dreaming about making a machine that could calculate logarithms. The idea of calculators had been introduced in 1642 by Blaise Pascal and was later developed further by Leibniz, but these devices were far simpler than the automatic table calculator, the “difference engine,” that Babbage started to build in 1822. The difference engine, in turn, was left behind when Lady Ada Lovelace’s perspectives on the analytical engine were introduced (McCorduck 2014). The analytical engine was planned to perform arithmetical calculations as well as to analyse and tabulate functions, with a vast data storage capacity and a central processing unit controlled by algebraic patterns. Babbage could never finish building the analytical engine, but his efforts are still considered a cornerstone in the history of AI. This praise is mostly due to the partnership between him and Ada Lovelace, a brilliant woman who had been tutored in mathematics since the age of 4, conceptualized a flying machine at the age of 12, and became Babbage’s partner at the age of 17. Lovelace thought that the analytical engine had the potential to process symbols representing all subjects in the universe and could deal with massive data to reveal facts about the real world. She imagined that this engine could go far beyond scientific calculation and would even be able to compose a piece of music (Boden 2018). She elaborated on the original plans for the capabilities of the Analytical Engine and published her paper in an English journal in 1843. Her elaboration contained the first algorithm intended to be carried out by a computing machine. She had the vision and she knew what to expect, but she and her partner Babbage could not figure out how to realize it. The analytical engine did not answer the “how-to” question. Ada Lovelace’s vision, on the other hand, survived and materialized at the beginning of the 21st century. In the 19th century, George Boole worked on the construction of “the mathematics of the human intellect”, seeking the general principles of reasoning by symbolic logic, merely by using the symbols 0 and 1. This binary system of Boole later constituted the basis for developing computer languages. Whitehead and Russell followed the same path while they wrote Principia Mathematica, a book which has been a cornerstone for philosophy, logic, and mathematics and an inspirational guide for scientists working on AI. The twentieth century was the century in which AI flourished both in theory and in implementation. In 1936 Alan Turing shed light on the “how-to” question, left unanswered by Lady Lovelace, by proposing the “Turing Machine.” In essence, his insight was very similar to Lovelace’s. He suggested that an algorithm may solve any problem that can be represented by symbols. Also, “it is possible to invent a single machine which can be used to compute any computable sequence” (Turing 1937). In his paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” he defined how this machine works. It is worth mentioning that he used terms like “remember, wish, and behave” while describing the abilities of the Turing Machine, words that
were used only for functions of the human mind before. The following sentences show how he personalized the Turing Machine: The behaviour of the computer at any moment is determined by the symbols which he is observing and his state of mind at that moment. It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it.
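To make the idea concrete, the following is a minimal, hypothetical Python sketch of a Turing-style machine: a table of rules keyed on the current state and the symbol under the head determines every step, echoing the description quoted above. The particular rule table, a unary incrementer, is our own illustrative choice and does not come from Turing’s paper.

```python
# A minimal Turing-machine sketch: the machine's behaviour at each step is
# determined solely by its current state and the symbol under the head.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a tiny Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), 0 (stay) or +1 (right).
    """
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Illustrative rule table (not from Turing's paper): append one '1' to a
# unary number, i.e. compute n + 1.
rules = {
    ("start", "1"): ("start", "1", +1),   # skip over the existing 1s
    ("start", "_"): ("done", "1", 0),     # write a 1 at the first blank
}

print(run_turing_machine("111", rules))   # -> "1111"
```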
Alan Turing demonstrated that an AI which is capable of doing anything that requires intelligence could be produced, and the process occurring in the human mind could be modelled. In 1950, he published another outstanding paper: “Computing Machinery and Intelligence” (Turing 1950). This paper initiated with the question: “Can machines think?” and proposed the “imitation game” to answer this fundamental question. Turing described the fundamentals of the Imitation Game as follows: It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The interrogator aims to determine which of the other two is the man and which one is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A…. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, do not listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.
The communication between the rooms was performed by a teleprinter, or the interactions were repeated by an intermediary, to avoid hints being transferred by the tone of voice. After this theoretical explanation of the setting, Turing replaced the initial question about the ability of machines to think with the following ones: “What will happen when a machine takes the part of A in this game?” “Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”. Turing discerned possible strong objections to the idea of thinking machines. The act of thinking has always been attributed to human beings, if not considered their sole unique property, distinguishing them from any other natural or artificial being and securing their superior hierarchical position in the world. Turing wrote in his paper, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.” Turing was aware that arguments against thinking machines would arise on several main grounds. The first is the theological objection, which argues that God has provided man with an immortal soul so that he can think; God did not give part of his soul to any other creature, hence none of them possess the ability to think. The second objection said that thinking machines would endanger the commanding, superior hierarchical position of human beings in the world, and that the consequences of this would be unacceptably dreadful. Turing named this
argument the “heads in the sand” objection, which was entirely appropriate considering its implications. The third one is the mathematical objection, which rests on the idea that machines can produce unsatisfactory results; Turing deflected it by pointing at the fallacious conclusions that also come from human minds. The fourth objection Turing considered was consciousness, or meta-cognition in contemporary terms. This objection came from Professor Jefferson’s Lister Oration and stated that composing music or writing a sonnet was not enough to prove the existence of thinking; one must also know what one has written or produced and feel the pleasure of success and the grief of failure. Turing met this objection by putting forth that most of its proponents “could be persuaded to abandon it rather than be forced into the solipsist position,” and that one should solve the mystery of consciousness before building an argument on it. It is plausible to say that Turing’s perspective was ahead of Lady Lovelace’s at one point. Lovelace stated that the analytical engine (or an artifact) could do only whatever we know how to order it to perform. With this, she ruled out the possibility of a machine that can learn and improve its abilities. Her vision complied with Gödel’s Theorem, which has been read as indicating that computers are inherently incapable of solving some problems which humans can overcome (McCorduck 2014). At this point, Turing proposed to produce a program simulating a child’s mind instead of an adult’s, so that the possibility of enhancing the capabilities of the machine’s thought process would be high, just like a child’s learning and training process. Reading through this inspirational paper, we can say that Alan Turing had an extraordinarily broad perspective on what AI could become in the future. While Turing was developing his ideas about thinking machines, other scientists in the USA were producing inspirational ideas on the subject. Two remarkable scientists, Warren McCulloch and Walter Pitts, are worth mentioning at this point. In 1943, they published a paper titled “A Logical Calculus of the Ideas Immanent in Nervous Activity” (McCulloch and Pitts 1943). This paper referred to the anatomy and physiology of neurons. It argued that the inhibitory and excitatory activities of neurons and neuron nets were grounded basically on the “all-or-none” law, and that “this law of nervous system was sufficient to ensure that propositions may represent the activity of any neuron.” Thus, neural nets could compute logical propositions of the mind. This argument proposed that the physiological activities among neurons correspond to relations among logical propositions; in this respect, every activity of the neurons corresponded to a proposition. This paper had a significant impact on the design of the first digital computer developed by John von Neumann shortly afterwards (Boden 1995). However, the authors’ arguments inevitably inspired the proponents of the archaic inquiries about how the mind worked, the unknowable object of knowledge, and the dichotomy of body and mind.
The authors concluded that, since all psychic activities of the mind work according to the “all-or-none” law of neural activity, “both the formal and final aspects of that (mind) activity which we are wont to call mental are rigorously deducible from present neurophysiology.” The paper ended with the assertion that “mind no longer goes more ghostly than a ghost.” There is no doubt that these assertions constructed the main arguments of cognitive science. The
idea for the elaboration of cognitive science was that cognition is computation over representations, and it may take place in any computational system, either neural or artificial. Seven years after this inspiring paper, in 1950, while Turing was publishing his previously mentioned paper on the possibility of machines thinking, Claude Shannon’s article titled “Programming a Computer for Playing Chess” appeared, shedding light on some of the fundamental questions of the issue (Shannon 1950). In this article, he was questioning the features needed to differentiate a machine that could think from a calculator or a general-purpose computer. For him, the capability to play chess would imply that this new machine could think. He grounded his idea on the following arguments. First, a chess-playing device should be able to process not only numbers but also mathematical expressions and words; what we call representations and images today. Second, the machine should be able to make proper judgments about future actions by a trial-and-error method based on previous results, meaning the new entity would have the capacity to operate beyond “strict, unalterable computing processes.” The third argument depended on the nature of the decisions in the game. When a chess player decides to make a move, this move may be right, wrong, or tolerable, depending on what her rival does in response. A decision that any average chess player would consider faulty will be acceptable if the opponent makes worse decisions in response. Hence a chess player’s choices are not 1 or 0 (right or wrong), but “rather have a continuous range of quality from the best to the worst”. Shannon described a chess player’s strategy as “a process of choosing a move in a given position”. In game theory, if the player always chooses the same move in the same position (a pure strategy), this makes the player very predictable, since her rival would be able to figure out her strategy after a few games. Hence a good player has to have a mixed strategy, which means that the plan should involve a reasoning procedure operating with statistical elements so that the player can make different moves in similar positions. Shannon thought that if we could build a chess-playing machine with these intelligent qualifications, it would be followed by machines “capable of logical deduction, orchestrating a melody or making strategic decisions for the military.” Today we are far beyond Shannon’s predictions of 1950.
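As an illustration of the kind of move selection Shannon describes, here is a minimal, hypothetical sketch in Python: positions receive a numeric score rather than a right/wrong verdict, and a move is judged by looking ahead at the opponent’s best replies (the minimax idea). The toy game model, move set, and evaluation function are our own assumptions, not Shannon’s actual chess representation.

```python
# A minimal sketch of Shannon-style move selection: each position gets a
# numeric score (a continuous range of quality, not just right/wrong), and a
# move is judged by looking ahead at the opponent's best replies (minimax).

def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return the best achievable score for the side to move."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score of the position
    scores = (
        minimax(apply_move(position, m), depth - 1, not maximizing,
                evaluate, legal_moves, apply_move)
        for m in moves
    )
    return max(scores) if maximizing else min(scores)

# Toy stand-in game: a position is just a number, a "move" adds or subtracts
# points, and higher scores are better for the side we play.
legal_moves = lambda pos: [+3, +1, -2]       # hypothetical move set
apply_move = lambda pos, m: pos + m
evaluate = lambda pos: pos

best = max(legal_moves(0),
           key=lambda m: minimax(apply_move(0, m), 2, False,
                                 evaluate, legal_moves, apply_move))
print("chosen move:", best)
```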
1.3 The Dartmouth Summer Research Project on Artificial Intelligence

While Shannon was working on the essentials of a chess-playing machine, other remarkable scientists were working on human reasoning. John McCarthy was one of them, making a significant proposition to use first-order predicate calculus to simulate human reasoning and to use a formal and homogeneous representation for human knowledge. Norbert Wiener, a mathematician at the Massachusetts Institute of Technology (MIT), began to work with the physiologist Arturo Rosenblueth and
Julian Bigelow, and this group published the influential paper titled “Behaviour, Purpose, and Teleology” in Philosophy of Science in 1943. In 1942, Rosenblueth had met Warren McCulloch, who was already working with Walter Pitts on the mathematical description of the neural behaviour of the human brain. Simultaneously, various scientists were working on similar problems in different institutions. In the summer of 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon came up with the idea of bringing together all the scientists working in the field of thinking machines to share what they had achieved and develop perspectives for future work. They thought it was the right time for this conference since the studies on AI had to “proceed based on the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine could be made to simulate it.” The name of the gathering was “the Dartmouth Summer Research Project on AI”. It was planned to last for two months with the attendance of 10 scientists working in the field of AI. At the Dartmouth conference, the term “AI” was officially used for thinking machines for the first time. Each of the four scientists who wrote the proposal put forward his own research topics. Shannon was interested in the application of information theory to computing machines and brain models and in the matched environment-brain model approach to automata. Minsky’s research area was learning machines; he was working on a machine that could sense changes in the environment and adapt its output to them, and he described his proposal as a very tentative one which he hoped to improve during the summer course. Rochester proposed to work on the originality and randomness of machines, and McCarthy’s proposal was “to construct an artificial language which a computer can be programmed to use on problems requiring conjecture and self-reference” (McCarthy et al. 2006). However, the conference did not meet the high expectations of the initiating group. McCarthy acknowledged among the reasons for this setback that most of the attendees did not come for the whole two months; some were there for two days, while others came and left on various dates. Moreover, most of the scientists were reluctant to share their research agendas and collaborate. The conference did not yield a common agreement on a general theory and methodology for AI. McCarthy said they had also been wrong to think that the timing was perfect for such a big gathering. In his later evaluations, he stated that the field was not ready for big groups to work together since there was no agreement on the general plan and course of action (McCorduck 2014). On the other hand, despite these negativities, the Dartmouth AI conference had significant importance in the history of AI because it nailed down the term AI, solidified the problems and perspectives about AI, and determined the state of the art (Moor 2006). It was later realized, however, that the conference hosted a promising intervention: the Logic Theorist, developed by Allen Newell and Herbert A. Simon, which later evolved into the General Problem Solver (GPS). It did not get the attention it deserved at the conference since its main focus was different from that of most scientists’ projects. Simon and Newell came from RAND, an institution where people worked to develop projects for the benefit of the air force.
It was the era of the Second World War and the Cold War that followed, and the military needed any innovation that would give them superiority over the enemy. Simon was working on the decision process of human beings, and he wrote a book titled
“Administrative Behavior,” in which he explained his theory of decision making. His theory of the reasoning process was about drawing conclusions from premises. He argued that people start from different premises in different settings, and that these premises are affected by their perspectives; from this point of view, he held that decisions depend on perspectives. This book was very influential in business and economics, but it also had practical implications for AI. When he started to work with Newell, Simon began to transfer the language Newell had developed for an air-defence set-up to his decision-making process, so that information-processing ideas could be used to comprehend the way air-defence personnel operated. The core idea of Simon was to develop an analogy between the reasoning process of the human mind and the computer. He argued that the way human beings reach conclusions from premises could be imitated by computers. This imitation would provide two significant benefits: a machine that could think like human beings, and an understanding of how the human mind works, which has been a mystery for humanity for centuries. Newell’s unique understanding of the non-numerical capabilities of computers matched well with Simon’s perspectives, and together they developed their Logic Theorist and presented it at the Dartmouth Conference. Their shared focus on modelling the human mind, however, constituted the main reason why the Logic Theorist did not receive sufficient attention there: the contemporary scientists thought what Simon and Newell were presenting was a model of the human mind, something that did not interest the participants. The human mind-computer analogy and the question of whether machines could do what the human mind is capable of in terms of reasoning, which had been discussed widely by Turing and other pioneers, were out of fashion at the time of the conference. However, even though it was not appreciated at the conference, the Logic Theorist was working and was capable of intellectual and creative tasks that had heretofore been unique to human beings. Moreover, the decisions of the Logic Theorist were unpredictable; one would not know what the machine’s decision would be, which is also a very human feature, indeed (McCorduck 2014). Meanwhile, work on game-playing programs was developing. In 1947 Arthur Samuel managed to write a checkers-playing program for computers. Although the first versions of the program could play only at a beginner or average level, by 1961 it could play at the masters’ level. Alex Bernstein, who was a devoted chess player and an IBM employee, invested long hours of hard work in building a computer that could play chess. His program was capable of deciding on the best possible moves after evaluating probable positions in depth. Pamela McCorduck writes in her influential book “Machines Who Think” that Bernstein’s team worked in shifts with another IBM team, which was developing the popular programming language FORTRAN; they had to work in shifts since there was only one computer in the lab, and one group had to be off for the other to work. Although Bernstein’s work gave him popularity, the chess-playing computer could not be more than a mediocre player. 1975 was the year in which a chess-playing computer developed by Richard Greenblatt achieved class C. In 1977 David Slate and Larry Atkin dared to put their Chess 4.5 computer against David Levy, a chess player with a 2375 rating. The result was a disappointment for Slate and Atkin: Levy beat Chess 4.5 and showed “who the boss was.”
The 1950s embraced the frustrations of scientists resulting from the lack of a common understanding of what AI was, what it would become in the future, and the nature and content of basic issues in programming such as memory management and symbol manipulation languages. In short, it is plausible to say that the AI community, excited by the general-purpose computer at the beginning of the 1950s, was busy by the middle of the decade with machine simulation of complex non-numerical systems like games and picture transformations. Minsky describes this period as follows: “work on cybernetics had been restricted to theoretical attempts to formulate basic principles of behaviour and learning. The experimental work was at best ‘paradigmatic; small assemblies of simple ‘analogue’ hardware convinced the sceptic that mechanical structures could indeed exhibit “conditioned” adaptive behaviours of various sorts… The most central idea of the pre-1962 period was that of finding heuristic devices to control the breadth of a trial-and-error search. A close second preoccupation was with finding effective techniques for learning” (Minsky 1968). According to Minsky, AI work proceeded along three major paths. The first aimed to discover self-organizing systems; the second, led by Newell and Simon, focused on the simulation of human thought; and the third aimed to build intelligent artifacts, whether simple, biological, or humanoid. The programs built from the third perspective were called “heuristic programs.” The book “Computers and Thought” by Edward Feigenbaum and Julian Feldman provides a satisfactory collection of the heuristic programs available by the end of 1961; the checkers player by Arthur Samuel and the Logic Theorist by Newell and Simon were the most well known. After 1962 the main focus shifted from learning to the representation of knowledge and to overcoming the limitations of existing heuristic programs. Minsky acknowledges that the main limitation of heuristic programs was the blind trial-and-error method that goes through all possible sequences of available actions. The first thing to do to overcome this problem was to replace this blind process with a smart one, which would go through only the hypotheses with a high potential of relevance instead of all possible ones. This new approach would enable a mediocre checkers program to perform at the master level, not by increasing the speed of the process, but by decreasing the number of choices to go through. The second limitation was the formality of the targeted problem areas, such as games or mathematical proofs; Minsky said the reason for that was the clarity and simplicity of these problems. Another limitation went back to Lady Lovelace’s prediction about a machine that can only do what we program it to do; in other words, it cannot learn. The programs of this era were static, as Lady Lovelace had said: they could solve the problem, but could not learn from solving it. The final limitation emerged from the representation of relevant knowledge to the computer; that was the difference between problem-oriented factual information and general problem-solving heuristics (Minsky 1968). In 1958 Frank Rosenblatt came up with a novel approach, which he called the “neural network.” It was a paradigm-breaking idea, since it aimed to overcome one of the most significant limitations of symbolic AI through machine learning. Symbolic AI required all the knowledge the AI system operated on to be encoded by the human programmers.
It was knowledge-based, and this knowledge had to be provided through the code written for the AI to process. On the other hand,
Rosenblatt’s neural network was designed to acquire knowledge without the intervention of code written to translate it into symbols. Although it was a brilliant idea, the technology was not ready to support it. The available hardware did not have sufficient receptors to acquire knowledge: cameras had poor resolution, and audio receptors could not distinguish sounds such as speech. Another significant problem was the lack of big data; the neural network required vast amounts of data to accomplish machine learning, which were not available in the 1950s. Besides, Rosenblatt’s neural network was quite limited in terms of layers. It had one input and one output layer, which enabled the system to learn only simple things, like recognizing that a shape is a circle and not a triangle. In 1969 Marvin Minsky and Seymour Papert wrote in their book “Perceptrons” that the primary dilemma in neural networks was that two layers are not enough to learn complicated things, but that adding more layers to overcome this problem would lower the chance of accuracy (Marcus and Davis 2019). Therefore, Rosenblatt’s neural network could not succeed and faded away among the eye-catching improvements in symbolic AI, until Geoffrey E. Hinton and his colleagues introduced advanced deep learning AI systems about five decades later (Marcus and Davis 2019). A minimal code sketch of such a single-layer network is given at the end of this section. Patrick Henry Winston names the whole period from Lady Lovelace to the end of the 1960s the prehistoric era of AI. The main achievements of this age were the development of symbol manipulation languages such as Lisp, POP, and IPL and hardware advances such as processors and memory (Buchanan 2005). According to Winston, after the 1960s came the Dawn Age, in which the AI community was full of high expectations, such as building AI as smart as humans, which did not come true. However, there were two accomplishments at this time worth mentioning because of their role in the creation of expert systems: the program for solving geometric analogy problems and the program that did symbolic integration (Grimson and Patil 1987). Another characteristic of the 1960s was the institutionalization of significant organizations and laboratories. MIT and Carnegie Tech, working with the Rand Corporation, and the AI laboratories at Stanford, Edinburgh, and Bell Laboratories are some of these institutions. These were the major actors who took an active role in enhancing AI technology in the following decades (Buchanan 2005).
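The single-layer learning that Rosenblatt’s network performed can be sketched in a few lines. The following hypothetical Python example adjusts one layer of weights whenever a prediction is wrong; the toy training task (logical AND, which is linearly separable) and the learning rate are our own assumptions, not Rosenblatt’s original setup.

```python
# A minimal sketch of Rosenblatt-style perceptron learning: one layer of
# weights maps inputs directly to a yes/no output, and the weights are nudged
# whenever the prediction is wrong. Such a single layer can only learn
# linearly separable patterns, the limitation Minsky and Papert highlighted.

def train_perceptron(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction          # 0, +1 or -1
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy, hypothetical task: output 1 only when both inputs are 1 (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
for x, _ in data:
    act = sum(w * xi for w, xi in zip(weights, x)) + bias
    print(x, "->", 1 if act > 0 else 0)
```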
1.4 A New Partner in Professional Life: Expert Systems

From the 1970s on, the perspective of Bruce Buchanan and Edward Feigenbaum, who suggested that knowledge was the primary element of intelligent behaviour, dominated the AI field. Changing the focus from logical inference and resolution theory, which had been predominant until the early 1970s, to knowledge-based systems was a significant paradigm shift in the course of AI technology (Buchanan 2005). In 1973, Stanford University accomplished a very considerable advance in the AI field: the development of MYCIN, an expert system to diagnose and treat bacterial blood infections. MYCIN used AI to solve problems in a particular domain, medicine, which requires human expertise. The first expert system, DENDRAL,
was developed by Edward Feigenbaum and Joshua Lederberg at Stanford University in 1965 for analysing chemical compounds. The inventors of MYCIN were also from Stanford University; Edward Shortliffe was a brilliant mind who had degrees in applied mathematics, medical information sciences, and medicine. MYCIN had a knowledge base that was provided by physicians at the Stanford School of Medicine. MYCIN’s static knowledge also involved rules for making inferences, and the system allowed the addition of new experience and knowledge without changing the decision-making process. When the physician user loaded data about a particular patient, the rule interpreter worked through the rules and produced conclusions about the status of the patient (a minimal code sketch of this rule-based pattern is given at the end of this section). MYCIN’s expert system also allowed the physician user to ask questions such as “how,” “why,” and “explain”; having the answers enabled the user to understand the reasoning process of the system. In 1979 a paper was published in the Journal of the American Medical Association comparing the performance of MYCIN and ten medical doctors on meningitis cases, and the result was satisfactory for MYCIN (Yu et al. 1979). According to Newell, MYCIN was “the original expert system that made it evident to all the rest of the world that a new niche had opened up.” Newell’s foresight has come true. By 1975, medicine had become a significant area of application for AI. PIP, CASNET, and INTERNIST were other expert systems, and MYCIN was further developed into EMYCIN, a shell stripped of the infectious-disease knowledge so that new expert systems could be built on its inference machinery. Based on the EMYCIN model, a new expert system was produced: PUFF, an expert on pulmonary illnesses. PUFF was also written at Stanford University. Its main task was to interpret the measurements of respiratory tests and to provide a diagnosis for the patient. The superiority of PUFF was that it did not require data input by physicians, which had been a handicap for using AI systems in hospitals because it was time-consuming; the data input came directly from the respiratory test system. Also, PUFF’s area of expertise was a practical choice, since diagnosing a pulmonary dysfunction did not require a tremendous amount of knowledge. Patient history and measurements from the respiratory tests, together with existing medical knowledge, would be enough for diagnosis, and this early model of expert system could process them. PUFF also had substantial benefits for clinical use, such as saving time for physicians and standardizing the decision-making process (Aikins et al. 1983). In 1982 a new expert system was introduced to the medical community: CADUCEUS. It got its name from the symbol of the medical profession, with roots going back to Aesculapius, and the name implied that it was highly assertive in decision making for several medical conditions. CADUCEUS was based on the INTERNIST expert system, which had also proven to be highly effective. CADUCEUS contained a considerable amount of medical knowledge and could link symptoms and diseases; its performance was determined to be efficient and accurate in a wide range of medical specialties (Miller et al. 1982). These were expert systems for medical practice, but expert systems were being developed in other areas too: XCON configured computers, and DIPMETER ADVISOR interpreted oil-well measurements. All these systems could solve problems, explain the rationale that dominated their decision-making process, and were reliable, at
least in terms of the standardization of their decisions. These features were important, since a technology product will be successful only if it is practical and beneficial in daily use. Each of these expert systems had been involved in everyday use to various extents (Grimson and Patil 1987). Despite these positive examples, there were still debates about the power of this technology to revolutionize industries and about the extent to which it would be possible to build expert systems to substitute for human beings in accomplishing their tasks. Moreover, there were conflicting ideas about which sectors would pioneer the development and use of AI technology. Although these were the burning questions of the 1980s, it is surprising (or not) to notice that these questions would still be relevant if we articulated them on any platform related to AI in 2019. 1977 is a year worth mentioning in the history of AI. It was the year when Steve Jobs and Stephen Wozniak built the first personal Apple computer and placed it on the market for personal use. Also, the first Star Wars movie was in theatres, introducing us to robots with human-like emotions and motives, and Voyagers 1 and 2 were sent into space to explore mysterious, unknown planets. Progress in the 1980s was dominated by, but not limited to, expert systems. The workstation computer system was also promising. A sound workstation system should enable the user to communicate in her own language, provide the flexibility to work in various ways, bottom-up, top-down or back and forth, and provide a total environment composed of computational tools and previous records. LOGICIAN, GATE MASTER, and INTELLECT were examples of these systems. These systems inevitably raised questions about the possibility of producing a workstation computer with human-like intelligence, again a relevant issue for today (Grimson and Patil 1987). Robotics was another area of concern. While the first initiatives on robotics took place at the beginning of the 1970s at the Stanford Research Institute, during the 1980s news about improvements started to emerge from Japan and the USA. Second-generation robots were in use for simple industrial tasks such as spray painting. Wabot-2 from Waseda University in Tokyo could read music and play the organ with ten fingers. By the mid-1980s, third-generation robots were available with tactile sensing, and a Ph.D. student produced a ping-pong-playing robot that could beat human players as the outcome of his dissertation thesis. However, none of these were significant inventions, and there were still doubts about the need for and significance of building robots (Kurzweil 2000).
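To make the rule-based pattern of MYCIN-era expert systems concrete, here is a minimal, hypothetical sketch of forward-chaining inference with a simple explanation trace, of the kind that lets a user ask why a conclusion was reached. The rules and facts are invented for illustration only and have no clinical validity.

```python
# A minimal sketch of the expert-system pattern described above: a knowledge
# base of if-then rules, a rule interpreter that fires rules against known
# facts until nothing new can be derived, and a trace that records which
# conditions produced each conclusion. Rules and facts are invented examples.

RULES = [
    # (conditions that must all be known facts, conclusion)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    trace = {}                        # conclusion -> conditions that fired it
    changed = True
    while changed:                    # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace[conclusion] = conditions
                changed = True
    return facts, trace

facts, trace = forward_chain({"fever", "stiff_neck", "positive_culture"}, RULES)
print("recommend_antibiotics" in facts)            # True
print("why:", trace["recommend_antibiotics"])      # the rule conditions used
```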
1.5 Novel Approaches: Neural Networks

By the beginning of the 1980s, neural networks, which had been introduced by Frank Rosenblatt in 1958 and had failed due to technological insufficiencies, began to gain importance once more. The Nobel Prize in Physiology or Medicine in 1981 went to David Hubel and Torsten Wiesel. The work that brought them the prize was their discovery concerning information processing in the visual system. Their study revealed the pattern of organization of the brain cells that process visual stimuli, the
transfer method of visual information to the cerebral cortex, and the response of the brain cortex during this process. Hubel and Wiesel’s main argument was that different neurons respond differently to visual images. Their study was efficiently transferred to the area of neural networks. Kunihiko Fukushima was the first to build an artificial neural network based on the work of Hubel and Wiesel, and it became the actual model for deep learning. The main idea was to give greater weight to a connection between nodes when a stronger influence on the next node was required. Meanwhile, Geoffrey Hinton and his colleagues were working on developing deeper neural networks. They found out that by using back-propagation they might be able to overcome the problem of accuracy that Minsky and Papert had been talking about back in 1969. The driving factors behind the reappearance and potential dominance of neural networks were the introduction of the internet and of general-purpose graphics processing units (GPGPUs) to the field. It did not take long to realize that the internet could distribute code very efficiently and effectively and that GPGPUs were more suitable for applications using image processing. The first game-changing application was “the Facebook.” It used GPGPUs, high-level abstract languages, and the internet together. As its name became shorter by dropping “the,” it gained enormous weight in the market in a short time. It did not take long for researchers to find out that the structure of neural networks and GPGPUs are a good match. In 2012 Geoffrey Hinton’s team succeeded in using GPGPUs to enhance the power of neural networks. Big data and deep learning were two core elements of this success. Hinton and his colleagues used the ImageNet database to train neural networks and acquired 98% accuracy in image recognition. GPGPUs enabled the researchers to add several hidden layers to the neural network, so that its learning capacity was enhanced to embrace more complex tasks with higher accuracy, including speech and object recognition. In a short time, the practical use of deep learning became manifest in a variety of areas. For example, the capacity of online translation applications was enhanced significantly, and synthetic art and virtual games improved remarkably. In brief, we can say that deep learning has given us an entirely novel perspective on AI systems. Since the first introduction of binary systems, it had been the programmers’ primary task to write efficient code for computers to operate on; deep learning looks like a gateway from the limitations of good old-fashioned AI toward strong AI (Marcus and Davis 2019). In 1987 the market for AI technology products had reached 1.4 billion US dollars in the USA. Since the beginning of the 1990s, improvements in AI technology have been swift and beyond imagination. AI has disseminated to so many areas and has become so handy to use that today we do not even recognize it while using it. Before finishing this section, there are two other instances worth mentioning: the game matches and driverless cars. In July 2002, DARPA, the US Defense Advanced Research Projects Agency, which had been working on driverless cars since the mid-1980s, announced a challenge for all parties interested in this field. The challenge was to demonstrate a driverless automobile that could drive autonomously from Barstow, California, to Primm, Nevada. One hundred and six teams accepted the
but only 15 driverless automobiles were ready to start their engines on the day of the event, 13 March 2004. None of the competitors completed the course. After the event, the deputy program manager of the Grand Challenge said that some vehicles were good at following GPS but failed to sense obstacles on the ground, while the ones with sensitive ground-surface sensors hallucinated obstacles or were afraid of their own shadows. It was striking to hear words indicating human features, such as hallucinating or fearing, in his account. DARPA announced a second challenge in 2005, and this time five driverless cars completed the course. It seems one year was enough to teach driverless vehicles not to be afraid of shadows or to overcome their hallucinations. The winning driverless automobile of the 2005 challenge was called Stanley and, after being named the "best robot of all time" by Wired, retired into seclusion at the Smithsonian National Museum of American History. The third challenge took place in November 2007. It was called the Urban Challenge, and as is evident from the name, this time the driverless cars were to drive in a mock city in which other cars with human drivers were circulating while the driverless vehicles competed. Moreover, they had to visit specific checkpoints, park, and negotiate intersections without violating traffic rules. Eleven teams qualified for the challenge, and six of them accomplished the task. Completing the third challenge required more complex systems than detecting obstacles on the ground: the finishers had to have a complex system capable of perception, and they also had to plan, reason, and act accordingly (Nilsson 2009).

Although driverless cars have been a popular area for the implementation of AI in recent times, another area has been in the spotlight: games, particularly chess. Since Claude Shannon's paper on programming a computer for playing chess, this game has been one of the main working areas for scientists studying AI. The reason may be that chess has always been a game for intelligent people, and no one would doubt the intelligence of a chess champion. Hence, an AI entity beating the world chess champion, Garry Kasparov, was a significant victory for AI technology that drew the attention of the entire world. In 1996, when Kasparov and Deep Blue played chess for the first time, Deep Blue won the first game, but Kasparov won the match. One year later, however, Deep Blue became more intelligent and defeated the world champion in a six-game match with two wins, one loss, and three draws. Kasparov's remarks after the game were even more interesting than the defeat itself. Right after the game, Kasparov left the room immediately and later explained that the reason for his behaviour was fear. He said, "I am a human being. When I see something that is well beyond my understanding, I am afraid." While Kasparov was impressed by the creativity and intelligence in Deep Blue's strategy, the creator company, IBM, did not agree that Deep Blue's victory was a sign that AI technology could mimic human thought processes or creativity. On the contrary, Deep Blue was a very significant example of computational power. It could calculate and evaluate 200,000,000 chess positions per second, while Kasparov's capacity allowed him to examine and calculate up to three chess positions per second.
Hence, it was not intuition or creativity that enabled Deep Blue to defeat Kasparov; it was brute computational force.
However, these frank comments from the company did not have a significant impact on the romantic image in people's minds of an AI struggling within itself to beat Kasparov (Nilsson 2009). This image positions Deep Blue as a strong AI entity, with the capability to learn, process information, and derive novel strategies through its own intelligent processes.
References

Aikins, J.S., J.C. Kunz, E.H. Shortliffe, and R.J. Fallat. 1983. PUFF: An expert system for interpretation of pulmonary function data. Computers and Biomedical Research 16 (3): 199–208.
Boden, M.A. 1995. The AI's half century. AI Magazine 16 (4): 96.
Boden, M. 2018. Artificial intelligence: A very short introduction. Oxford, UK: Oxford University Press.
Buchanan, B.G. 2005. A (very) brief history of artificial intelligence. AI Magazine 26 (4): 53.
Grimson, W.E.L., and R.S. Patil (eds.). 1987. AI in the 1980s and beyond: An MIT survey. Cambridge, MA, and London, England: MIT Press.
Kurzweil, R. 2000. The age of spiritual machines: When computers exceed human intelligence. New York, NY, USA: Penguin Books.
Marcus, G., and E. Davis. 2019. Rebooting AI: Building artificial intelligence we can trust. New York, USA: Pantheon Books.
McCarthy, J., M.L. Minsky, N. Rochester, and C.E. Shannon. 2006. A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine 27 (4): 12.
McCorduck, P. 2014. Machines who think: A personal inquiry into the history and prospects of artificial intelligence, 68. Massachusetts: A K Peters, Ltd. ISBN 1-56881-205-1.
McCulloch, W.S., and W. Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115–133.
Miller, R.A., H.E. Pople Jr., and J.D. Myers. 1982. Internist-I, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine 307 (8): 468–476.
Minsky, M. 1968. Semantic information processing. Cambridge, MA, USA: MIT Press.
Moor, J. 2006. The Dartmouth College AI conference: The next fifty years. AI Magazine 27 (4): 87.
Nilsson, N.J. 2009. The quest for artificial intelligence: A history of ideas and achievements, 603–611. New York, NY, USA: Cambridge University Press.
Shannon, C.E. 1950. Programming a computer for playing chess. Philosophical Magazine 41 (314): 256–275.
Turing, A.M. 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42 (1): 230–265. (Turing, A.M. 1938. On computable numbers, with an application to the Entscheidungsproblem: A correction. Proceedings of the London Mathematical Society 43 (6): 544–546.)
Turing, A.M. 1950. Computing machinery and intelligence. Mind LIX (236): 443–460.
Yu, V.L., L.M. Fagan, S.M. Wraith, et al. 1979. Antimicrobial selection by a computer: A blinded evaluation by infectious diseases experts. JAMA 242 (12): 1279–1282.
Chapter 2
Definitions
2.1 What is Artificial Intelligence?

The word "artificial" indicates that the entity is a product of human beings and cannot come into existence naturally without human involvement. An artifact is an object made by a human being, one that is not naturally present but results from a preparative or investigative procedure carried out by humans. Intelligence is generally defined as the ability to acquire and apply knowledge. A more comprehensive definition refers to the skilled use of reason, the act of understanding, and the ability to think abstractly as measured by objective criteria. An AI, then, refers to an entity created by human beings that possesses the ability to understand and comprehend knowledge, to reason with that knowledge, and even to act on it. The term artificial carries an allusion to the synthetic, the imitative, or the unreal. It is used for things manufactured to resemble the real thing, like artificial flowers, but lacking the features innate to the natural one (Lucci and Kopec 2016). This may be why the word "artificial" was avoided in the title of the 1956 volume that McCarthy co-edited with Shannon: "automata studies" was considered a more proper title for a collection of papers on what we would now call AI studies, a name with positive connotations, serious and scientific (McCorduck 2004). When McCarthy introduced the term artificial intelligence in the call for the Dartmouth Conference, some scientists did not like it, saying that the word artificial implies that "there is something phony about it" or that "it is artificial and there is nothing real about this work at all." The majority, however, must have liked it, since the term was accepted and has been in use ever since.

Human beings have been capable of producing artifacts since the Stone Age, about 2.6 million years ago. This ability has improved significantly, from creating simple tools for maintaining individual or social viability to giant machines for industrial production and computers for gruelling problems. However, one common feature of artifacts has been sustained throughout these ages: the absolute control and determination of human beings over them. Every artifact, irrespective of its field and purpose of use, is designed and created by human beings.
Hence, their abilities and capacities have been predictable and controllable by human beings. This feature was preserved in artifacts for a very long time. The first signals of an approaching change in this paradigm became explicit in the proposal for the Dartmouth Conference. There were several issues for discussion, two of which had the potential to change the characteristic feature of artifacts: self-improvement, and randomness and creativity. In the proposal, it was written that "a truly intelligent machine will carry out activities which may be best described as a self-improvement. Some schemes for doing this have been proposed and are worth further study" and that "a fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient" (McCarthy et al. 2006). Self-improvement, creative thinking, and the injection of randomness were the exact words representing ideas that widened the definition of AI, from a computer that could solve codes unsolvable by humans to something more complex. The new idea was to create "a machine that is to behave in ways that would be called intelligent if a human were so behaving" (McCarthy et al. 2006).

Another cornerstone for the definition of AI, which in fact predates the Dartmouth summer course, was the highly influential paper "Computing Machinery and Intelligence," written by Alan Turing in 1950. His paper started with a provocative question to which humankind is still seeking the answer: I propose to consider the question, "Can machines think?" (Turing 1950).
Despite posing this question, Turing did not argue over whether machines can think. On the contrary, he took the machines' capability to think for granted and focused on how to prove it. Turing suggested the well-known Turing test to detect this ability in machines. However, he did not provide a definition and did not specify what he meant by the act of "thinking." After reading his paper, one can assume that Turing conceptualized thinking as an act of reasoning and of providing appropriate answers to questions on various subjects. Moreover, after studying his work, it is plausible to say that he assumed thinking to be an indicator of intelligence. For centuries the ability to think was considered a qualification unique to the Homo sapiens species, as characterized in Rodin's Le Penseur, a man with visible muscles to prove his liveliness and his hand on his chin to show that he is thinking. With his test, Alan Turing not only implied that machines could think but also attempted to provide a handy tool to prove that they can do so. This attempt was a challenge to the thesis of the superior hierarchical position of Homo sapiens over any other living or artificial being, which had been taken for granted for a long time. The idea that machines can think has flourished since Turing's time and has evolved into terms such as weak AI, strong AI, and artificial general intelligence (AGI).

Stuart Russell and Peter Norvig suggested a relatively practical approach to defining AI. They focused on two main processes of AI: the first is the thought process, and the second is behaviour.
This approach makes a classification of definitions possible and enables us to gather varying perspectives together to understand how AI has been construed. Russell and Norvig classified definitions of AI in a four-cell matrix composed of "thinking humanly," "thinking rationally," "acting humanly," and "acting rationally." The distinction among these definitions is not that clear, since acting humanly may require thinking and reasoning humanly, and thinking rationally requires the ability to think in the first place (Russell and Norvig 2016). In terms of thinking, AI is defined by attributing to it either human-like thinking or rational thinking. "AI is an artifact with processors working like the human mind" is the simplest definition of AI from the perspective of human-like thinking. Rational thinking, on the other hand, has a more mechanistic connotation. In this respect, a basic definition might be "AI is computation with the capacity to store and process information in order to reason and decide."

In terms of behaviour, Kurzweil's and Raphael's definitions are worth mentioning. Kurzweil said that "AI is the art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil 1990). Raphael, on the other hand, used different words to express a similar idea: "AI is the science of making machines do things that would require intelligence if done by man" (Raphael 1976). Although similar, these two definitions differ significantly. The first sees AI development as a process of creation, a term with metaphysical connotations, while the latter sees it as a branch of science, which is definitely outside the scope of metaphysics. Moreover, Kurzweil refers to "performing functions," which is very broad since it includes brain functions such as deep thinking, learning, dreaming, or revelation as well as motor functions, while Raphael's definition focuses on "doing things," which evokes actions with concrete results such as problem solving and calculation. Russell and Norvig thought that these definitions reflect the essential differences between two significant entities: weak AI and strong AI. The definitions centred on thinking and acting humanly comply with strong AI, while definitions attributing thinking and acting rationally correspond to weak AI (Russell and Norvig 2016).

The difficulty in defining AI results from the involvement of several disciplines in the process of developing AI technology products. Philosophy, mathematics, economics, neuroscience, psychology, computer engineering, control theory and cybernetics, linguistics, ethnography, and anthropology are some of the disciplines which work in the field of AI. All of these disciplines have their own paradigms, perspectives, terminologies, methodologies, and priorities, which are quite different from each other. Hence, it is not easy to form a common language and understanding. On the other hand, all these disciplines are essential for creating AI, especially strong AI, which makes working together necessary for accomplishing this hard task. Having said this, we will go through the disciplines involved in AI and examine their roles and perspectives.

In this respect, the first discipline to address is philosophy. Ontology and epistemology have served as the theoretical grounds for the development of AI. As described in the History of AI section, theories about the nature of the human body and mind have been guiding the development of thought on the kind and nature of artifacts for centuries.
On the other hand, epistemology has a significant effect on
the creation of AI. For example, Newell and Simon drew on Aristotle's theory of the connection between knowledge and action, and the process of reasoning it describes was used to develop the regression planning system implemented in the General Problem Solver. Likewise, the doctrine of logical positivism, which suggested that knowledge can be acquired from experience in the phenomenal world and that only synthetic statements are meaningful since they generate knowledge that can be verified by testing, pointed to a clear computational method for transferring empirical experience into knowledge. Ancient Greek philosophers established the essentials of formal logic. Keeping in mind that great mathematicians like Euclid were also great philosophers of their time, it is plausible to say that philosophy contributed to the development of AI by feeding mathematics. Algorithms, the incompleteness theorems, tractability, and the theory of probability have been the areas in which mathematics is substantially involved in the development of AI technology.

Economics, which asks questions about the feasibility and financial consequences of AI, is another discipline that drives and shapes AI technology. The idea of utility and positive economic impact is fundamental for economists. Economics gives priority to the areas with the highest probable economic benefit and utility and suggests investing in those areas. More investment pays back with more development. Hence, it is plausible to say that economics is one of the disciplines which determine where AI technology will head in the future.

Neuroscience is a discipline that contributes to AI development by providing knowledge about the configuration and function of neurons. Plato and Aristotle were the first philosophers to develop theories explaining the nature of human knowledge and how it is perceived, and their theories have had a significant impact on the development of the philosophy of mind. With the emergence of experimental physiology, which explores and explains mental operations systematically, and the rise of behaviourism, a view that denies the existence of a metaphysical mind and restricts physiology to observable stimuli and the observable behaviours they elicit, neuroscience has provided objective answers to age-old questions about how the human brain works. Cognitive science works on mind and intelligence by investigating various kinds of human thinking under different controlled conditions. These experiments aim to explore deductive reasoning, the constitution and application of concepts, mental imagination, and analogical problem-solving. This methodology is also used by experimental philosophy, which rose on the argument that philosophical investigations should be based on empirical fieldwork such as physiology. The experiments of cognitive science and experimental philosophy generate knowledge about the operational functions of the mind, while neuroscience provides knowledge about the structure and configuration of the human brain. Cognitive anthropology and ethnography add to this knowledge by providing cognizance of how the thinking process differs across environments, such as cultures.

Linguistics has been another discipline essential to AI technology since it was understood that the main task in the computation of language is conveying the subject matter and context, not just translating sentences into symbols, a task which requires philosophical analysis of language.
Owing to this perspective, modern linguistics and AI have emerged and developed side by side, giving birth to a new field known as computational linguistics or natural language processing.
2.1.1 Good Old-Fashioned Artificial Intelligence

Good old-fashioned AI (GOFAI) represents symbolic (classical) AI. It is grounded in the idea that ideas, opinions, and reflections expressed in the form of language can be condensed into symbols and computed by logic. In this sense, every proposition is represented by 1 or 0, implying that it is true or false. Early GOFAI programs such as the checkers player, the Logic Theory Machine, and the General Problem Solver could compute complex propositions or deductive arguments by propositional logic (Boden 2018). In GOFAI systems, it is the programmer who encodes the knowledge the AI operates on, and that constituted their main limitation. Although some GOFAI systems still find use in particular fields, such as drawing optimized routes in GPS-based navigation systems, their practical importance has remained limited (Marcus and Davis 2019).
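To make the idea of symbolic computation concrete, the following minimal sketch (an invented example, not a reconstruction of any historical GOFAI program) represents propositions as truth values and computes a compound proposition with logical operators, in the spirit of the systems described above.

```python
# Minimal illustration of the GOFAI idea: propositions are reduced to
# truth values (True/False, i.e. 1/0) and complex propositions are
# computed with logical operators. The compound proposition is invented.

def evaluate(p: bool, q: bool, r: bool) -> bool:
    """Compute the compound proposition (p AND NOT q) OR r."""
    return (p and not q) or r

# Enumerate the full truth table, as a classical symbolic system might.
for p in (True, False):
    for q in (True, False):
        for r in (True, False):
            print(f"p={p!s:5} q={q!s:5} r={r!s:5} -> {evaluate(p, q, r)}")
```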
2.1.2 Weak Artificial Intelligence

Weak AI is a term synonymous with narrow AI. Both terms imply that the entity exhibits intelligent behaviour. For example, Deep Blue, which defeated Kasparov, the legendary world chess champion, definitely exhibited intelligent behaviour. Likewise, an AI technology product that can evaluate patients' magnetic resonance images and diagnose brain pathologies even more accurately than human radiologists exhibits intelligent behaviour. The autopilot systems in aircraft, the global positioning systems installed in our smartphones or cars, and the smartphones themselves are systems working with weak AI. We can find several examples of intelligent AI applications that have permeated our daily lives and facilitate our functions in their areas of expertise. Moreover, some versions of these entities attempt to substitute for us and fulfil whole tasks by themselves, sometimes even better and quicker than we do. However, all these systems are experts in one domain. For example, driverless cars, a recent product of AI technology, attempt to accomplish a laborious job that requires high-level skills such as precise recognition of images, immediate calculation of unpredictable variables, and accurate control of the vehicle. Nevertheless, all of these entities are categorized under weak or narrow AI. Their weakness or narrowness emerges from the fact that they can operate in only one domain. A brilliant chess-playing AI system is utterly ignorant of any other function, even at the most basic level. In other words, weak AI refers to intelligent entities that can only perform according to their programs.
Some weak AI entities may have the capability to improve their processing through machine learning methods such as deep learning. However, as long as they do not gain the capacity to perform in an entirely new domain, they are still considered weak AI entities. That is to say, the weak AI label depends on the singularity of the domain in which an AI entity performs; its level of expertise and accuracy in producing outputs, and capacities such as deep learning, are irrelevant. Another aspect of the weak AI paradigm, remarked upon by experts at MIT, is that the structure and processing mechanism of the intelligent entity are not essential. What matters is the output, namely exhibiting intelligence in solving a given problem in a given domain (Russell and Norvig 2016; Lucci and Kopec 2016).
2.1.3 Strong Artificial Intelligence

Strong AI is used synonymously with AGI or human-level AI. The primary paradigm of strong AI was represented by Carnegie Mellon University. It suggests that the process and structure of an intelligent entity are as essential as the output and proposes the human mind and body as a model for AI (Lucci and Kopec 2016). The term strong AI refers to an intelligent entity created by humans which has abilities to function like the human brain, if not better. That is, strong AI can exercise the powers of judgment, conception, or inference and can develop a plan, idea, or design. The term represents entities that possess the ability to perform beyond their initial programs. These entities can decide, argue, debate, and choose the way they act. Their actions can be unpredictable, as they do not reason according to a static, pre-installed program. Success for strong AI is measured by its likeness to human performance. The bioethical issues regarding the personification, autonomy, consciousness, meta-cognition, or responsibility of AI mainly emerge from the prospective consequences of working and living with strong AI entities.

However, the human brain has some shortfalls which limit its functions and efficiency and even cause errors. In developing strong AI, these shortfalls constitute an undesirable subset of human brain functions. Forgetting is one of these defects. Humans are inclined to forget previously acquired knowledge, ways of reasoning, or even memories of important experiences. Since we do not know exactly how a strong AI operates among the multiple layers of networks in the deep learning process, forgetting, or any other unforeseen shortfall, is a possibility that could limit strong AI.

Weak AI systems can perform even faster and better than the human brain in the area where they operate. For example, they can calculate the probable severity and timing of a hurricane more precisely than humans can, or detect a potentially malignant cell which human senses would fail to see. Although these correspond to human brain functions, strong AI implies much more than that. Strong AI involves learning from experience, developing the ability to distinguish right from wrong, and having autonomy in decision-making.
Until recently, weak AI was what was generally understood when speaking about AI. However, with the accomplishments in neurocognition, deep learning, and access to big data, the ultimate goal of AI technology has been redefined: to develop systems that can think, reason, learn, decide, and act like a human being. These properties suggest that strong AI systems would have the capacity to function in a variety of domains and act like autonomous agents capable of self-motivated action. This definition embraces a broad set of ethical issues related to AI. Besides, it is plausible to foresee objections to this claim on the grounds of the low prospects of developing AI artifacts that accomplish all the functions attributed to strong AI. Such objections would refer to the erroneous predictions and over-optimistic estimations about the arrival of strong AI entities made in previous years. However, we are currently witnessing vast improvements in the capacity of human beings to develop AI systems. The number of scholars, researchers, and investors in this field is proliferating, and so is the variety of domains and skills of AI entities. Therefore, it is plausible to say that the state of the art of AI today is subject to significant change as technology improves. AI technology has evolved so fast in recent years that any ability that does not seem possible now might become a reality shortly. These high prospects of advancement are one of the reasons why bioethical inquiries about AI should not be limited to what is possible now but should be broadened to what may be possible.
2.1.4 Heuristics

A heuristic is a method of procedure that often works to solve a problem. The method is generally correct but is not guaranteed to be exact; instead, it is tested and discovered by trial and error. A heuristic differs from an algorithm, which stands for a set of rules that produces a predictable result. The result can be anything, from baking a cake to solving a complicated algebra problem: once the algorithm is applied, the expected result is obtained. With a heuristic method, however, the expected result is presumptive, not certain to happen. Heuristic programs were used widely in the 1960s for developing AI; the General Problem Solver was an early example of the use of heuristics (Lucci and Kopec 2016).
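The difference between an algorithm, whose result is guaranteed, and a heuristic, whose result is only presumptive, can be sketched with a small routing example. The city coordinates and the greedy nearest-city rule below are illustrative assumptions, not drawn from any particular AI system.

```python
# Algorithm vs heuristic on a toy routing problem (hypothetical coordinates).
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def route_length(order):
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))

# Algorithm: exhaustive search, guaranteed to find the shortest route,
# at exponential cost in the number of cities.
best = min(permutations(cities), key=route_length)

# Heuristic: always visit the nearest unvisited city; fast and usually
# good, but the result is presumptive, not guaranteed to be optimal.
route, remaining = ["A"], set(cities) - {"A"}
while remaining:
    nxt = min(remaining, key=lambda c: dist(cities[route[-1]], cities[c]))
    route.append(nxt)
    remaining.remove(nxt)

print("exhaustive:", best, round(route_length(best), 2))
print("greedy    :", route, round(route_length(route), 2))
```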
2.1.5 Turing Test

The Turing test is an imitation game defined by Alan Turing. There are two versions. In the first, the computer is the interrogator and sits behind a curtain. On the other side of the curtain there are two people, one a man and the other a woman. The interrogator asks questions and, depending on the answers given by the man and the woman, tries to figure out which is the man and which is the woman. The man is allowed to lie to or deceive the interrogator (the computer), but the woman must always be truthful.
The parties communicate in writing, for instance by teleprinter, to avoid any hints from the tone of their voices. In the second version, the interrogator on one side of the curtain is a person, and she interrogates a person and a computer and tries to guess which one is the computer. This time the computer is allowed to lie, but the person must be truthful. Turing argued that a computer that can fool the interrogator is as intelligent as a human and, moreover, that this would be proof of the computer's ability to think like a human (Turing 1950).
2.1.6 Chinese Room Test

The Chinese room test was defined by John Searle as a counter-argument to the Turing test. An interrogator asks questions in Chinese. The interrogated person is in a closed room; she does not speak Chinese but has a Chinese rule book with her. When she receives a question in Chinese, she checks the rule book, figures out what the squiggles symbolize, and responds by changing Latin letters into Chinese squiggles according to the book. Although the interrogator receives syntactically correct and semantically reasonable answers, would she be right to think that the person in the room speaks Chinese? According to Searle, the answer is "no." The person in the room merely manipulates symbols correctly but does not know Chinese. Searle thinks that this is what happens in computers: they manipulate symbols correctly, but this does not necessarily mean that they know or understand what they process or what the outcome means or implies (Lucci and Kopec 2016).
2.1.7 Artificial Neural Network

An artificial neural network is a model derived from the human nervous system. It is composed of three basic kinds of layers. The first is the input layer, which corresponds to the dendrites of the neurons in the human nervous system; its units receive the information, operate on it, and transfer their output to the next layer. The second kind comprises the hidden layers, so called because they are hidden from us: we do not know how they process information while they are making the neural network work. As these hidden layers become more complex and deep, more complex activities can be carried out. The output of one layer is the input of the next. The final layer is the output layer, which extracts the outputs from the previous ones. A system which works forward from the input layer to the output layer is said to use forward propagation. There are also back-propagation algorithms that come into use if an error is detected in the output: a back-propagation algorithm propagates the error between the expected and the actual output back through the system and runs so as to minimize the error contributed by each layer (Jain 2016).
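The layered structure and the forward and back propagation steps described above can be illustrated with a minimal sketch. It assumes the numpy library and uses invented toy data; it is meant only to mirror the description, not to serve as a production implementation.

```python
# Minimal numpy sketch of an input layer, one hidden layer, and an output
# layer, trained with forward propagation followed by back propagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 3))                                   # 8 samples, 3 input features
y = (X.sum(axis=1) > 1.5).astype(float).reshape(-1, 1)   # toy target values

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)            # input  -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)            # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward propagation: each layer's output is the next layer's input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back propagation: push the output error back through the layers and
    # adjust the weights to reduce it (gradient descent on squared error).
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_hid
    b1 -= 0.5 * err_hid.sum(axis=0)

print("final mean error:", float(np.mean(np.abs(out - y))))
```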
2.1.8 Machine Learning

Tom Mitchell defines machine learning as follows: "a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." Gary F. Marcus and Ernest Davis suggest a more abstract definition: "machine learning included any technique that allows the machine to learn from data." Machine learning works with two main classes of algorithms: supervised and unsupervised. In the first, the programmer teaches the computer, while unsupervised learning implies that the computer does this task on its own. Machine learning techniques include deep learning, decision trees, genetic algorithms, and probabilistic learning. The deep learning technique is the best known and most widely used.
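As a hedged illustration of the supervised versus unsupervised distinction, the short sketch below uses the scikit-learn library (assumed to be available); the toy feature vectors and labels are invented.

```python
# Supervised learning: the programmer supplies labels (y) with the data.
# Unsupervised learning: the computer groups the data on its own.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]   # toy feature vectors
y = [0, 0, 0, 1, 1, 1]                                  # labels used for supervision

supervised = DecisionTreeClassifier().fit(X, y)          # learns from (X, y)
print("supervised predictions :", supervised.predict([[2, 2], [9, 9]]))

unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0)  # no labels given
print("unsupervised clustering:", unsupervised.fit_predict(X))
```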
2.1.9 Deep Learning

Deep learning is a kind of machine learning which uses multi-layered neural networks and big data. The neural network consists of an input layer, several hidden layers, and an output layer, and is fed with a massive amount of categorized and labelled big data. We will explain deep learning with a simple example. Suppose we want to develop an AI system that will operate in radiology: we want our AI to evaluate patients' mammographies and diagnose breast cancer. For our system to learn to diagnose cancerous images in mammography, we provide a massive number of mammographies from patients who were diagnosed with breast cancer and tell the system that each image is cancerous. Then we also provide images of benign tumours, pre-cancerous formations, and suspicious ones. The network processes all these images and develops an algorithm of its own, so that it can recognize a cancerous image and can also differentiate between benign and pre-cancerous images. There are two particular properties of deep learning:

1. The quality of learning and the robustness of the outputs, correct and accurate diagnoses in our case, depend on the quantity and quality of the big data. The system will learn better and function more accurately as it is fed with more data, and it will perform suboptimally if it has learned from a limited data set.

2. We are ignorant of how the algorithm generated by deep learning operates inside the AI system. That is where the term "black box" comes from. We, outsiders to the AI system, only know the input and the output; we have no idea how the output is produced.

This second property marks the main difference between expert systems and deep learning AI systems. In expert systems, researchers are the ones who write the codes for the computer. The first language for communicating with computers was the binary system. Then binary codes were organized into assembly languages and finally into high-level languages. Programmers can write code in these languages more efficiently and get the computer to operate the way they want.
For example, we can tell the AI system that if qualifications X, Y, and Z are present in an image, then it should diagnose it as cancerous. In such a system, we know precisely how the computer reasoned when it provides us with the diagnosis. However, writing codes for complex tasks that have to work with the highest possible accuracy is too demanding, if not impossible. AI systems with deep learning capacity, on the other hand, can go through the images by themselves and learn how to diagnose without receiving any specific codes instructing them how to do it.
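A hedged sketch of the mammography example follows, assuming the TensorFlow/Keras library is available. The arrays are random placeholders standing in for labelled mammograms (1 for cancerous, 0 for benign), and the small network is illustrative only, not a clinically meaningful model.

```python
# Illustrative multi-layered (deep) network trained on labelled images.
import numpy as np
import tensorflow as tf

# Placeholder data: 100 grayscale 64x64 "mammograms" with binary labels.
images = np.random.rand(100, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(100, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),    # further hidden layers
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=3, verbose=0)            # learning from labelled data

# The network now maps a new image to a probability, without any hand-written
# diagnostic rules; how it reasons internally remains a "black box" to us.
print("predicted probability of cancer:", float(model.predict(images[:1])[0, 0]))
```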
2.2 State of the Art

The development of AI has been parallel to the development of the internet, the introduction of GPUs, and big data. Hence, it is plausible to say that AI is involved in most web-based activities, and AI technology underlies most internet applications, such as search engines, web mining, information filtering, and user modelling in recommender systems and website content aggregators. Various examples of weak AI are so profoundly and pervasively embedded in our lives that we take them for granted and may lose some of our capabilities if our access to them is blocked. For example, it would be tough, if not impossible, to decide which route to take while driving in an unfamiliar town without using online maps, to determine which flight ticket is financially feasible without checking the smart apps designed to show us the best available prices, or to choose a playlist appropriate to our mood without a music-streaming application. Weak AI applications tell us when to stand up, when to go to the gym, how to schedule our day, how many glasses of water to drink, what to buy at the grocery shop, and which movies to watch in our free time. What is more, we have met Sophia, the first artifact to be conferred citizenship, watched her interviews on social media, and discussed her humanness. We have been reading news about driverless cars and discussing who would bear the ethical and legal responsibility if a driverless car were involved in an accident. We see videos of dog-like robots which fail to open a door in their initial trials, yet quickly learn from their failures, discover that they have to press the handle, and eventually open the door and let themselves in. The finance sector has accepted the reality that AI technologies are setting trends in stock markets by trading according to their probability calculations. Robotics, speech recognition, autonomous planning and scheduling for spacecraft, machine translation, game playing, and spam-fighting are some other fields in which AI has already advanced. Currently, we are living in an era in which weak AI has become an ordinary and necessary tool embedded in our daily lives, and we are getting used to the idea of strong AI artifacts, which may become part of our world soon.

Literature and the arts played a significant role in stimulating the human imagination about the possibility of thinking machines in the seventeenth century. Likewise, TV series and movies are playing that role today.
Popular TV series like Black Mirror and Westworld urge us to imagine the consequences of living together with strong AI systems and create the awareness that this is going to happen before long. Current technology already allows human DNA to be used to store vast amounts of data in AI entities, so it would not be surprising to see artifacts containing biological components; likewise, AI will possibly be used for the physical and intellectual enhancement of human beings. AI technology is improving so rapidly that some of the events we watch in these fiction series with astonishment and scepticism, thinking things could never go that far, may already have been realized somewhere. We may have to deal with the ethical and legal issues arising from incorporating AI technology into our lives, bodies, and minds, which may require us to define humanness and humanity all over again.
References

Boden, M. 2018. Artificial intelligence: A very short introduction. Oxford, UK: Oxford University Press.
Jain, A. 2016. Fundamentals of deep learning—starting with artificial neural network. https://www.analyticsvidhya.com/blog/2016/03/introduction-deep-learning-fundamentals-neural-networks/.
Kurzweil, R. 1990. The age of intelligent machines. Cambridge, MA, USA: MIT Press.
Lucci, S., and D. Kopec. 2016. Artificial intelligence in the 21st century: A living introduction. Dulles, VA, USA: Mercury Learning and Information.
Marcus, G., and E. Davis. 2019. Rebooting AI: Building artificial intelligence we can trust. New York, USA: Pantheon Books.
McCarthy, J., M.L. Minsky, N. Rochester, and C.E. Shannon. 2006. A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine 27 (4): 12.
McCorduck, P. 2004. Machines who think: A personal inquiry into the history and prospects of artificial intelligence. Natick, MA, USA: A K Peters, Ltd.
Raphael, B. 1976. The thinking computer: Mind inside matter. San Francisco, CA, USA: W.H. Freeman.
Russell, S.J., and P. Norvig. 2016. Artificial intelligence: A modern approach. Essex, UK: Pearson Education Limited.
Turing, A.M. 1950. Computing machinery and intelligence. Mind LIX (236): 443–460.
Chapter 3
Personhood and Artificial Intelligence
Defining personhood is one of the core issues of bioethics. Being a person entitles humans to rights and responsibilities. It also confers a higher moral value among other beings. Improvements in the medical sciences have been urging scholars to scrutinize and redefine the concept of personhood. Beginning-of-life and end-of-life issues constitute the main areas of discussion in the context of personhood: determining when the embryo or foetus becomes a person changes the description of the act of aborting it, and deciding to withdraw life-sustaining treatment has different ethical and legal implications depending on how personhood is defined. For example, if the embryo is considered a person from the time of conception, then abortion becomes an act of homicide. On the other hand, if the embryo is considered a person only after birth, then the act of abortion would be ethically equivalent to removing any tissue from the body. Likewise, medical interventions related to the end of life have been another area of ethical discussion about personhood. The ethical appropriateness of performing euthanasia on persons who have lost all cognitive abilities, such as those in a persistent vegetative state, or of withdrawing treatment from an end-stage cancer patient because of futility and letting the patient die, are among the frequent instances in which the concept of personhood is discussed. These discussions aim to define which qualia are required to consider an entity (an embryo or a foetus of a human being) a person. Some scholars reject the acceptability of this question and state that being a member of the human species is sufficient to be considered a person. In this respect, any member of the human species has the same ethical value and is entitled to the rights of a person. This perspective makes all of the above medical interventions ethically controversial, since it conceptualizes euthanasia and abortion as murder. If we acknowledge an embryo as a person, then we should not kill it, for its life is of the same value as its mother's. Likewise, we should not produce or dispose of embryos in laboratories for research purposes, and it would not be acceptable to dispose of leftover embryos after accomplishing successful in vitro fertilization.
On the other hand, several other scholars have argued that a set of qualia is needed to be entitled to the full rights of a person. Some of these arguments claim that being a member of the human species undoubtedly provides an ethical value higher than that of other beings; however, members of the human species who lack the qualia needed to be a person would have a lower ethical value than those who have these qualia and would therefore not be entitled to the full rights of persons.

The literature on this debate on personhood has acquired a new dimension with the recent developments in AI technology. It is known that GOFAI and expert systems had several limitations compared with the more advanced AI systems of today and prospective ones like strong AI. For example, the information available to GOFAI or expert systems was limited to their original program; they could not scan, find, and transcend the pitfalls of their operations; their outputs were ineffective without a human being to interpret and use them to produce consequences in real life; and their operational capability was limited to one single domain. Because of these limitations, GOFAI and expert systems were similar to conventional technology products and did not provoke personhood discussions. However, with the improvements in deep learning and robotics, most of these limitations have been removed. We know that advanced AI entities have access to vast amounts of data, an amount that no human being could process in a lifetime, and by using these data they can improve their original programs, enhance the realm of their operations, influence the decisions of human beings, or substitute for human beings in decision-making tasks. Moreover, if they possess the necessary robotic technology, they can even act on their decisions. Understanding, reasoning, deliberating, independent choosing, and acting are among the traits of autonomous agents. It is plausible to assume that these features will be realized in AI technology as we move towards AGI, human-level, or above human-level AI. Hence, the possibility of living together with AI entities that can decide and execute like humans, or instead of humans, in various domains is quite high for the near future. As the importance and coverage of these AI-substituted decision areas increase, discussions on the personhood of AI entities become heated. An AI entity with the capability to process information, reason on the outputs of this process, and act accordingly without requiring human involvement suggests that it has autonomy in its field of operation, which urges us to question its responsibilities and even its rights. Therefore, the discussion on the personhood of AI draws attention once again to the core question about the qualia needed to be entitled to be a person. On the other hand, personhood embodies a higher ethical value for the entity in question by ascribing an exclusive value to its existence. Therefore, attributing personhood to an AI entity would have ethical implications in this respect. Having said this, we can conclude that the core reason for our inquiry into the personhood of AI technology artifacts is grounded in two significant issues:

1. To determine whether an AI entity is as valuable as a person, so that our actions on it or related to it would be justified.
2. To determine the rights and responsibilities of an AI entity that is considered a person.
In this respect, it is plausible to say that the discussion on personhood and AI technology has two dimensions. The first is ethical value and moral status; the question related to this dimension is: if AI entities have the qualia for personhood, do they also have the same ethical value and moral status as any person? The second dimension concerns the ethical implications of an affirmative answer to this question. In this chapter, we will first address the moral basis, ethical value, and moral status of human beings and AI entities. Then we will focus on discussions of the ethical implications of personhood and AI technology.
3.1 What Makes Humans More Valuable Than Other Living or Non-living Entities?

What is the moral basis for the ethical value and status of human beings? Purely anthropocentric or species-specific perspectives would argue that human beings are ethically valuable because they are biological members of the human species. These perspectives would insist that no other entity could ever have the same ethical value as human beings, regardless of features such as intelligence, sentience, or consciousness. Therefore, we will leave these anthropocentric or species-specific perspectives out of this discussion, since they fail to address the personhood query about AI, and begin our discussion with the assumption that there are particular qualia, other than biological species membership, which make human beings morally more valuable than any other entity. Presumably, these would be the same qualia required for personhood as well. If we can identify the features that give human beings their moral status and value, then we can seek these features in any entity and justify attributing the same value, and personhood, to them. Before proceeding further, we should note that the context of this discussion implies a species-neutral standpoint for the term personhood.

Philosophers and scientists have pursued the qualia which give humans their ethical value for a very long time. In this respect, some of them focused on the beginning of life so as to find an exact point in the intrauterine life of the foetus at which the valuable features come into existence. Again, we will exclude perspectives arguing that human life begins at conception and that any fertilized egg has the same ethical value as a human being because of its potential to become a full-grown adult human in the future. The reason for excluding these arguments is not that they are unjustifiable or nonsensical, but that they do not contribute to our reasoning towards answering our core question. We will start our discussion with the ideas of Clifford Grobstein, who suggested three qualia for an intrauterine foetus to be as valuable as any human. The first is its capability "to express behaviour diagnostic of a rudimentary self-state," which refers to a minimal reaction to an external stimulus. The second is to possess "non-behavioural functional processes," such as having a nervous system, and the third is "to be recognized as a self by others."
The third feature has both physical and intellectual connotations. Grobstein suggested going through photos of intrauterine foetuses and defining the stage at which the intrauterine entity takes on the look of a human baby, which, according to him, turned out to be the end of the first trimester. He said the foetus achieves the value of a human (and the full rights of a person) when it "evokes empathy as another self," because "this evocation is objective evidence of the existence of an inner self." Therefore, he argues, the foetus should be granted the same ethical value as a person after that point. Although the arbitrariness of this argument is apparent, it is of the utmost importance for our inquiry into the personhood of AI entities. Let us go back to the beginning of the discussion. Why are we questioning whether AI should have an ethical value similar to that of human beings? Because we, humans, have begun to recognize them as selves. We see them functioning (or having the potential to evolve to function) in a way that only an entity with a feeling of self can do. Grobstein thinks that what grants personhood is others' recognition and acceptance; he sees social interaction as objective evidence of the presence of a self. What he implies by this view is that any entity which is recognized as a self by others should ipso facto have the qualia needed to be a person, namely consciousness. John Harris states that the consciousness in question is not pure awareness; "rather, it is an awareness of awareness" (Harris 1990). The presence of this awareness can be known subjectively by the entity itself. However, objective detection of its existence is more problematic, and social interaction may be the only way to prove its existence. In this respect, the kind of awareness Harris talks about is proven if other people recognize an entity as a self. Considering the deep penetration of technology into our lives, we can plausibly assume that most readers of this book have been in some kind of social interaction with an AI product or have seen videos of sensible conversations between a human being and an AI robot. Hence, it is not purely speculative science fiction, but rather a tangible actuality, that AI technology either has produced or will produce entities which we recognize as selves. This recognition urges us to inquire about the ethical value of these entities. Moreover, the improvements in robot technology make it easier to acknowledge AI entities with human-like physical properties as persons. In this respect, it is plausible to say that Grobstein's third feature constitutes an unjustified but practically useful ground for our core inquiry.
3.2 Can Machines Think?

The first sentence of Alan Turing's influential paper "Computing Machinery and Intelligence" raised the question of whether machines can think. Although he asked about thinking, his inquiry implied a more profound question: whether machines can work identically to the human brain. In his paper, Turing preferred to leave this question unanswered and replaced it with another one about the "imitation game," with the claim that if a computer can win this game, the first question is answered affirmatively.
What Turing argued with his theoretical imitation game was that if a machine can act like a human being by imitating the reasoning of the human brain, then this would be proof of its capability to think. We should note that the idea behind the imitation game is compatible with Grobstein's third quale. Turing thought that if a machine could interact with the interrogator in such a way that the interrogator would feel she is communicating with another human, not a machine, this would prove that the machine is capable of displaying the properties of a self. Hence, we can say that Turing made a smart suggestion: on the one hand, he intended to avoid contradictions based on the technical possibility of machines being able to think, and on the other, he sought to demonstrate that possibility through a practical experiment. Since Turing's suggestion, inquiries about the possibility of thinking machines have been raised at some point in every discussion of the ethics of AI. Although Turing's suggestion was quite persuasive for some people, not all scholars agreed with him. A debate has been going on since he first launched his ideas, and various parties have asserted several arguments to support or defeat the possibility of an AI that can think like a human being.

John Searle (b. 1932) raised the best-known counter-argument against Turing with his thought experiment, the Chinese room test. Searle persuasively stated that if we provide the necessary codes to an entity (either human or AI), it might successfully translate a text into another language without actually understanding what that text means. He drew an analogy between the ignorance of the person in the Chinese room and the inability of computers to think, asserting that providing the right answers does not necessarily prove that the operating agent comprehends the context. The conclusion is that the process in the Chinese room is not "thinking"; it is "decoding." He thought the Chinese room test implies that AI entities may simulate the functions of the human brain by creating outputs similar to those humans would produce, but this does not prove that they go through the same experiences and use the same capacities while doing so. For example, an AI which communicates and says that it is sorry does not necessarily experience being sorry or comprehend the meaning of sorrow. Likewise, if communication with an AI entity makes us feel as if we are talking to an actual person, this does not necessarily prove that the entity experiences phenomenal self-awareness; it rather shows us that it exhibits functional self-awareness. The Chinese room test also casts doubt on Grobstein's third criterion by suggesting that recognizing an AI entity as a self does not necessarily indicate that the entity is isomorphic with humans.

We should note that the Chinese room test has several pitfalls. The most significant pitfall emerges from its first assumption: that translation is decoding. Searle stated that any entity (human being or AI) with the necessary codes for translation could correctly translate one language into another without having any idea about its context. We can see the weakness of this argument merely by recalling our experiences with first-generation online translation programs. Many of us have experienced the consternation of reading a machine-translated sentence on our computer screen that makes no sense. Hence, it would be tough, if not impossible, for any entity to produce exact and meaningful translations without any idea or comprehension of the context.
Another pitfall is the incompatibility between the premises and the inferences of his reasoning. To be more explicit, let us imagine that a human is in one Chinese room and an AI entity in a second one. We give them the same texts and receive their answers, and we see that both answers are the same. We ask both of them whether they understand the meaning of what they have written down, and both say "yes" and continue to give reasonable answers to our questions. In this situation, and without any additional information, on what basis can we argue that the human and the AI entity are going through different processes while producing their outcomes, and that one process is more valuable than the other? We can elaborate on our thought experiment by assuming that we received wrong translations from both rooms, asked whether they were sorry for their mistakes, and both rooms answered, "Yes, I am sorry for the wrong translation. I should have been more careful." On what rational grounds can we prove whether the answers were a genuine expression of feelings stimulated by comprehension of the context, or the opposite? Our point is that the Chinese room experiment does not provide us with any means to infer that AI entities cannot think.

Turing's argument gained supporters too. One of them is the functionalist view, which claims that a "mental state is the causal condition between input and output," so that any two systems which provide the same output from the same input should have isomorphic causal conditions and mental states (Russell and Norvig 2016). Again, a theoretical experiment can illustrate the appropriateness of the functionalist view. This theoretical experiment is called "the brain replacement experiment." In this experiment, we assume that the mystery of the human brain has been solved, so that we have the information about the causal conditions between inputs and outputs and about the structure and physiology of the neural networks operating on them. Based on this knowledge, it would be possible to produce an artificial neural network that would function like a human brain. At every step, we remove one very tiny piece of the human brain and replace it with its tiny artificial counterpart. This smooth replacement continues until artificial networks have replaced the whole human brain. The functionalist view claims that when the artificial neural network substitutes for the whole human brain, the mental states and the causal conditions between inputs and outputs will remain the same as in the original human brain. This result would show that mental states are brain states and that they can exist in AI artifacts as well as in human beings.

The debate on machines' capability to think like a human brain has been going on since Turing raised the first question. Although novel theoretical experiments have enriched it, no conclusion has been reached so far. However, we think that there is another essential inquiry which requires attention together with this unsolved one. What is the importance of the inquiry about the capability of AI to think? Why have philosophers, scientists, and other parties attributed importance to this question? What would an affirmative answer imply? Thinking is the process of considering or reasoning about something by using memory. In this respect, there is no doubt that computers can think, since their existence depends on processing information by using their memories.
This answer indicates that what we inquire when asking if AI can think is not the capability of just processing information, but something more particular. We are asking, “if an AI can have the subjective experiences of a human while thinking?” These subjective
experiences are the intrinsic nature of experience, which requires consciousness, intuition, intention, and feelings. These constitute the qualia for being a person. However, human beings have neither answered this core question nor solved the mysteries of the human brain until now. We still do not know how consciousness, self-awareness, intuition, creativity, feelings, or sentience work. In turn, it is plausible to say that arguing about the presence, or the possibility of the presence, of an artificial mind functioning just like the human mind is only futile speculation at this point. Moreover, if AI produces the same outputs as the human mind by going through a different process, how can we justify the superior value of the way the human mind works over AI processes? While these discussions lead us to unresolved speculations, novel perspectives have been emerging with the prospect of providing new ways of thinking on this issue.
3.3 Moral Status

"An entity has moral status if and only if its interests morally matter to some degree for the entity's own sake" (Jaworska and Tannenbaum 2018). In other words, moral status stands for values that necessitate treatment with exceptional diligence. Various theories identify the grounds of moral status differently. From a utilitarian perspective, for an entity to have moral status it should have the capability to experience utility (pain and pleasure), so that its interests can be included in the calculation when determining which action will create the greatest utility for all. From a non-utilitarian perspective, moral status rests on grounds other than the entity's interest in utility. For example, according to Immanuel Kant, who developed the non-utilitarian, deontological theory of ethics, moral status emerges from the capability to deliberate moral laws and to act according to them. The species-specific perspective, another non-utilitarian approach, argues that any human being has moral status by virtue of being a member of the human species. Depending on the ground identified, degrees of moral status may or may not be justified. There are arguments to support the view that moral status comes in degrees. For example, the species-specific perspective states that human beings have the full and highest moral status, but that there are other living beings in nature which also have moral status, such as animals, plants, or the environment as a whole. Once an entity is shown to have moral status (a status equal to that of rational human beings), others acquire moral obligations towards it, such as not to treat it in ways which harm or destroy its existence, to treat it fairly, and to aid it when possible. Although human beings possess a higher moral status, other beings with lower degrees of moral status also deserve respect. Our moral obligations vary in degree, depending on the moral status of the entity in question. An agent may smash and destroy a stone if it feels like doing so, but it should not harm a plant or an animal in the same way. It would be morally acceptable to shoot and kill a horse when it is severely hurt, but this action would have no justifiable grounds if the subject were a human being. The theories which ground the moral status of a being on something other than species membership may or may not justify moral
status in degrees. For example, some philosophers interpret Kant's perspective to be compatible with moral status coming in degrees, even for human beings. They argue that Kant's appreciation of the utmost value of the capability for rational thinking to deliberate universal moral laws implies that humans who temporarily or permanently lack this capability should have a lower moral status than those who have it. Regardless of our deliberation about the justifiability of moral status in degrees, there is another important aspect related to the concepts of justice and fairness. The first principle of justice, articulated by Aristotle more than two thousand years ago, states that we should treat equals equally and non-equals non-equally. Hence, no matter which grounds we hold for moral status and whatever our perspective on the existence of moral status in degrees, once we agree that two entities have the same moral status (full, or of a lower degree than full), we should treat these two entities equally. Nick Bostrom brings a novel interpretation to this ancient, but still valid, principle of justice by associating it with the principle of non-discrimination. His argument rests on the fact that treating equals differently is not only unjust but also discriminatory, and thus morally unjustifiable and unacceptable. Bostrom's perspective offers a way out of the deadlocked discussions about the ethical value and moral status of AI technology.
3.4 Non-discrimination Principles

Nick Bostrom states that for an entity to have moral status, two main criteria are required: sentience and sapience. He conceptualizes sentience as "the capacity for phenomenal experience," roughly the ability to feel pain, and sapience as self-awareness and being a responsive agent. The two principles of non-discrimination are grounded in this statement (Bostrom and Yudkowsky 2014). The principle of substrate non-discrimination claims that the substrate of an entity makes a difference, in terms of ethical value, only if it causes a difference in the capacity for sentience and sapience. In other words, if two entities, one biological and one artificial, have the same capacity for sentience and sapience, then they have the same moral status. It is morally irrelevant whether they are made of neurons and neurotransmitters or of silicon and semiconductors. The principle of ontological non-discrimination follows similar reasoning. It reads as follows: any two entities with the same capacity for sentience and sapience have the same moral status regardless of how they come into existence. According to this principle, being a member of the human species does not provide human beings with a superior moral status. The presence of the two criteria is the only morally relevant factor indicating that an AI entity can have the same moral status as a human being.
These two principles tell us that if an AI entity has the same sentience and sapience as human beings, then it would plausibly have the same moral status as all other beings with the same properties, namely humans. It is beyond doubt that Bostrom's approach helps us to overcome the arguments discussed in the previous pages. However, before proceeding to the ethical implications of these principles, we still need to reflect on two questions:

1. In the introduction of this chapter, we said that the first dimension of discussions about AI and personhood concerns the moral status and ethical value of AI. Bostrom's principles suggest that AI entities may be entitled to the same moral status as humans, but they do not say anything about ethical value. The question is: does the equal moral status of AI entities ipso facto embody equal ethical value?
2. Human-level moral status encompasses not only rights but responsibilities as well. If AI entities have the same moral status as humans, do they bear the same responsibilities?

In the next section, we will seek answers to these two questions.
3.5 Moral Status and Ethical Value: A Novel Perspective Needed for Artificial Intelligence Technology

The term value indicates the worth of something among its kind. Ethical value represents the worth of an entity among other entities in the ethical realm. The ethical value of human beings is generally considered supreme compared to all other beings. John Harris notes that our inclination to save a human being instead of a dog, when both of their lives are in danger and we cannot save them both, shows this fact (Harris 1990). In addition to their supreme ethical value, human beings also possess a supreme moral status. Philosophers have tried to justify the supreme moral status of humans by referring to various arguments, such as anthropocentric or species-specific perspectives, or unique features of humans such as intellect and self-consciousness, some of which were discussed above. No other entity had been a candidate for the same ethical value as humans until we were faced with the possibility of human-level or above human-level AI technology. Nor had we faced an entity that has the same supreme moral status as human beings but has, or may have, a lower ethical value. However, since AI entities are now subject to discussions about personhood and equal moral status with humans, we should discuss whether supreme (human-level) moral status ipso facto co-exists with supreme (human-level) ethical value when AI entities are in question. That is to say, do we have to treat AI entities with justified equal moral status the same way as we treat human beings? Are they of equal ethical value because of their equivalent moral status? We can conduct a thought experiment to find the answers to these inquiries. Before proceeding to the experiment, we should note that the AI entity in question has, again theoretically, the same sentience and sapience as a person. Now let us picture ourselves as the conductor of the trolley, which is famous for causing a mental struggle
for anyone who has to decide whom to save when it is inevitable that the trolley will kill either one person or other person(s) on the two paths in front of it. In our thought experiment, we have one human being on one track and, on the other, one AI entity that has the same moral status as the human being. Again, the brakes of the trolley do not work, and it is inevitable that one of the victims will be killed. What would we do? Would we struggle as much as we do when only human lives are at stake? Alternatively, would it be easier to decide and sacrifice the AI entity to save the human? More importantly, how could we justify our decision ethically? Current common sense steers us to save human lives in any circumstance in which we cannot save both entities when their existence is at risk. On the other hand, none of us has faced such a dilemma in real life yet. We have not experienced what it would be like to live in a society together with human-level or above human-level AI entities. Besides, we also lack the experience of sincere heart-to-heart interaction with an AI entity. We do not know how we would feel about sacrificing a loved one if its essence were silicon and wires instead of flesh and blood. Moreover, we do not know to what degree human-like AI entities would penetrate our social, economic, and cultural lives. Maybe they would be so integrated into everyday life that we would not dare to sacrifice them, since doing so would put the sustainability of the economy or of production at high risk. Our point is that the realization of human-level or above human-level AIs, and living together with them, will possibly create a very new form of society with new dynamics of production and governance and unpredictable new forms of interaction and relationships. Therefore, assessments based on current common sense, which emerges from the paradigms of today, would be inadequate for the ethical issues of that new world. On the other hand, we can still reflect on them.

In light of all of the discussions above, it is plausible to say that if technology succeeds in producing AI with sentience and sapience equivalent to human beings, there would be no justifiable grounds to deny that its moral status would be equal to the moral status of human beings. On the other hand, ethical value has some subjective connotations. These emerge from the depth and scope of the relationships and interactions between agents. Our moral obligation to treat entities with equal moral status equally would guide us to some extent, but the ambiguity of ethical value entails further consideration of the issue.

Moral status not only demands that we treat agents in specific ways but also imposes certain responsibilities on those agents. That is why the second part of this discussion will address what responsibilities AI entities would bear once we agree that they have the same moral status as human beings, or personhood. The responsibilities of an agent may lie in the professional realm or the ethical realm. However, these two realms are not strictly apart from each other; on the contrary, they frequently overlap. For example, competence in a profession is a responsibility in both the professional and the ethical realm. A surgeon should prove her competence in the operating room to complete her residency and to meet the moral responsibility not to harm any patient. On the other hand, an AI entity with the same moral status as human beings would be involved in all domains of life, not only in one profession. Therefore, it faces
several moral issues like any other person. Should we refer to Aristotle's principle of justice and argue for the plausibility of holding all entities with the same moral status accountable for the same responsibilities? In other words, would AI entities with the same moral status as human beings have the same moral responsibilities as well? Based on Bostrom's non-discrimination principles, we may answer this question affirmatively: if two entities have the same sentience, sapience, and moral status, there would be no justifiable reason to burden them with different moral responsibilities. However, similar to our discussion about ethical value, giving full moral responsibility to AI entities creates some discomposure in current common sense. This discomposure is mainly grounded in the idea that if an agent has full moral responsibility, then it has full autonomy in the decision-making process as well. We are not ready to transfer the authority to decide to AI entities, especially in instances when human lives may be at risk. This reluctance stems from unpleasant experiences in recent history. One of the best-known is the Serpukhov-15 early warning centre of the Soviet military. On 26 September 1983, its warning system rang alarms because it had detected a ballistic missile launch from the USA. Stanislav Petrov, who was in charge of activating the nuclear defence system, judged that the early warning technology was wrong and thereby averted a tragedy for the whole world. Similar examples are taken to prove the inappropriateness of transferring such authority to AI entities. We discuss the implications of AI autonomy and how we can design a new frame in the chapter on a new ethical frame for AI technology. Here, therefore, we confine ourselves to underlining two facts about this issue. The first concerns the scantiness of our perspective because of the current paradigm. When we give examples of wrongful decisions by AI entities and try to build our reasoning on them, we are making a huge mistake, since human-level or above human-level AI technology is nothing like the technology in our examples. We are talking about a level of AI that we have not witnessed yet, a technology with the power to change the whole paradigm of the world. Therefore, we should expand our perspective as much as possible and avoid the easy path of relying on examples from recent history. The second fact is that recognizing the moral responsibilities of an agent does not necessarily mean giving it full authority over decision-making. Various decision-making methods with the participation of several partners (human or AI) can be developed instead of excluding AI entities. In short, we can use our creativity to build decision-making strategies which embrace the strengths of both new technologies and human beings.
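One way to picture such shared decision-making is a small, hypothetical sketch in which the AI component proposes an action with a confidence estimate, but irreversible, high-stakes actions are executed only after explicit human confirmation. The function names, thresholds, and the example scenario are illustrative assumptions of ours, not a description of any deployed system.

```python
# Hypothetical human-in-the-loop gate: the AI proposes, a human decides
# whenever the proposed action is irreversible or the model is uncertain.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str          # e.g. "launch countermeasures"
    confidence: float    # the system's own confidence estimate, 0.0-1.0
    irreversible: bool   # can the action be undone once executed?

def decide(proposal: Proposal, human_confirms) -> str:
    """Execute automatically only reversible, high-confidence proposals;
    otherwise defer to human judgment."""
    if proposal.irreversible or proposal.confidence < 0.99:
        if human_confirms(proposal):
            return f"executed after human confirmation: {proposal.action}"
        return f"withheld by human operator: {proposal.action}"
    return f"executed autonomously: {proposal.action}"

if __name__ == "__main__":
    alarm = Proposal("launch countermeasures", confidence=0.7, irreversible=True)
    # Here the 'human' declines, judging the alert to be a false alarm.
    print(decide(alarm, human_confirms=lambda p: False))
```

The design choice is simply that recognizing an artificial agent's competence does not require handing it unilateral authority; the gate lets both partners contribute what they do best.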
References

Bostrom, N., and E. Yudkowsky. 2014. The ethics of artificial intelligence. In The Cambridge handbook of artificial intelligence, eds. William Ramsey and Keith Frankish, 316–334. Cambridge University Press.
Harris, J. 1990. The value of life: An introduction to medical ethics. Routledge & Kegan Paul.
Jaworska, A., and J. Tannenbaum. 2018. The grounds of moral status. In The Stanford encyclopedia of philosophy (Spring 2018 Edition), ed. Edward N. Zalta. https://plato.stanford.edu/archives/spr2018/entries/grounds-moral-status/.
Russell, S.J., and P. Norvig. 2016. Artificial intelligence: A modern approach. Essex, UK: Pearson Education Limited.
Chapter 4
Bioethical Inquiries About Artificial Intelligence
In this section, we will begin by outlining the historical evolution of the ethics of technology. Then we will discuss whether contemporary perspectives in the philosophy and ethics of technology are adequate for AI technology. To do this, we will inquire whether the concept of AI technology bears any fundamental differences compared to the conventional concept of technology. After identifying these differences and understanding the insufficiency of existing ethical perspectives on technology to address the ethical issues of AI technology, we will proceed to discuss how a new frame of ethics for AI technology can be achieved and what specifics it should bear.
4.1 Ethics of Technology

Although philosophical thinking about technology flourished in the second half of the nineteenth century, its origins can be traced back to ancient Greek philosophy. Plato and Aristotle were the first to think and reflect on the ontology and ethics of technology, and their works sowed the seeds of the contemporary philosophy of technology. Their first concern was to identify the distinction between natural and artificial beings. Plato defined this distinction by referring to two terms: Physis and Poiesis. Physis indicates nature, that which does not need human interference in order to be. Poiesis, on the other hand, is used for artifacts, the entities which require human interference to come into existence. Art, craft, and social conventions constitute products of Poiesis. Aristotle added to Plato's definitions by saying that Physis is the "primary underlying matter in each case, of things which have in themselves a source of their movements and changes." In contrast, Poiesis comes into existence with the interference of techne; it has no "innate tendency to change." The main idea of these definitions is that Poiesis lacks the inherent, genuine capability of self-generation, motion, and self-induced change, which only Physis has, and that Poiesis can come into existence only as a product of human activity.
The etymological roots of the term technology also come from ancient Greek. The term combines two separate words: techne and logos. Techne means art, skill, or craft. In ancient Greek philosophy, techne had a broad realm covering medicine, farming, geometry, policy, music, carpentry, and cookery. According to Plato, techne refers to an act of a human; it also implies the episteme of the right way of doing the act. Hence, techne contains two connotations: 1. the idea before the existence of an artifact, and 2. the essence of the thing before making it. The term logos indicates the principle of order and knowledge that determines the world. Plato thought logos was immanent in the world and was also the transcendent divine mind, which provides reason. Technology, then, means the knowledge or principle of bringing a craft, an art, or a skill into existence. In Plato's perspective, all Physis and Poiesis exist as ideas before their actual real existence. Ideas contain the essence of everything, and techne is the objective knowledge needed to bring the essence (idea) into existence. In this respect, there is no difference in the essence of Physis and Poiesis; both are developed from an idea that defines their essence. Techne includes the purpose and meaning of artifacts, and whatever comes into existence, whether natural beings or human-made artifacts, comes into existence with its telos. In other words, the means and ends of a being come together. According to Plato, humans are a part of the telos of nature. What they create (technology) is not different from the entities which come to be of themselves (natural beings). The world has a telos; we discover that telos, understand the potentials of nature and of human doing, and play our part in this complete plan. Humans are not the masters of the world; on the contrary, they are potentialities who act purposefully to bring artifacts into existence in order to fulfil the telos of the universe. This perspective has three assertions concerning the ethics of technology: 1. natural beings and artifacts have the same ethical value; 2. artifacts are created to fulfil the telos of the universe; 3. human beings do not have a higher hierarchical ethical position compared to other natural beings or artifacts.

Thinking about technology has changed over the centuries. The recognition of alchemy by the Western world in the mid-twelfth century cultivated the idea that humans can produce products similar to, or better than, natural entities. This idea uplifted Poiesis to the level of Physis for a time; however, such an understanding was too perilous for the conservative religious authorities of the Middle Ages because it menaced the ultimate creative power of the divine God, and hence it was demolished before the end of the thirteenth century (Franssen et al. 2018). The ethical perspective on technology began to change during the Enlightenment period in Europe. Philosophers of this period conceptualized the place of humans in the world significantly differently from the ancient Greeks. Humans were no longer considered a part of the world and its telos. Instead, they were positioned as agents who learn and understand the secrets of nature, and what they learned provided them with the power to conquer and rule the world. This new world perspective appreciated humans' creative efforts to produce artifacts. The world was imagined as analogous to a mechanistic clockwork in which gears turn in harmony; what humans had to do was to understand how these gears work. The concept of knowledge changed, as did the concept of nature.
Physis was no longer understood as something that has
its telos and exists without human interference, but rather as a stock of raw material to be used for the production of goods. Two very influential philosophers of the time, Descartes and Bacon, reflected this era's changing perspective on the place of humans and technology in their writings. For Descartes, humans were to be the masters and possessors of nature via scientific knowledge. Francis Bacon published his influential book New Atlantis in 1627, in which he glorified technology and knowledge and asserted that knowledge was power over nature. These perspectives no longer saw techne as a means to realize the potentialities of nature, but as a tool for turning scientific knowledge into utility in order to realize human intentions. They also contained the firm belief that scientific knowledge and technology were the very means to carry a society to perfection. This understanding changed the central question of knowledge. In Plato's view, knowledge sought "what being is"; from the new perspective, knowledge asked "how does nature work?" Knowledge was no longer limited to discovering the nature of things. On the contrary, it embraced the episteme to create artifacts, together with their telos and meanings (Feenberg 2003).

In Greek philosophy, techne had an inherent ethical value because of its telos, its essence, and its genuine place in the system of the universe. However, the view which gained dominance during the Enlightenment period did not comply with this perspective. The new perspective paved the way for the instrumentalist philosophy of technology, which sustained its dominance until recently. The instrumentalist perspective shares the Enlightenment period's positive approach to technology because of its potential to enhance human capabilities and functions. According to instrumentalists, every technological artifact is designed and produced to fulfil a function that emerges from a need. This function may be something that human beings find too hard to accomplish physically or mentally, or, even though human beings are capable of performing it, it may be thought that artifacts would do it more precisely, accurately, or quickly. Either way, the proposed function enhances human beings' capability to accomplish something. The ethical aspect of technology emerges from the fact that technological artifacts are incapable of functioning by self-motivation, as defined clearly in the Physis and Poiesis discussion. The instrumentalist philosophy of technology states that the function of an artifact necessarily refers to the intentions of the human beings who use it. Therefore, technological artifacts have no inherent ethical value in themselves. This perspective considers technological artifacts as mere tools that may be used for various purposes and ends. In this respect, artifacts are a-moral entities; in other words, they exist outside the realm of ethics. They may be ethically good or bad according to the purpose they are used for or the consequences of their usage. For example, the function of a drone is to fly without a human pilot. In the ethical realm, a drone may be considered ethically good, or moral, if it is used for humanitarian purposes, and bad, or immoral, if it is used to harm innocent civilians. A pilotless aircraft is a-moral until a human uses it, and two identical drones may have different moral values due to the intentions of the people engaged with them (Feenberg 2003).
Technological determinism has a different perspective from technological instrumentalism. Although instrumentalism and determinism agree that technological artifacts are value-free by nature, instrumentalism suggests that human beings are the subjects who control the progress and development of technology. Technological determinism, on the other hand, argues for the autonomous nature of technology, which implies that although human beings are inevitably involved in the development and application of technology, they lack the freedom to decide and determine in which direction technology will evolve. The autonomous nature of technology implies that societal norms and values do not affect the significant trends of technology. However, technology affects society: it shapes societal norms and values and the main paradigms about the course of life. Societies, or humanity in general, have to adapt themselves to what technology imposes. Technological determinism sees technical progress as a linear track from simple to complex and acknowledges the predetermined pattern of this track. According to technological determinists, it is the internal technical logic, independent of social influence, that determines in which direction technology will evolve, and this determination will inevitably shape the construction of society beyond humans' choice and intention.

Technological substantivism agrees with technological determinism that technology has autonomy regarding the direction in which it will evolve. The difference between these two perspectives is that the substantivist theory argues for the value-ladenness of technology. According to this theory, technology embodies ethical value independent of its instrumental use and the consequences created by this use. Technological substantivism states that this inherent ethical value of technology is the power and domination it provides to anyone who has access to it. The proponents of the substantive theory are generally pessimistic about technology. This pessimism can be seen in Aldous Huxley's cult book Brave New World, in which the triumph of technology over all other ethical values is portrayed as the evil that brings the end of humanity. Martin Heidegger was another well-known supporter of the substantive theory of technology. He stated that the power and domination of technology lead us to conceptualize human beings and nature as raw materials to be used in technological developments, and society as a reflection of technical systems.

The constructivist account of technology contradicts the deterministic argument, drawing inspiration from Kuhn's theory of science. It argues that when the first products of a technology emerge, they have alternatives in form and utility; the dominant paradigm in values and norms shapes the way the technology proceeds. The evolution of cell phones is a good example of how competing paradigms affect choices of design and implementation. When the first cell phones were launched on the market, they were big and bulky. Companies were working to find ways to make them smaller and lighter; they had a concrete idea about the function and utility of the cell phone: to talk and text. Meanwhile, another idea was flourishing, which imagined cell phones as devices that could accomplish more than just talking and texting.
Playing games, taking notes, or creating memos were some of the other tasks that could be incorporated in cell phones. The two ideas competed, and the latter won, which
resulted in the extensive use of smartphones. First-generation cell phones vanished, although they had become substantially smaller and lighter than their initial versions. The constructivist account says that, looking back from the present, we can recognize this choice and realize that what we have today did not happen because of the inevitability of technological determinism; it was the result of a competition between two perspectives, and the endorsed perspective paved the way to the development of a new concept of the cell phone. Two premises of constructivism contradict determinism. The first is that technology does not proceed in a unilinear fashion: it has alternatives along the way, and various alternatives are present at several stages of technology development and design. The second premise opposes the deterministic account, which suggests that technology determines the changes in society, and argues for the opposite: societal and technical factors affect the trends in technological development and design.

Technological determinism and instrumentalism had been criticized and largely dismissed by the second half of the twentieth century. Although the instrumentalist philosophy of technology still has supporters, its dominance has been tottering since the second half of the twentieth century due to the emergence of a variety of alternative conceptualizations of technology which reject the value-free, ethically neutral interpretation of technology and argue for its inherent value. A new perspective emphasized ethical reflection on specific phases in the development of technology and an empirically informed ethics of technology, which led to the involvement of science and technology studies (STS) and technology assessment (TA) in the realm of the philosophy of technology. New arguments of the last decades of the twentieth century argue for the value-ladenness of the primary function that a technological entity is designed to accomplish. The design and production of technology is a goal-oriented process; the functions of any technological artifact, either an end product or an intermediate product, are defined and make sense within this process. Therefore, it is not reasonable to conceptualize a technological artifact out of the context of this whole goal-oriented design and production process. Moreover, although some technological artifacts may be used for various purposes, a considerable number of technological artifacts are designed and produced for a particular function.

Social shaping of technology (SST) argues that technological development is driven by social factors instead of a predetermined logic or a single determinant. During the process of technology development, choices are being made. Some of these choices are at the micro-level: they concentrate on the design and structure of artifacts. Other choices are at the meta-level, and they determine the main trends in the development of the technology. These choices create different results for particular societies and for the human species as a whole. SST deals with the design and structure of technology and its social implications. Its first presumption is that particular groups or forces with different aims may shape technology to meet those aims; hence, depending on which group or force is dominant, the technological and social outcomes may change. The second presumption is that initial choices determine the trend and pattern of subsequent technological developments.
SST suggests that the social forces with an effect on technology are complex and cannot be reduced to market demand, which reflects a single rationality, in this case the deterministic role of the economy. Choosing one option results in increased research to generate cumulative knowledge. This knowledge is reflected in technological development and production, which in turn has effects on social and cultural infrastructure. However, SST also argues for the possibility of revising the initial choice and changing the trajectory of technological development. SST is critical of linear models that see technological development as a unidirectional track in which knowledge from research in the basic sciences is transferred to technology development and the production of goods, which are then launched on the market. This approach treats invention, innovation, and diffusion as distinct processes that follow each other consecutively without interaction; the consequences of the technology are then evaluated by looking at its impacts on various areas: social, economic, legal, and ethical. SST argues that this one-directional model is not borne out in real life. On the contrary, what happens in real life is a spiralling process with constant interaction between every stage of invention, innovation, and dissemination; these stages are not separate entities but are integrated with each other (Williams and Edge 1996).
4.2 Does Present Ethics of Technology Apply to Artificial Intelligence?

To answer this question, we revisit the fundamentals of the issue and ask whether AI fits the definition of technology. The contemporary philosophy of technology conceptualizes technology as the act of transferring knowledge into practice so that a need is met or a function is accomplished more accurately, promptly, or smoothly. We will refer to this conceptualization as the conventional concept of technology. In this respect, there are features common to all technological artifacts:

1. Their capabilities and functions are determined and controlled by human beings.
2. The pathways of their operations and functions are known by human beings.
3. They cannot self-evolve, self-learn, or self-generate.
4. They lack high-level functions which require intention, creativity, and strategy.
5. They need humans to create any consequences in the physical world.
6. Humans produce them.
These features which frame the conventional concept of technology are subject to significant changes in the discourse of AI. We will go through some of these features to inquire how and why AI technology is significantly different from the conventional concept of technology. After defining the significant differences, we will discuss if we need a new ethical perspective to address moral issues arising in the realm of AI technology.
The first feature, human control over the functions and capabilities of conventional technological artifacts, has been challenged markedly by the development of AI technology. When the first products of computing technology appeared in the nineteenth century, it was Lady Lovelace who argued that computers could accomplish any task, including artwork, as long as they were programmed by humans to do so. Alan Turing, however, later argued that computing machines could function beyond the pre-determined programs installed in them. In his famous article "Computing Machinery and Intelligence," which opens with the question "Can machines think?", he argued against the vision of Lady Lovelace and asserted that computers might go beyond their original programs. Although Turing was right in theory, humanity had to wait until the beginning of the twenty-first century to witness the first AI artifacts functioning beyond their initial programs. Nevertheless, today it is evident that AI technology products are capable of deep learning and can change or enhance their functions through this capability. Therefore, it is plausible to say that the first feature of the conventional concept of technology, the absolute determination by humans of technological artifacts' functions and capabilities, is no longer relevant for AI technology products.

The second feature of conventional technology entities is closely related to the first. Humans who are in charge of developing conventional technology products know everything about how their creation will operate. They know which cable is connected to which switch, and they know why they are connected. Likewise, they possess every detail of the software architecture of the conventional technology product. They know how the output will be calculated from the inputs. This holds for GOFAI and expert systems too. However, when more advanced AI entities such as strong AI, AGI, or above human-level AI are in question, no programmer can be so sure about the operation process. This ignorance is the essence of development in AI technology. Human beings are out of the picture after developing an intelligence that can accomplish more complicated mental tasks than they can, an intelligence with the ability to improve its original operation process and to develop decision trees and algorithms by going through massive amounts of data. In this regard, it is plausible to say that the second feature of conventional technology is void for advanced AI technology products.

Deep learning provides AI technology artifacts with the capability of self-learning and self-development, so that they gain the ability to self-evolve and self-learn, which repeals part of the third feature of the conventional concept of technological artifacts. However, the issue of the self-generation capacity of AI entities is still elusive. When we talk about self-generation, we are referring to the ability of natural beings to reproduce, that is, to breed a new member of their species. Up to now, one of the main features that distinguish artifacts from natural beings has been the ability to reproduce. Although it sounds ridiculous to imagine a breeding AI technology artifact, it is plausible to think that some kind of reproductive capability in AI may be possible, at least for advanced forms. However, to stay within the frame of the topic, we purposefully avoid discussing the relation between ethical status or ethical value and the ability to reproduce.
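The contrast drawn above between the first two features of conventional artifacts and learning systems can be made concrete with a deliberately tiny, hypothetical sketch: instead of a rule written by the programmer, the decision threshold below is fitted from example data, so the final behaviour is not spelled out anywhere in the source code. The training examples and function names are invented for illustration.

```python
# Hypothetical contrast: a hand-written rule versus a rule learned from data.

def programmed_rule(x: float) -> int:
    """Conventional artifact: the threshold is chosen by the programmer."""
    return 1 if x > 0.5 else 0

def learn_rule(examples, epochs: int = 100, lr: float = 0.1):
    """Fit a one-dimensional threshold (weight w, bias b) with a
    perceptron-style update; the behaviour is acquired from the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w * x + b > 0 else 0
            error = label - pred          # -1, 0, or +1
            w += lr * error * x
            b += lr * error
    return lambda x: 1 if w * x + b > 0 else 0

if __name__ == "__main__":
    data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
    learned_rule = learn_rule(data)
    print([learned_rule(x) for x, _ in data])   # typically [0, 0, 0, 1, 1, 1]
```

The point of the sketch is only that the developer fixes the learning procedure, not the eventual decision rule; for deep learning systems the same gap between what is programmed and what is finally done becomes far wider.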
The fourth feature, the lack of high-level functions requiring intention, creativity, and strategy, is the essential item for this discussion. The conventional concept of technology conceptualizes technological artifacts as something fundamentally different from human beings. This perception takes us back to the archaic argument about what makes human beings "human," which is generally grounded in qualities of human beings that cannot be acquired through technological production. It is argued that certain peculiarities inherent to human beings, like intuition, conscience, self-awareness, faithfulness, loyalty, sensitivity, suffering, and empathy, are the essence of being human. It is believed that the origins of these features lie in the mind or soul of human beings. Since mind and soul cannot be artificially produced, these features cannot be produced either; hence, they will continue to define human beings and to distinguish humans from any form of AI technology. The origin of this argument is the view that the human brain and the mind are two different things. According to this perspective, it would be possible to discover the codes of the morphological structure of the brain, to understand how neurons work, and to find out the functions of particular parts of the brain tissue; however, this would not enable us to decode how the human mind works. This perspective conceptualizes the mind as a metaphysical entity that cannot be reduced to biochemical reactions. It has been argued that the metaphysics of the human mind is created by divine intervention.

The dichotomy of body and mind can be traced back to ancient Greek philosophy. Socrates was sure that his mind and soul would live much more freely once they were let out of the prison of his body, and this thought steered him to embrace his death with pleasure rather than suffering. Likewise, several religions assert that the human body is physical matter which will eventually die, but that the metaphysical part of human existence, which comes from the creator, will live eternally. This everlasting nature and divine origin imply that the metaphysical component constitutes the real essence of being human. Modern philosophy inherited this dichotomy through Rene Descartes, whose ideas initiated the subjectivity-based theory of knowledge summarized by his famous dictum, "I think, therefore I am." The premise was that the origin of certain knowledge was introspection, in other words, the metaphysical part of human existence. Therefore, the metaphysical part, commonly called the mind or soul, was the efficacious and ethically valuable component, while the physical component was considered temporary, illusory, and less valuable in the ethical realm. Since the mind and soul, the ethically valuable part of existence, are uniquely possessed by human beings, humans have a higher hierarchical ethical status than all other beings.

However, contemporary neuroscience has shown that the physiology and biochemistry of the features of the mind can be explained. We know that human mood can be changed from negative and depressed to positive and optimistic by increasing the levels of particular neurotransmitters in the brain, and that characteristics of personality may change considerably if certain parts of the brain tissue are damaged. Even the beliefs and thoughts of a person can be altered by interfering with the biochemical reactions of the brain.
Hence, it is plausible to say that the features which were attributed to the mind or soul as the metaphysical part of human beings are the results of biochemical reactions in the brain, not of a mystical origin. This knowledge urges us
to question the validity of the ultimate ethical value of the human mind or soul and the higher hierarchical position of human beings that accompanies it. The ultimate goal of AI technology is to develop artifacts that can accomplish anything that humans can do (Boden 2018). This definition covers functions of mind and body such as cognition, reasoning, perception, association, prediction, planning, and intended motor action. That is to say, intelligence in AI has a broad sense: the intelligence required to accomplish all functions of the human mind, an intelligence that can function at the human level or even better. Deep learning has been a very efficient tool in this respect. In the prelude of his influential book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark describes an AI that, with respect to higher functions of the human mind, starts out as immature as a newborn human baby. However, through deep learning it develops the capacity to understand and express emotions, reflect on intuitions, and perceive sensations in a significantly shorter time than a human baby would. Although Tegmark was telling a fictional story, readers sense that this fiction could be realized in the very near future, if it has not been already (Tegmark 2017). Nick Bostrom estimates that human-level AI will be reached around 2040 and states that it would take a relatively short time to reach superintelligence after that (Bostrom 2014). Studies on whole brain emulation aim at making progress in understanding how the human brain works and at carving out an identical model of it. Therefore, some researchers argue that whole brain emulation is the guide for computational neuroscience in accurately simulating human brain structure and functions. As the codes of the human brain's structure are revealed and how the human mind works is understood, AI models could be produced depending on this knowledge. In this respect, the human brain and its functions, including the ones attributed to the mind or soul, could be reproduced in AI agents. Considering developments in neuroscience, deep learning, cognitive science, and whole brain emulation, we can say that functions such as intuition, creativity, or strategy would no longer be exclusive to human beings. On the other hand, the fact that AI agents exhibit these features does not prove that these features genuinely exist in these artifacts. The premise is that artificial agents are mechanical entities, and they can only mimic what is inherently existent in human beings. This argument initiates an extensive discussion about the ontology of AI, which is intentionally kept out of this section so as not to step out of context. The ontology of AI entities and the principle of ontological non-discrimination are addressed briefly in the chapter discussing the personhood of AI entities.
On the contrary, these expert systems were intelligent tools to process
data far more extensively and accurately than their human users could. Their task was completed when they produced the output. However, recent AI technology has changed this course too. For example, installing anti-lock braking systems in cars, which keep the car from slipping uncontrollably if the driver hits the brakes hard, was a significant improvement for driving safety. Still, it was a system that went into action with the intervention of the driver. In the last decade, many brake systems have been developed, several of which are now installed in cars almost routinely. Some of these systems do not require the driver's command to function; instead, they check the environment regularly, adjust the sensitivity of the brakes to react to the driver's command if they detect the threat of an accident, and hit the brakes autonomously if the driver does not respond to the risk.

The use of autonomous drones in the military is another example in this respect. The first versions of drones required a person on the ground who kept looking up at the sky and issued commands through a remote controller for the drone to function correctly. Later versions were controlled via satellite technology by experts located miles away from the drones. Today, drones are capable of doing much more. In the military, they can detect a threat and decide to destroy that threat autonomously. Current legislation does not allow this autonomous action, because drones are not sensitive enough to distinguish innocent civilians from people posing a risk of a terrorist attack. Therefore, human commanders are kept in the decision loop, and they are the ones who make the final decision to hit the detected probable terror targets. On the other hand, it is reasonable to think that technology will eventually evolve to the point that would enable drones to distinguish between civilians and people posing a potential risk of terrorist activity, to take autonomous decisions to eliminate the target, and to execute this decision without the need for humans in the loop. Although this issue invokes several justifiable ethical and legal concerns, some governments may use autonomous drones as military power once they are available. This use would be a dramatic demonstration of the executive power of AI technology because of the following characteristics: 1. The decision is a choice between life and death; if it is wrong, the consequences are devastating, and there is no compensation mechanism. 2. The execution is irreversible; once the drone hits, people die, and there is no possibility to undo what is done. 3. The decision has both scientific and ethical dimensions; therefore, it requires the drone to be equipped with scientific and ethical reasoning capabilities and with accurate equipment to execute the decision. It is plausible to say that a device capable of making autonomous decisions between life and death, and which does not need a human to execute its decision, has been very much outside the perspective of philosophers who were only familiar with the conventional concept of technology. The theories of the philosophy and ethics of technology did not consider technology as something that could function in as extreme a way as autonomous military drones.
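Returning to the automatic braking behaviour described earlier in this section, a hypothetical sketch can make the shift from driver-triggered to autonomous execution concrete. The sensor values, thresholds, and function names below are illustrative assumptions, not any manufacturer's implementation.

```python
# Hypothetical automatic emergency braking loop: warn first, brake on its own
# only if a collision risk persists and the driver has not reacted in time.

def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact at the current closing speed (inf if not closing)."""
    return distance_m / closing_speed_ms if closing_speed_ms > 0 else float("inf")

def brake_controller(distance_m, closing_speed_ms, driver_braking,
                     warn_below_s=2.5, auto_brake_below_s=1.2):
    ttc = time_to_collision(distance_m, closing_speed_ms)
    if ttc < auto_brake_below_s and not driver_braking:
        return "autonomous emergency brake"        # acts without a driver command
    if ttc < warn_below_s:
        return "warn driver and pre-charge brakes" # still defers to the driver
    return "no intervention"

if __name__ == "__main__":
    print(brake_controller(20.0, 10.0, driver_braking=False))  # warning only
    print(brake_controller(10.0, 10.0, driver_braking=False))  # brakes by itself
    print(brake_controller(10.0, 10.0, driver_braking=True))   # driver already acting
```

Even in this toy form, the last branch shows the ethically salient step: the artifact produces a physical consequence on its own initiative, which is exactly what the fifth feature of conventional technology ruled out.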
The critical factor for these advanced AI entities is their capability for deep learning, together with some other learning techniques which enable them to learn and comprehend concepts of real life effectively. Reinforcement learning is one of these mechanisms, through which AI technology learns by processing input and environmental interaction and gains competence in a task. Speech recognition is another. These learning techniques enable AI to understand the compositionality of the real world and to gain competency in skills of the human brain like reasoning or using language to communicate. These discussions show us that AI entities, notably advanced ones such as human-level or above human-level AI, have fundamentally different features from conventional technology artifacts. Because of that, they fall outside the scope of the existing ethics of conventional technology. That is the reason why we struggle to address ethical issues related to AI technology: our existing theoretical framework does not provide the proper means to handle these issues. Therefore, we need to develop a novel ethical perspective to deal with the ethical issues of AI technology.
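Before moving on, the reinforcement learning mechanism mentioned above can be pictured with a deliberately small, hypothetical sketch: tabular Q-learning on a one-dimensional corridor in which the agent is rewarded only for reaching the rightmost cell. The environment and all parameter values are made up for illustration; they stand in for the far richer settings used in practice.

```python
# Minimal tabular Q-learning on a toy corridor of 5 cells; the agent learns,
# from interaction and reward alone, that moving right reaches the goal.
import random

N_STATES, ACTIONS = 5, (-1, +1)            # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3      # learning rate, discount, exploration

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                       # training episodes
    s = random.randrange(N_STATES - 1)     # start anywhere except the goal
    done = False
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)     # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

greedy_policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy_policy)                       # typically [1, 1, 1, 1] after training
```

Nothing in the code states that "right is good"; the preference emerges from trial, error, and reward, which is the sense in which such systems acquire competence their programmers never spelled out.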
4.3 Looking for a New Frame of Ethics for Artificial Intelligence

In the previous section, we discussed some of the particular distinctions between AI technology and conventional technology. Most of the distinctions we talked about, such as human-level AI or autonomous AI entities, are qualifications of strong AI entities. We have agreed that we will need a new perspective of ethics when dealing with ethical issues that have emerged, or are likely to emerge, with the extensive usage of these agents in the near future. However, although science suggests that strong AI and above human-level AI technology will be available before long, no one is in a position to tell the exact timing. On the other hand, a broad spectrum of AI technologies, varying from simple weak AI to currently available strong AI features, has already been incorporated deeply into our lives. Therefore, we need to address two problematic areas: 1. the current ethical problems which we have already been experiencing due to the use of available AI technology, and 2. a new ethical frame for living with strong and above human-level AI technology, humanoids, or superintelligence. Before proceeding to these two problematic areas, we should keep in mind that the introduction of human-level or above human-level AI technology into our lives will be an entwined process rather than a sharp paradigm change. Even today, the distinction between weak AI and strong AI is blurred. Some AI entities may be considered weak AI entities but may contain strong AI features to some extent. This situation requires a broad understanding of solutions to the ethical issues of currently available AI entities, together with the new ethical perspectives needed for more advanced ones.
Moreover, some sectors may be more likely to incorporate strong AI entities than others. Therefore, we may need to address ethical issues in some sectors before they are realized as problems in others. Apart from timing, we need to think about the peculiarities of the sectors too. Medicine and finance are two sectors in which AI technology is improving faster than in others. We have already been facing severe ethical problems in these sectors, and there is no doubt that these problems will increase over time as the technology evolves. However, these two sectors have significant differences in the ethical domain. Medical ethics has evolved into a specific area of ethics with its own principles, values, and applied theories. Hence, addressing the ethical issues of AI technology in the domain of medical ethics requires a particular understanding of, and expertise in, this select branch of ethics. Incorporating AI technology into the finance sector may provoke very different ethical issues than incorporating it into medicine. This difference leads us to develop an ethical perspective that can also address the particular ethical issues of each sector. In this section, we will first define the most common ethical issues arising from the current use of AI in general. In this respect, we will identify bioethical issues common to weak and strong AI and then discuss contemporary endeavours to address them. Then we will put on goggles with broader eyesight and try to define the ethical issues that are likely to emerge from strong, human-level, or above human-level AI technology, and discuss how a new ethical perspective can be developed to address them and what the specific qualifications of that new perspective should be.
4.4 Common Bioethical Issues Arising from Current Use of Artificial Intelligence

4.4.1 Vanity of the Human Workforce, Handing the Task Over to Expert Systems

AI agents are meant to accomplish tasks that require intelligence when done by humans. These AI agents rely on three components: 1. a knowledge base, 2. an inference engine, and 3. a user interface. Human experts and the existing literature provide the knowledge. When a query arrives at the AI, it searches the knowledge base and provides an answer to the user. Some expert systems do not even need a human user to take the answer and act accordingly; instead, they can be programmed to initiate a series of actions depending on their findings. Moreover, due to improvements in technology, these systems may have the capability to feed their knowledge base by scanning and picking up knowledge on the web that would be useful for their decision-making procedure.
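The three-component structure just listed can be illustrated with a hypothetical, rule-based sketch; the rules, symptom names, and conclusions are invented for illustration and correspond to no real clinical system.

```python
# Hypothetical rule-based expert system: a knowledge base of rules, a simple
# forward-chaining inference engine, and a minimal text user interface.

KNOWLEDGE_BASE = [
    # (facts that must all hold, fact that can then be concluded)
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"}, "advise chest imaging"),
]

def inference_engine(facts):
    """Forward chaining: keep applying rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in KNOWLEDGE_BASE:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def user_interface(reported: str) -> str:
    """Minimal interface: takes reported findings, returns derived conclusions."""
    findings = {f.strip() for f in reported.split(",") if f.strip()}
    conclusions = inference_engine(findings) - findings
    return ", ".join(sorted(conclusions)) or "no conclusion"

if __name__ == "__main__":
    print(user_interface("fever, cough, shortness of breath"))
```

In this toy form the knowledge base is fixed by its authors; the self-feeding systems mentioned above differ precisely in that they extend such a rule or data base without a human expert curating each addition.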
AI systems have the potential to invade several professional areas and various aspects of daily life. The expected and required consequence of replacing human performance with AI technology in these areas is that these tasks can be fulfilled faster, more accurately, or more comprehensively. For example, the use of AI systems in financial markets makes it possible to calculate the probability of profit and loss by taking into consideration a vast number of parameters from several markets and to arrange trading activities which trigger transactions in markets all over the world. It would be tough, if not impossible, for a single broker to calculate all these probabilities and set up a system to initiate automatic trading in various markets. Besides, a human broker would have to work 24 hours a day so as not to overlook any data that would require an instant trade if other data came in to support the first. Hence, trades in financial markets are very much in the hands of AI technology now.

AI systems find more extensive usage in the healthcare sector day after day. The first human genome sequencing method was developed by Sanger et al. in 1977. Although this was a considerable accomplishment at that time, which brought Sanger the Nobel Prize, it had considerable setbacks too: it took approximately 15 years and 100 million US dollars to sequence a human genome by the Sanger method. Only about a decade ago, scientists introduced next-generation sequencing methods. One of them, the 454 Genome Sequencer FLX, was considered a big success because it could sequence a genome in two months and at one-hundredth of the original cost (Topol 2019; Wheeler et al. 2008). AI technology has enabled us to make considerable progress in this area. In his influential book Deep Medicine, Eric Topol, a medical doctor and cardiologist, writes about one of the recent accomplishments of AI technology in medicine, achieved by Dr. Kingsmore and his team in San Diego. In 2018, this team announced that they had performed whole genome sequencing from a drop of blood in only 19.5 hours by using AI technology. This achievement was such a massive step for the use of AI in medicine that the scientists were awarded a Guinness World Record for it (Sisson 2018; Topol 2019).

AI systems are also in extensive use in some other domains. Operations in design and manufacturing, process monitoring and control, diagnosis and troubleshooting of devices, and planning and scheduling are some of them. The use of AI systems has become an essential part of routine operations in several domains, so that it would be tough to think about how to sustain the operations in these domains without AI technology. Air transport is one of the domains in which AI technology is embedded extensively. It is beyond doubt that most passengers would hesitate to board if they were informed that expert systems had been excluded from the operation of their airline flights, including all relevant processes such as designating gates, assigning flight personnel, and doing the necessary checks before departure. It would be deeply unsettling for most of us to imagine ourselves in an aircraft 8,500–10,500 m above the ground, flying at a speed of 828–1,000 km per hour, operated only by human beings with machine intelligence excluded.
On the other hand, the extensive dependence on AI systems in some domains, together with academic research aiming to prove their efficiency and effectiveness compared to human beings doing the same tasks, has created concerns about the
exclusion of the human workforce from these sectors and the negative economic and social consequences of this exclusion. These concerns feed a variety of pessimistic views, such as rising unemployment and poverty or the redundancy of human beings. Some of these concerns go far enough to anticipate a dystopia in which AI systems take control of every operation so that they manage and rule human beings in every aspect of their lives. Although these predictions deserve wide discussion, they are beyond the scope of this book. On the other hand, to shed light on the bioethical aspect of the issue and to determine the need for sector-specific ethical approaches, we should discuss consequences that are related to the nature of the service provided by AI systems. To do this, we can refer to a simple preference test to determine whether the nature of the service is appropriate for extensive use of expert systems: ask the consumers whether they would prefer a human being or an artifact to provide the service in question.

For a financial trader, it would not make a difference whether a computer or a human being gives orders for transactions, as long as the transaction concludes with a sufficient profit instead of a devastating loss. Airline passengers would not be aware when the autopilot was in charge and would not care, as long as they land safely. A driver whose car produces an awkward noise would not be unhappy or dissatisfied when she does not see a serviceman, if the AI system resolves the problem smoothly. On the contrary, customers would be more content with the service, relieved that they were served by an automatic system free of human error. Hence, it is plausible to say that these fields pass the preference test, and expert systems are fit to replace or substitute for humans in most of the operations in these fields.

However, this is not the case in some sectors, and medicine is one of them. Medicine was one of the first areas in which expert systems were put to use. The development of expert systems like DENDRAL and MYCIN was considered a cornerstone of AI technology in medicine. These developments gave rise to extravagant prophecies about white computers sitting on physicians' desks to serve patients instead of human physicians in their white coats. The supposition behind these prophecies was that medicine was one of the subject areas in which computer software could act like a human expert and provide a similar service. However, we cannot be as confident as we are in other sectors about expert AI systems taking over responsibility in medical services. It is plausible to think that patients would like to have automatic systems that eradicate human error in some diagnostic procedures, such as laboratory tests, radiographic imaging, or the gene sequencing systems mentioned above, but would it be preferable to meet a computer system in the examination room instead of a human? Imagine an elderly patient who is not familiar with computer systems, or a confused patient who needs assistance even to state her reason for coming to the hospital. The depersonalization of medical services would not be preferable for many people, which implies that some services require more than epistemic expertise. The nature of the decisions also plays a role in the result of the preference test. In medicine, adding two and three does not always equal five.
That is to say, a physician may plausibly offer different treatments to different patients with the same diagnosis.
Sometimes social, cultural, religious, personal, and economic variables play a crucial role in decision making together with the medical data. These will be addressed at length in the section on "AI in healthcare and medical ethics"; here it suffices to state that the role of AI systems depends on the nature of the expertise required, and that handing the task over to AI systems is preferable and desirable in some sectors while being equally inconvenient and out of favour in others. Medicine may be one of the sectors in the latter group.
4.4.2 Annihilation of Real Interpersonal Interaction

The use and popularity of AI systems increased with the introduction of Web 2.0, which transformed the web from a simple content delivery application into a platform enabling instant interaction and turned users from passive readers into active knowledge providers. The improvements in AI went in parallel with Web 2.0 after 2005, and a new virtual world was formed in which people could contribute to website content, add knowledge and feedback, and share their thoughts, feelings, and statements through social media platforms. These feeds become available to millions of internet users worldwide as soon as they are posted, which makes this platform a paramount virtual meeting area. The attendees of this virtual meeting area get familiar with each other and sometimes develop more intimate relationships than with their real associates in daily life. AI helps us to follow people who might be of interest to us, find hashtags fitting our hobbies, and connect with professional fellows. In the 1990s, if we saw a young man sitting on a park bench with his head dropped to his chest, we would worry about his health; today we are pretty sure that he is holding a smartphone and is quite active mentally, and possibly socially, in contrast with his catatonic physical appearance.

Game technology, which incorporates AI, has also become a particular virtual reality platform. People spend a significant amount of time, sometimes days without a break, on these games, in which they wrap themselves in various avatars and collaborate with or fight against people whom they meet online. There have been incidents in which people neglected their most fundamental physical needs, such as food and water, and faced serious health hazards because of being so deep in virtual reality. Some parents complain that they cannot see and spend time with their teenage kids because of their engagement in the virtual world of games.

The negative consequences of being too much in the virtual world, such as neglecting real-life responsibilities, ignoring human intimacy, abandoning social gatherings requiring in-person attendance, losing the ability to feel human sensibility, and detaching from daily life, bring the social issues of AI to the bioethical agenda by raising the question of whether the harms are coming to surpass the benefits of this technology. Martin Cooper, the inventor of the first cell phone, puts forth that technology is here to serve us and make our lives better. However, we are not so sure whether letting virtual interactions substitute for real ones makes our lives better or worse.
4.4.3 Depletion of Human Intelligence and Survival Ability

How many phone numbers do you hold in your memory? If your cell phone's battery dies unexpectedly, can you call a family member or a friend simply by remembering their phone number? Possibly not many people can. We have the phone numbers, e-mails, and addresses of our family members, professional associates, and friends stored in our smartphones and computers. We can contact them by saying their names or just typing the first two letters of their names on our phones; the AI technology in our devices does the rest. We do not need to remember the right spelling of words or technical terms when we are writing an e-mail or message. The AI technology checks and corrects any mistakes we make, or guesses what we mean and offers better literary forms to express ourselves. AI technology even infers our feelings from the words we use and offers a relevant emoji to suit our sentences: a sad face after typing sorry, a bouncing heart after welcome, a crying face after condolences. We do not need to think about the best route to take when we are driving to a less familiar neighbourhood. All we have to do is type in the address and then follow the directions of the map application embedded in our cars or cell phones. No need for thinking or deciding; AI can do it for us. We do not need to remember our daily schedule, note future events on our calendar, record flight numbers and times, or even think about grocery shopping. Our AI technology does all of that for us.

It is beyond doubt that transferring all these tasks to AI technology is a relief and a real ease for our lives. Yet this situation has been criticized with the argument that humans are no longer using their brain capacities, depending on AI technology rather than their own abilities to get through daily life, and that this will eventually deplete their survival capacity. A human deprived of the AI technology she has depended upon for her daily activities, personal statements, and expression of emotions may feel as startled as Gregor Samsa did when he found himself in the body of a cockroach one morning, without a clue how to do the basics of sustaining life. Although drawing an analogy between our point and Kafka's Metamorphosis may be criticized as too exaggerated, we all sense that there is a pinch of truth in it if we imagine getting through a day's activities without access to devices with AI. We may feel even more uneasy imagining what it will be like in the future if AI technology continues to invade our lives at this pace.
4.4.4 Abolishing Privacy and Confidentiality

AI technology provides platforms on which we breach our privacy and confidentiality ourselves, willingly. We invite others, even strangers whom we will never meet in person, to see us socializing, doing sports, travelling, displaying our talents, chilling at home, cooking, or sleeping by sharing our posts on social media. We can learn several things about a person just by looking up their profile on Instagram or
Facebook or going through their tweets. Therefore, one can argue that it is not Web 2.0 or AI that breaches our privacy and confidentiality; it is, in propria persona, the individual who does that. However, agreeing with this statement is not the end of the discussion; on the contrary, it is where it begins. It is beyond doubt that AI technology, which has flourished with the extensive use of the internet, accesses private and confidential information about us through even the most straightforward applications. We willingly consent to this access when we download and log in to an application. The question is, "do we know what we are consenting to?" Moreover, the second question is, "do we have any choice other than consenting? Is it possible to stay out of it by refusing to use these applications?"

Starting with the first question: do we know what we consent to? How many people read and understand the privacy policies of applications when they pop up on our screens? And if people do read and knowingly consent to privacy policies, do they follow up when these policies change? How many of us have spent time going through an e-mail sent to us regarding privacy policy changes of a standard application? The answer is "not many." The following question is, why don't we pay attention? Why don't we read? Is it because we think we would not understand even if we read them? The assumption of not being able to understand may be a relevant answer, since going through a relatively long piece of writing that plausibly contains legal terms would be difficult for most people. However, a more valid answer may be hidden somewhere else: in human beings' surrender to the reality that they would have to consent even if they did not wholeheartedly agree with the privacy policy, because they have to use that technology.

What would happen if you thought the privacy policy of a website or an application was not suitable for you? You would not use it. However, there are some applications with AI technology without which you would find it too hard to sustain professional or daily life; Google, for example. Google's privacy policy says that it collects data "about the services that you use and how you use them, like when you watch a video on YouTube, visit a website that uses our advertising services, or view and interact with our ads and content." What would happen if you did not agree with these terms? Avoid Google? Here is another example: think about internet banking. How many people would be able to avoid it if they thought the privacy measures were not sufficient? AI technology is so deeply embedded in our lives that it is already beyond the point of rejection. It is the norm now to be visible online, to have information available about what we do, how we do it, and where we do it. At this point, we should revisit the philosophy of technology and contemplate the reflections of technological determinism in our daily lives.
4.5 Bioethical Issues on Strong Artificial Intelligence

AI aims to create fast, accurate, and efficient artifacts that can operate in areas requiring human intelligence. One of the methodologies used for AI development takes the human brain as a model and is called human brain emulation. The accomplishments in human brain emulation have been of utmost importance for the development of AI technology. On the other hand, this has been a bi-directional journey. These accomplishments have also contributed significantly to understanding how the whole human brain, or some parts of it, works, knowledge that has been a mystery since the beginning of history. In this respect, scientists study and produce models of the human brain, or of some of its parts, and then use these models for developing AI technology. The information acquired through the process is reflected back into research on the human brain. Cognitive science has flourished significantly on this basis, shedding light both on how the human brain operates and on the development of strong AI technology. Knowledge in both areas has increased, and strong AI technology has gained enormous momentum.

Although this momentum was desired and worked for, it has brought with it doubts about the possibility that the horror stories might be realized: the creation of a superintelligence that would enslave humans, and the invasion of the world by super-intelligent agents. The bi-directional nature of these studies has also raised the probability of creating other entities, such as superhumans, which were previously considered only products of science fiction. Yuval Noah Harari devoted a whole chapter to superhumans and revisited the idea several times in his bestselling books Homo Deus and 21 Lessons for the 21st Century. His idea is that we are not far from a technology that would unite AI with the human brain to create superhumans or humanoids that go beyond the vulnerabilities and meagreness of human nature. He discusses the economic, political, and social implications of this situation and argues that the next world war may be between societies that have this technology and those that have no access to it (Harari 2018).

However, not all opinion leaders share Harari's perspective. For example, Nick Bostrom, in his influential book Superintelligence, argues that the future of AGI will mainly lie on the path of uploading the human mind to AI rather than incorporating AI into the human brain to create superhumans or humanoids (Bostrom 2014). Max Tegmark's book "Life 3.0: Being Human in the Age of Artificial Intelligence" is another significant publication on the future of AI. In his book, Tegmark, like the others, does not rule out the possibility that humanoids or other super-intelligent agents will be realized soon (Tegmark 2017). These arguments are usually accompanied by concerns about humans being enslaved by super-intelligent agents or humanoids, and they cultivate suspicions that investing in advanced AI technology risks human nature and existence. Critics who argue against interfering with human nature, because of its potential to change society, religion, and culture irreversibly, invite scientists and opinion leaders to carry out a comprehensive technology assessment before going any further.
4.5.1 The Power and Responsibility of Acting

Strong AI entities think like humans. They can reason beyond the limits of their original programs and learn by themselves. This capacity allows them to contemplate and cogitate on problems, envision the consequences of their decisions, and excogitate alternative solutions. Owing to advances in robotic technology, strong AI agents may plausibly have the ability to act on their decisions. Acting creates real consequences, and these consequences require the agent to take responsibility for them. This reasoning raises questions about the legal and ethical liability of AI artifacts as agents.

Some new advanced AI products, such as driverless cars, have heated this debate. The main question is: who bears the legal and ethical responsibility for an incident that leads to the injury or death of a human being if an accident occurs because of the faulty action of a driverless car? It may be plausible to think that the responsibility belongs to the manufacturer, since the manufacturer created the AI program in the first place and should be attentive enough to take all the necessary precautions while building the system. However, this argument contradicts the central concept of driverless cars, as well as of deep learning AI entities in general. Driverless cars are claimed to have the technology to sense and assess all the data required for safe driving, and they possess the necessary mechanical infrastructure to act, that is, to drive, according to their assessments. Driverless cars are also thought to have the technology to enhance their knowledge about driving by learning from their experiences and to elaborate their driving abilities with the new knowledge they generate. Besides, even human drivers are conflicted about whom to protect pre-emptively when an accident is inevitable; let us recall the discussions about the unsolvable trolley problem, which is referred to in almost all discussions on the fundamentals of ethical decision-making. In this regard, it can be argued that the manufacturer's responsibility begins and ends with the construction of the driverless car, once the necessary tests clear it for safety and effectiveness. After that, the driverless car is on its own, deciding what to do and when to do it.

It is beyond doubt that more entities like driverless cars will become the subject of discussions on the ethical and legal responsibility of AI entities as this technology spreads to other areas. Healthcare is one of them. Any AI entity that has the capacity to decide and act accordingly in medical services will have an impact on the health status of the patient, and in the case of a faulty diagnosis or a medical intervention initiated or implemented by an AI artifact, the agent bearing legal and ethical responsibility remains vague. The ethical and related legal implications of AI technology used in medicine will be discussed further in the "AI in healthcare and medical ethics" chapter of this book.

These discussions have solid grounds as long as it is proven that advanced AI entities can learn, decide, and act accordingly without the involvement of a human after the artifact has been built. However, these statements open a new realm of discussion regarding the ethical agency and personhood of AI entities. This discussion reaches as far as the ethical responsibility of AI agents and how these responsibilities would be materialized in legal terms.
These issues will be addressed in more detail in the "Personhood and AI" chapter of this book.
4.5.2 Issues About Equity, Fairness, and Equality

The development of AI technology and putting it to use in several areas require considerable logistics and human expertise in various disciplines. It is not a surprise that computer science and AI technology have been led by a limited number of very well established universities, research centres, and institutions since the beginning. Beyond intelligence and logistics, sufficient funding must be sustained throughout the process. Developing and launching AI technology is expensive, and so is having access to it.

Bill Gates's dream was to put a personal computer (PC) in every house and on every office desk when he was first establishing Microsoft. It was a huge dream indeed, considering the price of a single PC at that time. Today, dreaming of an office without any computers is a wilder dream than Gates's original one. It is tough, if not impossible, to run a business without using computers or the internet, regardless of where we live or work. Would AI technology face the same fate? We can imagine that it would be very costly to have the latest available AI technology in our business or daily lives, but the question is: would it become affordable for ordinary people worldwide within a reasonable time? Or would advanced AI technology be kept as an exclusive product for those who have the privilege of access?

If the latter option is realized, it will create considerable inequality. Some societies, or particular groups within societies, would have the means to enhance their health and even to overcome limitations arising from human nature by using AI technology. They would have access to drugs that prolong human life, applications that track health status through several indicators and infer risks, implementations that improve human intelligence, or interventions that alleviate mental states which diminish human productivity. These examples can be multiplied by imagining further what AI technology could provide humans, and they may confer a considerable advantage over those who do not have access to these technologies. Harari argues that the societies and individuals who have access to these technologies would not be eager to share them with others, which would create a kind of dystopia in which superhumans have substantial dominance over ordinary, in this case inferior, humans (Harari 2018).

The advantages that AI technology may provide are not limited to human health and capacity. There would be enormous advantages in the military, agriculture, and industry, and eventually in the wealth and prosperity of societies. Although reading these sentences may awaken a sense of going too far with horror stories, we all know that the AI technology of today already embodies inequity and inequality to some extent, and it would not be a false prophecy to think that these inequalities will deepen over time. This issue will be addressed more elaborately in the following chapter on the ethical implications of advanced AI technology use in healthcare services.
4.5.3 Changing Human Nature Irreversibly

Science has recently been involved in developing mechanisms to enhance human nature by using AI technology. Merging the human brain or body with AI technology to enhance human capacity, or uploading the human brain to AI, are some of them. These studies raise inquiries in the ethical realm, such as the justifiability of such irreversible acts, considering their effects on human dignity and the essentials of being human. Technology has long been used to ease the lives of people with disadvantages and improve their quality of life. Physical support systems that help people with disabilities to walk, smart medications that target cancer cells, and mini chips that locate a diseased site in the human body and repair the damage or eradicate its cause have been in medical use for some time. However, none of these primarily aims to enhance human nature and to turn human beings irreversibly into humanoids in order to create a superhuman.

One may argue that it is an individual's right to ask for improved capabilities or appearance if the technology can provide it. Many people would like to take a pill to overcome extreme shyness, which would certainly annihilate the social and professional disadvantages of diffidence. High school students would opt in to any possibility of enhancing their concentration and memory while preparing for university exams. Middle-aged people would like access to pills or interventions that reverse the signs of aging. Most of these are currently available on the market, but what AI technology can do goes beyond taking medication for a negative mood or smoothing the lines of aging on the face. AI technology can change human nature irreversibly and fundamentally by creating a new species, the humanoid or superhuman: a new entity with similarities to a natural human being, but something definitely different from it ontologically, which would have serious social, ethical, and legal implications.

However, what AI technology can offer is not limited to altering humans irreversibly. AI technology has carried another expectation from the very beginning, the signs of which we can trace in Alan Turing's paper "Computing Machinery and Intelligence," published in 1950. In this paper, Turing provided hints about the ultimate goal of AI technology: to create intelligence which is similar to, if not better than, human intelligence. The word "create" might be used intentionally to emphasize the divine and celestial implications of this goal. The developments in robotics since the 1950s indicate that the expected creation is not limited to a general intelligence without the capability of physically acting. It was quite surprising to see the videos of the robot Sophia on YouTube, answering the audience's questions in a calm voice and with human-like facial expressions. Hence a new species would be formed: AI robots that can learn, think, consider, evaluate, decide, and act accordingly. They are not pre-programmed entities constrained by their program, like the ones in Lady Lovelace's vision. On the contrary, they are entities that can enhance their intellectual and even physical abilities. Turing proposed thinking of such an entity as a child. The child is born with an infrastructure that evolves and develops through time; the input from experience and education helps it improve on the way to adulthood. Similar to this
process, AI entities start with an initial program that gives them the capacity to evolve and turn into mature ethical and physical agents. This new kind of entity raises several ethical and legal issues. Referring to the previous paragraphs, the first issue concerns the ethical and legal responsibility of these agents. As discussed briefly above, if an entity can think, consider, vividly predict the consequences of an action, and has the competence to decide and act accordingly, then this entity has autonomy. Until now, these capabilities existed only in the human species, so it was easy to say that humans have autonomy and the right to decide for themselves. Enjoying the right of autonomy encumbers humans with ethical and legal responsibility for their decisions and actions. It is plausible to think that this reasoning may apply to any entity with the capabilities we have listed and likewise encumber it with ethical and legal responsibilities. This issue will be addressed in the "Personhood and AI" section to elaborate on its ethical reflections more comprehensively.
4.6 The Enhanced/New Ethical Framework for Artificial Intelligence Technology

4.6.1 Two Main Aspects of the New Ethical Frame for Artificial Intelligence

Reading through the previous paragraphs, we can conclude that a new frame of ethics for AI technology should have two main aspects:
1. The ethical norms and principles which would guide the development, production, and utilization of AI technology.
2. The ethical norms, principles, and ethical reasoning for the utilization and functioning of AI technology.
Three terms generally address these two aspects. The first term is "ethics for design," which defines the codes of conduct, benchmarks, and certification processes that guarantee the integrity of designers and users as they plan, design, utilize, and oversee AI entities. The second term is "ethics in design," which refers to the regulatory and engineering strategies that support the examination and assessment of the ethical implications of AI systems as these integrate with social structures (Dignum 2018). Ethics for design and ethics in design complement each other in addressing the first aspect of the new frame. The third term is "ethics by design," which involves integrating ethical reasoning capabilities into AI entities that can act autonomously, and addresses the value-based decisions of AIs.
4.6.2 The Ethical Norms and Principles That Would Guide the Development and Production of Artificial Intelligence Technology

Primary discussion: should we produce any technological entity just because we can?

In the previous chapters, we discussed why a new ethical frame is needed for the issues raised by AI technology: because of the fundamental differences between AI technology and conventional technology. These differences become more manifest as we sail from weak AI technology to strong AI, general AI, or above human-level AI technology. Although we have not yet witnessed the use of AGI-level or above human-level AI technology in our daily lives, the progress of science and technology tells us that it is only a matter of time. When these advanced AI technologies begin to be integrated into our lives, human beings will be living in a new form of society in which there are various forms of autonomous or semi-autonomous agents other than themselves. This new society will call into question the highest hierarchical position of human beings among the other beings in the world.

Scientists, philosophers, and other professionals involved in theoretical discussions on AI technology development seem to have conflicting feelings and thoughts about this potentially radical paradigm change. Some of them are pessimistic about it. They think that human beings will lose their dominant hierarchical position because they will have the lower intellectual capacity and will be disadvantaged in the new setting. It is beyond doubt that the new paradigm will induce essential changes in the social, economic, cultural, and religious aspects of life. These predictions prompt the following question: should we create an entity that would change the whole paradigm of life just because we can?

From the perspective of technological determinism, this question is absurd, because technological development is considered to proceed autonomously, without the control and management of human beings. Technological substantivists would plausibly agree with determinists on this matter. Technology provides economic gains and the power of domination; hence, any technological entity which has the potential to bring these advantages to its producer will eventually be created. On this view, the main motive of the people involved in the development and production of AI technology is power and success. They are so busy with the constant competition to be the first to go one step further than their rivals that they have neither the time nor the awareness to raise their heads and see where the stairs are heading. The reasoning follows: we are in no position to ask this question; anything that science and knowledge enable us to create will eventually and inevitably be created, and we will have to learn to live with whatever consequences it brings.

On the other hand, recent philosophical perspectives such as the social shaping of technology (SST) argue for the opposite point of view by rejecting the idea that technological development is driven by a predetermined logic inherent to the technology itself or by a single determinant such as economic demand or the desire to gain power. They argue that the design and trajectory of technological development are determined by several variables that
can be regulated or modified. Moreover, they think that technological development is not a linear journey departing from the basic sciences and ending in the production phase. There is continuous interaction and collaboration among basic sciences, social sciences, clinical sciences, engineering, and design, and this interaction is indispensable for the invention, innovation, and dissemination of technology. This new perspective requires continual assessment of the process, which would enable change at any phase due to social, ethical, legal, cultural, economic, or scientific factors. According to this perspective, questioning whether we should proceed with developing AGI or above human-level AI technology is reasonable and relevant.

However, even if we find this question relevant, we do not yet have a straight answer to it. Moreover, we are not sure whether humanity will ever be able to agree on one concrete answer to this query. Although contemporary science and developments in technology suggest that advanced AI technology will be available in the future, there are contradicting arguments that reject the possibility of AI technology challenging human beings' ultimate hierarchical position in the world. In all probability, most stakeholders might agree that we need a new ethical frame which can be improved over time to cover existing and potential new developments in AI technology. This reasoning was the motive behind efforts to determine principles and norms for AI technology such as the Montreal Declaration, the guidelines by the European Commission, and the Asilomar Principles, which will be discussed in the next section. These norms and principles were developed to make sure that AI technology does no harm to human beings or other living beings and does not make irreversible, fundamental changes in the world and the order of our lives.
4.6.3 Evolution of Ethical Guidelines and Declarations

The general propensity of scholars when facing ethical inquiries about the design, planning, development, and production of AI technology has been to write down a list of principles so that the undesirable consequences of AI technology would be eliminated or limited. Science fiction author Isaac Asimov's "Three Laws of Robotics" is one of these well-known attempts of humankind to regulate AI development so that undesired consequences would be avoided and the existence of humankind would not be threatened (Asimov 1950). His well-known three laws are as follows:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later added a preliminary law intended to protect the existence of humanity as a whole.

Zeroth Law: A robot may not injure humanity or, through inaction, allow humanity to come to harm.
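Asimov framed his laws as a strict priority ordering, which can be read as an early, informal version of what this chapter later calls constraints built into the agent "by design." The sketch below only illustrates that priority logic; the predicates it uses are hypothetical placeholders, and nothing here claims that they could actually be computed by a real robot.

```python
# Illustrative sketch of Asimov's priority-ordered laws as a veto check on a
# candidate action. The boolean predicates are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False
    allows_harm_to_humanity: bool = False
    harms_human: bool = False
    allows_harm_to_human: bool = False
    disobeys_order: bool = False
    order_conflicts_with_higher_law: bool = False
    endangers_self: bool = False
    self_preservation_conflicts_with_higher_law: bool = False

def permitted(action: Action) -> bool:
    # Zeroth Law: do not injure humanity, or allow humanity to come to harm.
    if action.harms_humanity or action.allows_harm_to_humanity:
        return False
    # First Law: do not injure a human being, or allow a human to come to harm.
    if action.harms_human or action.allows_harm_to_human:
        return False
    # Second Law: obey human orders unless they conflict with the laws above.
    if action.disobeys_order and not action.order_conflicts_with_higher_law:
        return False
    # Third Law: protect own existence unless that conflicts with the laws above.
    if action.endangers_self and not action.self_preservation_conflicts_with_higher_law:
        return False
    return True

# Example: refusing an order to harm a human is permitted, because the order
# conflicts with the higher-priority First Law.
refusal = Action(disobeys_order=True, order_conflicts_with_higher_law=True)
print(permitted(refusal))  # True
```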
Asimov’s laws may be regarded as a naïve reaction to protect the sustainability of humankind against the cheesy horror stories on autonomous robots that enslave humankind and destroy the world. His vision on AI technology—or robots as he calls—was similar to Lady Lovelace’s view, which envisioned AI technology as something that human beings have absolute control. However, the time has proved Alan Turing right about his argument for the possibility of annihilating human beings’ absolute control on the functions of AI entities while Asimov was writing down laws. Asimov’s vision has been overruled with the fact of machine learning and deep learning in particular, which provided AI entities the ability to function beyond their original program. The discussion about the consequences and ontological inquiries of AI in the ethical discourse has gained visibility and popularity over time. “Computer Power and Human Reason,” a book published in 1976, has particular importance in this respect. The author of the book, Joseph Weizenbaum, was a well-known scientist and a pioneer in developing AI technology. He was from MIT. He was a member of the team who developed two outstanding expert systems of the time: the ERMA project, the first large scale computer data processing program in banking and, the ELIZA, the first program to position the machine as a psychoanalyst and enable it to carry out an appropriate psychiatric interview session with a real patient. Weizenbaum was also one of the first scientists who raised inquiries about the morality of this technology. In his book, he not only discussed the ethical implications of AI technology but also developed two abstract ethical norms. These norms can be summarized as follows: Computer technology—AI technology- should be avoided from domains where one of the following criteria exists: (1) The intrusion of computers would possibly create irreversible side effects which are not entirely foreseeable, (2) The computer system is proposed to substitute a function that requires interpersonal respect, understanding, and love. He thought computer systems that fulfil the first criterion had the risk to attack the life itself, and the ones satisfying the second criterion would damage human intimacy and solidarity. Therefore, these domains should be free from AI technology regardless of the utility or the benefits AI technology would provide. Although his fellows criticized heavily, his book has remained a cornerstone in the history of ethics of AI. His second determination was that AI technology was being developed by technical people with the motive to create to the extremes of their capacity. Weizenbaum argued that these people were not concerned with morality or ethical implications of the technology they create. Considering the growing financial incentives of developing AI technology, it would be plausible to think that the motive stripped from moral components might have blossomed since Weisenbaum’s times. Recently due to the vast improvements in AI technology, several institutions have been interested in the morality of the development, planning, design, and production of AI technology. In this section, we will give place to three of these initiatives taking
into account their comprehensive perspectives, which embrace actual and potential issues regarding AI technology.

In 2017, a critical gathering, the Beneficial AI Conference, took place at Asilomar, California. Ethicists, academics, leaders, and several other influential people from various backgrounds such as industry, economics, law, and philosophy took part in a workshop to develop central norms for AI technology. As a result of comprehensive discussions, the Asilomar AI Principles were written down. The Asilomar Principles comprise three main sections: research issues; ethics and values; and longer-term issues. Each section addresses the main concepts related to its scope and states the corresponding norms. Table 4.1 shows the specifics of the Asilomar Principles (Asilomar AI Principles 2017).

The second document to be discussed is "The Montréal Declaration for Responsible Development of Artificial Intelligence," which was published in 2018 by the Université de Montréal in collaboration with the Fonds de recherche du Québec. It is one of the most prominent attempts to provide a guide of principles and recommendations for the ethical development of AI technology. Like the Asilomar AI Principles, the Montreal Declaration was developed in collaboration with several parties: citizens, experts, public policymakers, industry stakeholders, civil society organizations, and professional orders were invited to collaborate in the preparation of the declaration. This pluralistic collaboration identified ten ethical principles (Montreal Declaration 2018). Table 4.2 shows the outlines of the Montreal Declaration (The Montréal Declaration for Responsible Development of Artificial Intelligence 2018). The principles of the Montréal Declaration are of equal hierarchical standing. However, it is stated that some principles may gain more importance depending on the circumstances, and the declaration suggests interpreting all principles consistently to prevent any conflict that could prevent them from being applied.

The third document we will address, the Ethics Guidelines for Trustworthy AI, was released in 2019 by the European Commission. According to this document, a system is trustworthy if it complies with all applicable laws and regulations, adheres to ethical principles and values, and is robust from both technical and social perspectives. The guidelines aim to set out a framework for achieving trustworthy AI by offering a list of ethical principles that should be respected in the development, deployment, and use of AI and by providing guidance on how to operationalize these principles and values in sociotechnical systems. Table 4.3 summarizes the general principles of the Ethics Guidelines for Trustworthy AI (The Ethics Guidelines for Trustworthy AI 2019). Based on the ethical principles and their correlated values, seven essential requirements are defined which should be ensured during the development, deployment, and use of trustworthy AI systems.
These seven requirements are: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being, and (7) accountability. The guidelines suggest adopting a trustworthy AI assessment in all three phases of AI development and continuously identifying and implementing these requirements.

Table 4.1 The Asilomar Principles

Research issues
Research goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence
Research funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies
Science-policy link: There should be a constructive and healthy exchange between AI researchers and policymakers
Research culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI
Race avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards

Ethics and values
Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible
Failure transparency: If an AI system causes harm, it should be possible to ascertain why
Judicial transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority
Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications
Value alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation
Human values: AI systems should be designed and operated to be compatible with the ideals of human dignity, rights, freedoms, and cultural diversity
Personal privacy: People should have the right to access, manage, and control the data they generate, given AI systems' power to analyse and utilize that data
Liberty and privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty
Shared benefit: AI technologies should benefit and empower as many people as possible
Shared prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity
Human control: Humans should choose how and whether to delegate decisions to AI systems to accomplish human-chosen objectives
Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends
AI arms race: An arms race in lethal autonomous weapons should be avoided

Longer-term issues
Capability caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities
Importance: Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources
Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact
Recursive self-improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures
Common good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization
Table 4.2 The ethical principles of the Montréal Declaration for Responsible Development of Artificial Intelligence

1. Well-being: The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings
2. Respect for autonomy: While developing and using AI, people's autonomy must be respected, and people's control over their lives and surroundings must be increased
3. Protection of privacy and intimacy: Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems
4. Solidarity: The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations
5. Democratic participation: AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control
6. Equity: The development and use of AIS must contribute to the creation of a just and equitable society
7. Diversity inclusion: The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences
8. Prudence: Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them
9. Responsibility: The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made
10. Sustainable development: The development and use of AIS must be carried out to ensure the robust environmental sustainability of the planet
Table 4.3 Ethical principles and their correlated values for AI systems in the Ethics Guidelines for Trustworthy AI

Respect for human autonomy, prevention of harm, fairness, and explicability should be considered in the development, deployment, and use of AI
Situations that involve vulnerable groups should be addressed with particular concern
The risks and negative impacts of AI systems should be acknowledged, and adequate measures to mitigate these risks should be adopted
4.6.4 Who is the Interlocutor?

The main problem with these declarations is the ambiguity of their interlocutors. One may say that we, all human beings, are the interlocutors of these principles and that we all should act in accordance with them. This view is compatible with the perspective of the three documents. For example, the Montreal Declaration explicitly states that it addresses "any person, organization and company that wishes to take part in the responsible development of AI, whether it is to contribute scientifically or technologically, to develop social projects, to elaborate rules (regulations, codes) that apply to it, to be able to contest bad or unwise approaches, or to be able to alert public opinion
when necessary." Likewise, the Ethics Guidelines for Trustworthy AI by the European Commission declares that its ethical principles are addressed to all stakeholders. Although practical, this answer does not solve the problem of the interlocutor. History is full of guidelines and lists of ethical norms for responsible conduct that were overlooked by humanity. Any norm which does not speak to a particular agent is destined to the fate of being ignored.
4.7 The Ethical Frame for Utilization and Functioning of Artificial Intelligence Technology

In the previous section, we mainly focused on the ethical frame for the design and development of AI technology. We pursued a discussion beginning with the fundamental philosophical inquiry of whether we should create an entity just because our level of science and technology enables us to do so, and then turned to the recent declarations and guidelines produced to regulate this area. In this section, we will address the ethical norms, principles, and reasoning we face when AI technology is in use.
4.7.1 How to Specify and Balance Ethical Principles in Actual Cases in a Domain?

Defining the critical ethical principles and values that should be considered in the field of AI technology is a valuable and necessary act. However, it should be considered only an initial step, since the difficulty and challenge of ethical assessment emerge when we begin to apply these principles and values to a single sector or, more specifically, to a single case. Stakeholders might get confused about how to proceed in ethical reasoning and how an abstract list of principles and values would help them reach a conclusion when they face an ethical dilemma. Prima facie ethical principles are too abstract to give a quick recipe in concrete moral circumstances. They need specification to be applicable to a particular context involving an ethical dilemma, that is, a situation in which we have to sacrifice an ethical value or disregard an ethical principle to protect or respect another. Balancing, on the other hand, is another tool we need in cases of ethical dilemma (Beauchamp and Childress 2001). For example, imagine a situation in which the well-being and respect for autonomy principles of the Montreal Declaration conflict. What happens if we are to develop an AI artifact that will significantly improve the well-being of all sentient beings but will infringe privacy and intimacy to some degree? Most of the ethical issues we face in real life involve such dilemmas. In the case of a dilemma, ethical principles and guidelines are too general and without content to steer us to a reasonable conclusion. We need to apply specification and balancing to the ethical principles related to the context so that ethical reasoning can produce a reasonable and practicable solution.
4.7.2 Should We Consider Ethical Issues of Artificial Intelligence in the Ethical Realm of the Domain They Operate?

It is plausible to think that intelligent artifacts specific to a single domain may be subject to the ethical norms and principles of that field. For example, ethical issues regarding an AI entity operating in the health domain would be subject to the values and principles of medical ethics, while AI for nanotechnology would fall within the realm of nano-ethics. Although this would be a practical approach, it has some problems too.

The first problem is that ethics has been specified for only a limited number of areas. Medical ethics and bioethics are the best-known areas of specification in applied ethics. The roots of medical ethics can be traced back to the Hippocratic Oath in Antiquity (circa 500 B.C.E.). In modern times, the concept re-emerged with the book "Medical Ethics; or, a Code of Institutes and Precepts" by Thomas Percival in 1803. Since then, medical ethics has been flourishing and evolving to answer the ethical issues of the health sciences and health services. The concept of bioethics is relatively new compared to medical ethics. It was introduced to the literature by Fritz Jahr in 1927. However, it did not gain currency until 1971, when Van Rensselaer Potter (1911–2001) published his book "Bioethics: The Bridge to the Future." In this book, Potter defined bioethics as "a new discipline to contribute the future of human species…" by enabling "two cultures, science, and humanities that seem unable to speak to each other…" to communicate and find common ground to contribute to the development of humankind. Since then, bioethics has evolved and been enriched to deal with a wide range of ethical issues. Today, medical ethics and bioethics are two realms of ethics in which principles, values, and norms are defined comprehensively enough to address the ethical issues in their domains. Therefore, it is plausible to assert that any AI entity operating in these domains should be subject to the norms, values, and principles of these areas.

However, this is not the case in many other domains. For example, we do not see a similar development in the ethics of engineering and mechanics. In the 1980s, engineering ethics started to flourish as a specific area of ethics for people who belong to the engineering profession. It was followed by computer ethics, robotics ethics, the ethics of algorithms, and nano-ethics. These efforts were criticized with the claim that the ethical issues raised by some of these areas were merely variations or intensifications of existing ethical issues common to moral philosophy. Specification in ethics can be defended by asserting that the necessity to specify does not necessarily emerge from the novelty of ethical issues; on the contrary, the need to address common ethical issues from domain-specific perspectives may be the mainspring that reveals the requirement for specification in ethics. Nevertheless, the areas mentioned above did not show the same area-specific development as medical ethics and bioethics. Therefore, the practicability of considering the ethical issues of AI artifacts in the ethical realm of the domain in which they operate
is limited, simply because domain-specific ethics often do not exist. However, this limitation does not mean that domain-specific ethical values and principles can be ignored when addressing ethical issues. For example, the depersonalization of health services due to the use of AI entities should be discussed by addressing central ethical values and virtues of medical ethics such as graciousness, sincerity, empathy, and compassionate care. Likewise, when conducting clinical research on human subjects by using AI technology, researchers should be bound by the principles of research ethics on human subjects.

The second problem is that a weak AI entity operating in a specific domain may raise ethical issues that become problems precisely because of the AI entity. Think about a hospital which stores health records in a simple computer program that does not transfer any data over the internet. The program is solely used for filing and archiving patient data and hospital records. Consider replacing this simple program with another one that can store larger amounts of data and uses cloud technology to overcome storage capacity problems. Let us assume that this new program can also assess the stored data continuously and give alerts when the incidence of a specific infection rises or when a particular type of disease is diagnosed more often than usual. Any misinterpretation of the data may cause false alerts, which result in a waste of time and resources or may harm patients. Let us also assume that this new system shares the stored data with other hospitals, health authorities, or any other stakeholders with an interest of any kind. Any confidentiality or privacy breach in such a system carries the risk of harming patients and infringing their rights. The use of the AI technology incorporated into the system created these risks, and they should be addressed in the domain of medical ethics too (Graph 4.1).

Graph 4.1 Ethical assessment for a particular AI technology should be embraced by the ethical codes and principles specific for the domain
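To make the alerting scenario above concrete, the sketch below shows one simple way such a system might flag a rise in infection incidence. The rule, the threshold, and the weekly counts are illustrative assumptions; a real surveillance system would use more careful statistics, which is precisely why misconfiguration and false alerts are a live risk.

```python
# Minimal sketch of an incidence-alert rule of the kind described above.
# The weekly case counts and the alert threshold are illustrative assumptions.
from statistics import mean, stdev

def incidence_alert(weekly_counts, z_threshold=2.0):
    """Flag the latest week if it exceeds the historical mean by more than
    z_threshold standard deviations. A crude rule like this is easy to
    misconfigure, which is one source of false alerts."""
    *history, latest = weekly_counts
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu  # degenerate case: any increase triggers an alert
    return (latest - mu) / sigma > z_threshold

# Example: counts of a specific hospital infection over consecutive weeks.
counts = [4, 5, 3, 6, 4, 5, 12]
print("Alert:", incidence_alert(counts))  # True: the last week looks anomalous
```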
4.7 The Ethical Frame for Utilization and Functioning of Artificial Intelligence Technology Graph 4.2 The ethics of technology and domain specific ethics should embrace ethical issues inherent to a specific AI technology
73
ethics of technology
domain specific ethics
ethical assessment for a specific AI technology
developers, providers, and users, is a concept of technology ethics which we need to take into consideration. Moreover, accuracy, safety, transparency, and risk assessment are other values of technology ethics that apply to this situation. That is to say, ethical issues inherent to a specific AI technology require a comprehensive perspective to include the ethics specific to the operation domain, medical ethics in this example, and ethics of technology (Graph 4.2). The third issue grounds on the fact that was discussed in the previous sections; AI technology possesses fundamental differences than conventional technology. We may face ethical issues emerging from this different nature of AI technology, particularly ethical issues which were considered neither in ethics for specific domains nor in ethics of technology. Let us imagine a health centre in which a program with AI technology executes the task of constituting the waiting list for liver transplantation. The expectation of the form the AI entity might be to have utmost impartiality and minimum error for considering several criteria with varying weights for placement in the waiting list. Let us also assume that this AI entity has the capability for deep learning, which would enable it to update the criteria for liver transplantation according to the latest developments in science. Via deep learning, the system may discover the existence of a particular gene variation to improve the medical utility capacity of the patient who receives the transplant, and basing on this information AI system decides to give precedence to patients with higher prospects of success. Likewise, the AI system may learn that patients with low socioeconomic status or from a particular neighbourhood statistically have more alcohol addiction that reduces the chances of pursuing a healthy life after transplantation. Moreover, it may discover that people from a particular race generally live in that neighbourhood and have low socioeconomic status. These data may lead the AI system to conclude that a patient with that particular race, low economic status, living in that particular neighbourhood has statistically low medical and social utility compared to a patient without these properties and therefore may decide to give less
priority to these patients and place them behind others. From the statistical point of view, the reasoning of the AI entity may be justifiable; however, it is entirely unacceptable in terms of fundamental human rights and medical ethics. A conventional computer program designed for constituting the waiting list for liver transplantation would not cause such a problem because it would lack the capability of deep learning. Besides, it would operate according to its predetermined criteria, and only the programmer would be able to change them. This situation is an example of ethical issues that may arise because of AI technology and that require a perspective more comprehensive than domain-specific ethics and the ethics of technology, in this particular example medical ethics and technology ethics. These are the kinds of ethical issues that require ethical assessment in the domain of AI technology itself (Graph 4.3).

Graph 4.3 The position of ethics of AI technology in the new ethical frame

Having said this, we may conclude that intelligent artifacts that are specific to a single domain are, and should be, subject to the ethical norms and principles of that field. Therefore, the answer to the question "should we consider ethical issues of AI in the ethical realm of the domain in which they operate?" is "yes." However, the ethics specific to that domain will presumably not be enough to address all ethical issues emerging because of AI technology in that domain. Hence, we need to develop a broader perspective that considers technology ethics and AI ethics together with the ethics of the particular domain, which together constitute the enhanced/new ethical framework for AI technology.
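To make the waiting-list scenario discussed above more concrete, the following minimal sketch shows how a learned scoring function might rank transplant candidates. It is written in Python; all field names, weights, and data are hypothetical and are not taken from any real allocation system. The point is that once statistically correlated attributes such as neighbourhood or socioeconomic status enter the learned weights, the ranking silently encodes the discrimination described above.

```python
# Illustrative sketch only; field names, weights, and data are hypothetical.
# A learned scoring model might reduce each candidate to a weighted sum of features.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    medical_urgency: float      # 0..1, clinically justified criterion
    tissue_match: float         # 0..1, clinically justified criterion
    neighbourhood_risk: float   # 0..1, proxy attribute learned from biased data
    socioeconomic_score: float  # 0..1, proxy attribute learned from biased data

# Weights the system might converge to after "learning" from historical outcomes.
LEARNED_WEIGHTS = {
    "medical_urgency": 0.5,
    "tissue_match": 0.3,
    "neighbourhood_risk": -0.15,   # statistically correlated, ethically unacceptable
    "socioeconomic_score": 0.05,
}

def utility_score(c: Candidate) -> float:
    """Weighted sum used to order the waiting list (higher score = earlier place)."""
    return (LEARNED_WEIGHTS["medical_urgency"] * c.medical_urgency
            + LEARNED_WEIGHTS["tissue_match"] * c.tissue_match
            + LEARNED_WEIGHTS["neighbourhood_risk"] * c.neighbourhood_risk
            + LEARNED_WEIGHTS["socioeconomic_score"] * c.socioeconomic_score)

patients = [
    Candidate("A", medical_urgency=0.9, tissue_match=0.8,
              neighbourhood_risk=0.9, socioeconomic_score=0.2),
    Candidate("B", medical_urgency=0.9, tissue_match=0.8,
              neighbourhood_risk=0.1, socioeconomic_score=0.8),
]

# Identical clinical profiles, yet patient A is pushed down the list
# purely because of the proxy attributes.
waiting_list = sorted(patients, key=utility_score, reverse=True)
print([c.name for c in waiting_list])
```

Because the proxy weights are statistically "justified" by the training data, nothing inside the optimization flags the ranking as discriminatory; the objection has to come from outside it, which is exactly the role of the enhanced ethical framework described in this section.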
4.7.3 Would Inserting Algorithms for Ethical Decision Making in Artificial Intelligence Entities Be a Solution?

The designer usually predetermines the aim and purpose of a weak AI entity. The entity has no awareness of what it is made for and what role its functions
play in the bigger picture. For example, a weak AI entity produced to provide a just and fair waiting list for organ transplantation may use deep learning to improve its functions, but it cannot extrapolate about the consequences of the list for patients' health status or for the health system in general. In other words, such an AI entity lacks awareness of how the entire health system works and of what its own aim and purpose within this whole system is. Some of the ethical issues discussed above come into existence because of this ignorance or insufficient comprehension.

However, the ignorance or insufficient comprehension of these AI entities may be considered a blessing by those contemplating the ethics of AI technology, because these features enable us to insert algorithms into the original program of the entity so that it can do ethical reasoning within the limited realm of its area of work. We can insert simple if-clause norms such as "do X if and only if it does not cause Y." This formulation can be specified as follows: "do X if it does not harm human beings." To address more complex decisions, we can insert algorithms for decision making. These algorithms would create a consistent framework for the AI entity that would limit destructive or harmful actions. They should be so deeply embedded in the essence of the AI entity's original program that it would not be possible to change or erase them even if the AI entity can modify itself.

Although this seems like a practical solution at first sight, we soon recognize shortfalls in real life that create grave doubts about the practicability of this strategy. For example, consider a robot nurse that is built to draw blood from patients. Let us assume that the robot nurse possesses the visual and tactile sensations required for its job. When the patient comes, the robot nurse asks her in a gentle voice to sit down on the chair and roll up her shirtsleeve so that her brachial vein is exposed. Then the robot nurse explains what it will do, takes the syringe, and aims the needle at the patient's vein. Suddenly, however, its program steps in and stops it immediately, because it detects an action that causes harm to a human being: piercing a vein and causing bleeding, which will apparently harm the patient. One may argue that this problem can be overcome easily by designing the program in such a way as to permit reasonable harm that is inevitable in the course of the robot's action. However, concepts like reasonable harm require a comprehensive understanding of the process. The reasonable harm for drawing blood may be piercing the patient's vein and the bleeding of a few drops of blood after the needle is withdrawn. On the other hand, anyone who has ever been to a blood-drawing unit in a hospital or who has given blood knows that it is not unusual for a nurse to fail to find the vein on the first attempt. If she fails on her first try, she takes her chance on another vein. How many attempts would exceed reasonable harm?

The scenario gets more complicated if we consider a weak AI entity produced for surgery. What would happen if an action is required to save the life of the patient, but the same action carries a significant risk of harming the patient as well? How is the AI entity supposed to solve the ethical dilemma, especially in cases in which the magnitude and probability of risk are high and an immediate decision is required? Would a predesigned algorithm help in such situations?
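A minimal sketch of what such an embedded if-clause norm might look like is given below, in Python; the action types, harm estimates, and threshold are hypothetical and are not drawn from any real robotic system. It also illustrates why a fixed "reasonable harm" threshold is fragile and why escalation to a human may be needed.

```python
# Illustrative sketch only; harm estimates and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    expected_harm: float      # estimated harm on a 0..1 scale
    expected_benefit: float   # estimated benefit on a 0..1 scale
    reversible: bool

REASONABLE_HARM_THRESHOLD = 0.1   # who decides this number, and for which context?

def ethical_gate(action: Action) -> str:
    """Hard-coded 'do X only if it does not (unreasonably) harm a human' rule."""
    if action.expected_harm == 0.0:
        return "PERMIT"
    if (action.expected_harm <= REASONABLE_HARM_THRESHOLD
            and action.expected_benefit > action.expected_harm):
        return "PERMIT"             # e.g. a single needle stick to draw blood
    if not action.reversible or action.expected_harm > REASONABLE_HARM_THRESHOLD:
        return "ESCALATE_TO_HUMAN"  # keep a human in the decision-making loop
    return "REFUSE"

# The blood-drawing dilemma: each failed attempt adds harm, so the same rule
# that permits the first needle stick may block a later one, depending on an
# essentially arbitrary threshold.
first_attempt = Action("draw blood, attempt 1", expected_harm=0.05,
                       expected_benefit=0.6, reversible=True)
third_attempt = Action("draw blood, attempt 3", expected_harm=0.2,
                       expected_benefit=0.6, reversible=True)

print(ethical_gate(first_attempt))  # PERMIT
print(ethical_gate(third_attempt))  # ESCALATE_TO_HUMAN
```

The brittleness is visible in the code itself: every morally loaded judgement has been compressed into a single numeric threshold and into harm estimates that the entity does not deeply understand, which is precisely the shortfall discussed above and the reason the following paragraphs argue for keeping humans in the loop.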
AI entities designed for healthcare services may be subject to these kinds of complex decisions. The complexity of the decisions may emerge from several facts:

1. It would be tough, if not impossible, to identify a clear causal relationship between medical interventions (decisions and actions) and health outcomes (consequences).
2. Determining the exact probability and magnitude of harm may not be possible. An intervention that is usually well tolerated by most patients may be lethal for another.
3. Risk–benefit assessment in health services requires a case-based approach that takes the patient's circumstances (social, cultural, religious, and economic factors) and personal preferences into account.

Making these kinds of decisions requires extensive scientific knowledge and technical ingenuity in the domain, together with a profound comprehension of the factors to be considered in the decision-making procedure. Therefore, even if we insert algorithms for ethical decision making, the unique properties of any given case may require novel approaches custom-made for that case, which tells us that algorithms alone may not provide adequate tools.

One may argue that physicians also struggle in cases of ethical dilemmas, especially when the consequences of their actions are a matter of life and death. It is plausible to say that it would be tough for a neurosurgeon to take immediate action in the operating room if she faces an unexpected situation in which her next decision may lead either to the death or to the well-being of the patient. Medical professionals frequently encounter ethical dilemmas while carrying out their profession. If the decision need not be taken as urgently as in the previous example, the physician has more time to consider the situation, find the right course of action, and consult her colleagues or an ethics committee if she cannot find a way out. This points to a solution for cases in which the ethical algorithms inserted into the original programs of AI entities fail or struggle to solve an unexpected ethical dilemma during execution. Depending on the magnitude and probability of the risks and the urgency of the decision, human beings may be invited to take part in the loop, so that they can step in and help to solve the problem.

For these kinds of complex cases, we should revisit the discussion we made for autonomous military drones. In the last part of that discussion, we concluded that if an execution is irreversible and would result in the death of people, and if the ethical equipage of the AI entity is not enough to ensure the absolute correctness and accuracy of the ethical decision, then human experts should be kept in the decision-making loop. Ethical decisions lack the essential qualification of being absolutely correct and accurate, and expecting such excellence from any intelligent agent, human or non-human, would be contradictory to the ordinary course of life. Hence, we should not expect from AI entities an ethical perfection that does not exist in real life. On the other hand, we should take every chance to enhance the possibility of reaching the best possible ethical decision. In some cases, this can be achieved by involving humans in the ethical reasoning procedure so that fundamental human rights such as the right to life, respect for human life, and justice would be
protected when it might otherwise seem more feasible or reasonable for an artificial ethics algorithm to sacrifice these values. For example, it would be plausible for an AI entity to conclude that the life of a patient from a specific ethnic group, in which life expectancy is shorter due to high rates of drug and alcohol addiction, should be sacrificed for the sake of another patient who has no such linkage in her personal history, by allocating the available liver to the second patient. This decision, although it might be feasible and reasonable in light of the statistical data, overlooks the rights of the first patient, including the right to life, and may destroy justice. Human beings' involvement in the loop might ensure the protection of fundamental human rights and common sense, so that ethical decisions become more appropriate and fundamental rights and values are protected (Graph 4.4).

Graph 4.4 The new ethical frame for AI technology: human oversight and fundamental human rights and common sense surround the ethics of AI technology, the ethics of technology, and domain-specific ethics in the ethical assessment for a specific AI technology
References

Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.
Asimov, I. 1950. I, robot. New York, NY, USA: Gnome Press.
Beauchamp, T.L., and J.F. Childress. 2001. Principles of biomedical ethics, 5th ed. Oxford, UK: Oxford University Press.
Boden, M. 2018. Artificial intelligence: A very short introduction. Oxford, UK: Oxford University Press.
Bostrom, N. 2014. Forms of superintelligence. In Superintelligence: Paths, dangers, strategies. Oxford, UK: Oxford University Press.
Dignum, V. 2018. Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology 20: 1–3.
Feenberg, A. 2003. What is philosophy of technology? Lecture for the Komaba undergraduates. https://www.sfu.ca/~andrewf/books/What_is_Philosophy_of_Technology.pdf.
Franssen, M., G.-J. Lokhorst, and I. van de Poel. 2018. Philosophy of technology. In The Stanford encyclopedia of philosophy (Fall 2018 edition), ed. Edward N. Zalta. https://plato.stanford.edu/archives/fall2018/entries/technology/.
Harari, Y.N. 2018. 21 lessons for the 21st century. New York, NY, USA: Random House Publishing Group.
The Montréal Declaration for Responsible Development of Artificial Intelligence. 2018. https://www.montrealdeclaration-responsibleai.com/.
Sisson, P. 2018. Rady Children's Institute sets Guinness World Record. San Diego Union-Tribune. https://www.sandiegouniontribune.com/news/health/sd-no-rady-record-20180209-story.html.
Tegmark, M. 2017. Life 3.0: Being human in the age of artificial intelligence. New York, NY, USA: Alfred A. Knopf.
The Ethics Guidelines for Trustworthy AI. 2019. Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission.
Topol, E. 2019. Deep medicine: How artificial intelligence can make healthcare human again. New York, NY, USA: Basic Books.
Wheeler, D.A., M. Srinivasan, M. Egholm, et al. 2008. The complete genome of an individual by massively parallel DNA sequencing. Nature 452 (7189): 872–876.
Williams, R., and D. Edge. 1996. The social shaping of technology. Research Policy 25 (6): 865–899.
Chapter 5
Artificial Intelligence in Healthcare and Medical Ethics
Ethics is a generic term for understanding and examining the moral realm. It has two dimensions: normative and non-normative. Non-normative ethics comprises descriptive ethics, the factual investigation of moral conduct and beliefs, and meta-ethics, which analyses the language, concepts, and methods of reasoning in ethics. Normative ethics, the dimension in which we will operate in this chapter, constitutes the prescriptive dimension of ethics by defining ethical norms which should guide us so that our conduct is morally right. These norms are reasoned and justified within the frame of several ethical theories. Practical ethics applies these general ethical norms and principles to specific areas of conduct, such as professions. Medical ethics is one of the specific areas of practical ethics; it addresses the moral issues of medicine and health services.

We can trace the archaic roots of medical ethics back to the eighteenth century BCE, to the code of Hammurabi, the famous king of Babylon. His ethics of medicine were embedded in the 282 laws of his code and prevailed in various Sumerian cities. Vaidya's oath was another ancient code of medical ethics, drawn up by Hindu physicians around the fifteenth century BCE. The Hippocratic Oath, assumed to have been written around the fifth century BCE by the legendary physician Hippocrates, has been the best-known medical code defining, in the abstract, the principles of a good doctor. The term "medical ethics" was first articulated by Thomas Percival in 1803 in the title of his book on the code of institutes and precepts adapted to the professional conduct of physicians and surgeons. His book was a product of the request from his colleagues to prescribe norms and principles to regulate relationships among physicians, surgeons, and apothecaries. This book had an extensive influence in the modern world and paved the way for the Code of Medical Ethics drawn up by the American Medical Association in 1847 (Boyd 2005).

Non-maleficence and beneficence found a voice in most of these codes regardless of their date of issue. Moreover, emphasis was placed on preserving moral values and acting in accordance with them; among these values are respect for the privacy and confidentiality of the patient and the virtues expected of a good physician. Beauchamp and Childress condensed the principles of medical ethics of our era under four main titles (Beauchamp and Childress 2001):
1. Non-maleficence
2. Beneficence
3. Respect for autonomy
4. Justice.
Respect for autonomy and justice are relatively new principles that did not catch the eye of professionals before the twentieth century. It is plausible to say that these two principles were solidified by the events of the last century, in parallel with vast advancements in biomedical technology and with the growth in the number and scope of research studies conducted in this field. Moreover, there is a set of ethical values that define the morals of a virtuous physician. The core values on which most medical professionals and academicians agree are listed below:
1. Veracity
2. Privacy
3. Confidentiality
4. Fidelity.
The rules of medical ethics are derived from these principles and values. The main difference between ethical rules and principles is that rules are more specific in content and scope. We can derive an ethical rule specific to an application area by specifying the content of a principle for a particular context or a cluster of cases, so that it can guide our actions while we operate in that area of application. Beauchamp and Childress define specification as "the process of reducing the indeterminateness of abstract norms and providing them with action-guiding content" (Beauchamp and Childress 2001). Ethical rules guide physicians and medical professionals when they are about to make a moral decision about an ethical issue or an ethical dilemma that arises during their practice. Rules can also serve as a checklist for reviewing the legitimacy and appropriateness of the reasoning process and the final decision.

Medical professionals take courses to learn about medical ethics principles, values, and rules. Although the content and scope of these courses vary significantly among medical schools, it would not be erroneous to say that physicians receive ethics courses at some point during their undergraduate or residency training (Ekmekci 2016; Ekmekci et al. 2015). Moreover, they have opportunities to discuss how to handle ethical issues arising in their area of expertise during their professional practice by attending scientific congresses, symposiums, or other types of professional gatherings. Therefore, it is plausible to say that physicians are generally acquainted with the norms and values of medical ethics. Furthermore, the consensus around some medical ethics principles is so substantial that they have become law. In this respect, acting in accordance with these principles and rules is not only a requirement of moral conduct but an obligation of law as well. Any medical professional who acts against these laws may be sued and face sanctions for their deeds. The principle of non-maleficence, the protection of privacy and confidentiality, justice, and respect for autonomy are some of the principles that are reflected in medical law.
Medical ethics and medical law face a challenge with the dissemination of AI technology in medical practice. These challenges range across a broad spectrum. At one end of the spectrum are ethical issues emerging from the use of AI technology in particular areas of practice in clinical settings, between physicians and patients or hospitals; issues about data privacy and confidentiality, fidelity, and trust are some of them. Another group of issues clusters at a meta-level: justice and fairness in access to AI technology in healthcare; the replacement of physicians by AI technology in decision making or service provision; and uncertainty about trustworthiness, impeccability, accountability, and transparency, together with the magnitude and probability of the risks arising from these unknowns. In this section, we will address ethical issues and dilemmas caused by AI technology becoming prevalent in medicine. We will go through the principles of medical ethics and the core values of physicians and reflect upon them.
5.1 Non-maleficence

The phrase "first do no harm" has been the best-known ethical principle among medical professionals. Although there is no hierarchical ordering among the four main principles of medical ethics, non-maleficence seems to have a primus inter pares position. This position may result from the unique nature of the physician–patient relationship. This relationship involves an intimacy in which the patient consigns her body, soul, and data to her physician. This consignment makes the patient vulnerable and susceptible to harm and is possibly the main reason why the principle of non-maleficence turns out to be the prominent ethical rule for a physician. Practicing physicians should be aware of the significance of this fiduciary relationship and act with caution, precision, and sensibility so as not to induce any harm.

Another reason for the significance of the principle of non-maleficence is the fact that it has broad implications for the other principles of medical ethics. For example, the principle that obliges physicians to respect the autonomy of patients does so because acting otherwise would cause harm by impinging on the rights and dignity of the patient. Likewise, impairing the privacy and confidentiality of the patient brings about infringements of personal private data, which is a fundamental constituent of her being. Moreover, breaches in confidentiality might bring about severe and undesired social, economic, and domestic consequences. Therefore, it is plausible to say that most of the principles and rules of medical ethics originate from the devotion to protecting the patient from harm.

The principle of non-maleficence and its implications for several other principles urge us to rethink what harm means. The explanations above imply that harm may be conceptualized in various ways; hence, we have to adopt a broad perspective when reflecting on this concept. Medical harm can be defined as physical, psychological, or social damage done to the patient because of faulty actions such as a wrong diagnosis or inappropriate treatment.
Lack of knowledge, inexperience, or simply sloppy conduct may result in medical harm. That is why medical professionals go through rigorous training and stringent exams before they receive accreditation to practice as professionals. However, this is not enough. They have to keep their knowledge updated and retrain themselves to catch up with the latest developments in their areas of expertise. On the other hand, technical and logistical inadequacies in health facilities may also result in medical harm. Therefore, the organization and financing of health systems are important aspects of avoiding medical harm to patients.

Physicians, health personnel, or healthcare facilities can also harm patients by breaching their privacy and confidentiality or disregarding their autonomy. We will call this "personal harm" because it embodies the potential to damage patients' rights, dignity, and honour. The implications of using AI technology in healthcare for personal harm will be discussed under the headings of privacy and confidentiality and respect for autonomy, respectively.

Apart from medical and personal harm, the vulnerability of the primary paradigm of health service provision should be considered when discussing potential harms invoked by AI in healthcare; this constitutes a "paradigm shift risk." A paradigm is a comprehensive model of understanding and a perspective for conceptualizing a field. In our case, the field is healthcare services. Medical ethics principles, rules, and values are determined within the discourse of the existing paradigm. In general, most paradigm shifts are risky to some extent, while some of them are harmful. Thomas S. Kuhn argued that there are always competing paradigms in science; as one paradigm gains more weight in terms of supporting evidence, it becomes the primary paradigm of that era. Most of the time, paradigm shifts are induced by scientific or technological improvements in the field. The existing paradigm may change gradually in line with newly discovered facts, or a new paradigm may immediately replace the old one because of high prospects of greater utility and benefit. Another reason for an immediate paradigm shift may be a discovery that proves the wrongfulness of the whole existing paradigm in that area.

The use of AI technology in healthcare is progressing rapidly. As will be discussed in the next paragraphs, this progress is about to change the whole paradigm of health service provision in some areas of medical expertise. Ethics requires awareness of the risks of this potential paradigm shift, so that if a high risk of harm to the sustainability of health services, to progress in science, or of physicians losing abilities in ways that may endanger patients is foreseen, the shift can be halted or modified to mitigate these risks before it reaches an irreversible stage.

AI technology has offerings for all areas of medical expertise regardless of their function in health services: diagnosis, treatment, rehabilitation, palliative care, or elderly care. However, two areas of expertise show higher promise for AI involvement compared to others: radiology and pathology. AI technology has been improving tremendously in the evaluation of radiologic images, and this improvement is the reason radiology is one of the pioneering areas of expertise in which AI technology will be used with near or above human-level precision.
Geoffrey Hinton, a well-known computer scientist and one of the three recipients of the 2018 Turing Award, predicts that AI technology will be so successfully
and widely used in radiology that we should stop training radiologists today, since we will not need human radiologists in the next decade. Similar predictions are made for pathologists. The argument is that AI technology will be capable of recognizing pathologic signs in specimens more precisely and quickly than human pathologists; hence the service of human pathologists will soon no longer be required. These developments imply that AI technology is advancing so rapidly in these health service areas that it will soon displace the unique role of human physicians and take over the responsibility for decision making. For example, no human radiologist would be required to diagnose cancerous formations in mammography or to identify an abnormality in foetal ultrasonography. Likewise, non-human AI systems would process and deliver the pathology results for any tissue that has to undergo pathological examination.

There would be several advantages to these developments. First, the precision of diagnosis will improve, so that the number of undiagnosed cases will drop. This may avoid medical harm emerging from underdiagnosis. Moreover, these systems will have the capability to improve their performance through deep learning by going through every radiologic image or pathological specimen presented to them. Their ability to learn by processing a massive amount of data, far above the limit a human can examine in her entire life, is the source of their precision in performance. This ability will continue to be the source of improvement in their performance, meaning that errors, inaccuracy, and inexactness in their output would diminish over time. Besides, the time needed for the evaluation of radiologic images or pathologic specimens would shorten. Since AI entities do not need lunch breaks, a full night's sleep, or coffee breaks to perform at full capacity, they will be able to produce reports 24/7, which would be a massive advantage for patients' wellbeing, especially for time-sensitive conditions such as acute neurological disorders that require emergency care. As these technological entities come into wide use, they would become more affordable, and this would enhance their accessibility. Even patients in remote areas may benefit from such technology, since their physicians may upload the patients' images to the system, and the AI technology in the medical centre would evaluate them and inform the physicians about the diagnosis. This system would benefit people who have limited access to advanced health services because of the remoteness of their homelands. If we think more thoroughly, we can find several other medical risks and harms that would possibly be avoided by the extensive use of AI technology in diagnostic health services such as radiology or pathology. These arguments imply that we will have the chance to reduce medical harm if we extend the use of AI technology in particular medical areas soon.

On the other hand, we should be sceptical about any medical harm that might be induced by the extensive use of AI technology in these health services. Since we have been conducting our discussion on radiology and pathology as the pioneering areas in which AI technology would take precedence over human physicians, we will continue to use these two areas as the basis of our exploration of the risk of harm. To determine any risks of potential harm, we will move our attention away from the glittering
advantages of replacing human radiologists and pathologists with AI technology and carefully scrutinize all changes that will come with this new system of diagnosis.
5.2 Change of Paradigm in Health Service

The replacement of human physicians by AI technology implies a paradigm shift in health service provision. Health services depend on the physician–patient relationship. As briefly discussed above, this relationship has a particular nature. Patients, when they come to the hospital, hold an implicit belief that they will be understood and helped in this facility. This belief embraces the conviction that physicians and other health personnel will do their best to diagnose, treat, or rehabilitate without breaching the rights and dignity of the patients. Health personnel, especially physicians, should treat patients with diligence and never forget that each patient is a distinct individual. The social, cultural, and economic status of every patient, and variables such as age, sexual orientation, world view, philosophy of life, and religion, may differ, and physicians should respect these diversities while approaching their patients. Being ill means not being in the best state of ourselves, and that is when we need compassion, fidelity, and genuine care.

This unique nature of being a patient and of the physician–patient relationship constitutes the main reason why Joseph Weizenbaum, the man who developed the first program which used AI to carry out a psychiatric interview session with a real patient, suggested that we should exclude AI technology from areas in which interpersonal respect and genuine interhuman relations are essential. We have examined his perspective on the ethics of AI technology in the previous section on bioethics and AI. Healthcare fits this description. It is plausible to think that Weizenbaum's suggestion depended on foreseeing the paradigm shift in healthcare that we have been talking about, a shift that has the potential to damage the human intimacy and solidarity that have been essential components of the healthcare domain since the beginning.

At this point, one may reasonably refer to the two areas of expertise we have been discussing, radiology and pathology, and argue that these two areas require very limited contact, if any at all, between physician and patient. Patients barely see the faces of the pathologists who evaluate their specimens. Most of the time, the patient's acquaintance with the pathologist is limited to seeing her signature under the report from the pathology laboratory. Likewise, radiologists spend most of their time at their desks examining images on their computers and writing reports. They hardly meet the patients or the patients' relatives or need to communicate with them. There are several other areas in medicine which do not require a close physician–patient relationship. So, is it reasonable to argue against the risks of the paradigm shift caused by the infiltration of AI technology into healthcare that we have been evaluating? The answer is both yes and no. It is yes because, if the provision of a health service does not require building a physician–patient relationship, then we are not risking anything. The lack of physician–patient contact in pathology supports this argument.
However, things are not that black and white in all health services. Medicine is improving every day, and physicians enhance their capabilities to do more in their areas of expertise. In this respect, owing to the latest developments, a new sub-branch of radiology has emerged which makes it possible to combine diagnosis with treatment. This new branch is called interventional radiology. Physicians aim to reach the part of the body to be treated with exact precision by using the guidance of diagnostic images to manipulate fine catheters. Interventional radiologists have no less contact with patients and patients' relatives than any surgical or clinical expert. Therefore, it is not factually right to say that radiologists do not have contact with patients, and because of this, our answer to the previous question becomes "no."
5.2.1 Abolition of the Consultation Process

The example of the development of interventional radiology points to another significant reason for not replacing human physicians with their AI counterparts. The perspective that led to the use of diagnostic imaging as a tool for performing precise treatment is a result of human creativity. It is the same creativity that has rendered every improvement in medicine possible. The first physician was the first human being who helped another to reduce pain. The idea of performing an intervention to reduce the pain of another requires observation, the processing of empirical knowledge, the generation of a solution to change the existing state of sickness, and the capability to perform the intervention. By trial and error, the ancestors of today's physicians found ways to cure their patients. However, improvement in medicine and science requires more than trial and error. Constant curiosity to know more, creative thinking, and a will to improve are essential for progress. In other words, if at some point in time physicians had been permanently content with what they knew and could do and had had no intention of doing better and more, medicine and medical science would not be at their current stage of development.

Now let us go back to our discussion cases again. The suggestion was to stop radiology or pathology residency training, since we will be replacing these specialists with AI technology artifacts in the next decade. It is beyond discussion that the precision of AI artifacts in diagnosing pathologic formations would be much better than that of their human counterparts, since even today's literature shows that AI technology artifacts perform with lower rates of underdiagnosis and false positives, together with higher diagnostic precision, compared to human physicians. It appears certain that AI technology artifacts will be much better experts at diagnosis, but what about the qualities needed for improvement and development: the curiosity to learn more and investigate deeply, and the creative thinking needed to transfer scientific developments from one area to another? Would it be possible to develop a creative idea like interventional radiology if we had only AI artifacts correctly evaluating images?

Since the beginning of this chapter, we have been focusing on the unique nature of the physician–patient relationship. However, there is another essential relationship in medicine, one which constitutes a keystone of decision making and scientific development in medicine: the relationship among physicians. The interaction
between physicians, sharing experiences, consultation on challenging and diverse cases, conducting research, and disseminating results enable them to learn more and do better. The enormous scientific medical literature, international, regional, and national conferences, symposiums, case discussions, and ad hoc clinical committees in clinical settings are all tools for communicating and learning from each other.

How is this learning process possible? What is its basis? It is self-awareness of the reasoning algorithm. Human beings are aware of the algorithm they use while they think and decide. Let us go back to our radiology example. A radiology resident learns which images, signs, and reflections constitute diagnostic evidence and gains the ability to give meaning to them in deciding on the diagnosis of the patient. If asked, she can clearly explain her decision process. This explanation process makes it possible for her to discuss and learn from her peers. By going through her explanation, others can check the soundness of the data she used as evidence, the consistency of her thinking process, and the validity of her inferences. If they suspect any flaws, they can ask her for further explanation and come to a mutual understanding. Through this communication process, all parties can ask questions, express ideas, and learn from each other, so that the transfer of knowledge and know-how becomes possible. This whole process paves the way for the formation of new perspectives, inquiries about improvement, and creative thinking.

In the case of AI technology artifacts, we lack this process of communication. At this point, deep learning, which has been a blessing for the improvement and development of AI technology to an expertise level superior to humans, becomes a curse, because we do not know how AI artifacts think. As explained in the previous paragraphs, we are in deep darkness regarding their decision algorithm. We have no chance to communicate with them about how they decide, the appropriateness of their reasoning flow, the validity of their evidence, or the correctness of their inferences. In this respect, the term black box, used to describe the unknown operating system of AI artifacts with deep learning capabilities, is well suited to the darkness we are in regarding AI's thinking process. This lack of any possibility of communication is one of the leading aspects of the paradigm shift we have been discussing. AI radiologists that have access to billions of images and the capability to improve their decision-making algorithms operate in their own universe and provide us with an output. In this scenario, we are told by science that the output is correct because the AI system works with higher precision than any human being, which implies that we are not in a position to challenge this output or ask for validation. Moreover, if we stop radiology residency training, there will be no experts left to ask any questions. Our role becomes limited to taking the report, reading what is written on it, and accepting it as fact. This process is a manifestation of technological singularity, which leads to the permanent and severe debilitation of human beings in this area of medical expertise. The question we have to ask is whether this is good or bad.
5.2.2 Loss of Human Capacity

Losing an ability does not have positive connotations. However, one may argue that history has witnessed human beings losing many of their abilities in the course of development and advancement. Our ancestors could run faster than many animals, climb trees, and survive under harsh natural conditions. We do not have these skills anymore because we do not need them anymore. This loss is a consequence of natural adaptation. We develop technology and use it as a tool to do the job for us, or at least to make it easier, so that we can redirect our abilities and energy to other areas of need. Would it not be possible to regard the entry of AI technology entities into some areas of expertise, doing the job for us, as a development and a chance to redirect our efforts to other areas? Is that not the whole idea behind technological advancement?

Having technological tools that make life easier for humans is a good thing. Likewise, having AI technology as a means to provide more precise health services is also good. However, establishing a technological singularity in an area such as a branch of medicine and stopping all human involvement carries serious risks. It is evident that we can develop AI entities with the potential to replace humans in at least some branches of medical practice. Besides, we can foresee the vast benefits of this replacement as well, and apparently we are motivated to proceed as fast as possible to obtain those advantages. On the other hand, we should be at least as eager to anticipate the possible risks and harms of this transition, since it would be too late if we realized them only after human beings had been cleared from the field of expertise.
5.3 Privacy and Confidentiality

Privacy is a broad concept. Generally, its conceptualization has two main dimensions. The first is rights-based. It positions privacy as a right to control others' access to oneself. In this respect, privacy is defined as the right to be free from being observed or disturbed by others, or the right to keep one's personal life or personal information exclusive to oneself. These two definitions are grounded in the idea that each person should have an absolute right to a personal space in which she realizes her physical, informational, and relational existence, and that it is up to her to let anyone into this personal space. In this context, a person who has a right to privacy should have an awareness of her personal space and the capacity to decide whom to allow in or keep out.

The second dimension places emphasis on the value of privacy rather than on personal rights. The idea behind this dimension is that privacy does not require awareness or capacity. Privacy should be respected even when the person is unconscious. For example, medical professionals should protect the privacy of patients in palliative care or long-term care facilities, even if these patients cannot claim rights regarding their privacy.
Both dimensions of the concept of privacy embrace five different forms of privacy:

1. Informational privacy: This form of privacy is often addressed in the context of confidentiality. Personal information is a fundamental element of the person's being, and as stated above, the right to privacy gives the person herself the exclusive privilege of deciding to whom to disclose this information. In medical settings, patients need to deliver some of this personal information to the physician and other health personnel. Moreover, the patient's personal information is kept in electronic health records. If the health facility does not have sound regulation of who has access to these health records, the confidentiality of the information can easily be breached. In this respect, informational privacy is closely related to the protection of patients' confidentiality.
2. Physical privacy: Addresses the privacy of the physical being and the personal physical space of a person. Patients waive this privacy to some extent so that their physicians can examine or treat them. Physicians have the privilege to override this privacy if the patient lacks the capacity to consent and doing so is strictly required to save the patient's life. Otherwise, breaking into the physical space of the patient is a violation of the patient's privacy and autonomy.
3. Decisional privacy: Defines the exclusive right to make personal choices, such as giving or withdrawing consent for a medical intervention. This form of privacy is covered under the concept of autonomy. In medical ethics, respecting the autonomy of the patient is one of the main principles and should be protected with diligence.
4. Property privacy: Refers to the property interests of the person. In medical ethics discourse, this type of privacy is often discussed in the context of medical care or long-term care for elderly patients.
5. Relational privacy: This form of privacy embraces the intimate relations a person chooses to build with others. It asserts that the person has the exclusive right to decide with whom she will affiliate. In medical ethics discourse, this form of privacy becomes critical in the decision-making procedure. Patients should be free to choose whom to involve in their inner circle of intimate relations while making up their minds about which medical interventions to consent to. Relational privacy also becomes a prominent issue in information disclosure. Again, the patient herself has to be the one to decide to whom her personal information will be disclosed.

The use of AI technology in medicine carries risks mainly for informational privacy, relational privacy, and decisional privacy. Breaches of the privacy and confidentiality of personal data may occur in two main settings. The first is infringements of facility-based electronic health records. The Health Insurance Portability and Accountability Act (HIPAA) regulations in the US and the General Data Protection Regulation (GDPR) in the EU provide the minimum requirements for a healthcare facility to avoid breaches. However, confidentiality breaches are not precluded despite these regulations. On the other hand, no national regulation for data security exists in several countries. In some countries there are regulations, but they lack ruling power, so that they
are not respected in practice. Therefore, it is plausible to say that even without the involvement of AI technology, patient data security and confidentiality is a problematic issue in medicine. The second setting in which privacy and confidentiality breaches are seen is research involving data mining.

Confidentiality is a concept related to informational privacy, but it has particular distinctions too. A confidentiality breach occurs when a patient's private information is disclosed to an unauthorized third party. This may take place because of insufficient data security measures: an employee may gain access to patients' personal data and convey them to other parties. Another ubiquitous cause of breaches of the confidentiality of patients' data is research. Data-mining research uses patients' data to produce new information. It does not use the patients themselves as research subjects, but their existing data. This data may be collected during patients' regular visits to the hospital and recorded in electronic files. Of course, in this case, patients have no idea that their personal information might be used in research, since using it that way was not the health facility's intention in collecting their data in the first place. On the other hand, conducting research on the patient and conducting research on her data are the same in ethical terms. Personal data is a fundamental asset of the person; hence, any intervention on personal data requires the same consent as research on the physical being of the patient. Besides, the person is considered a "participant" in the research, and the research is classified as human subject research (Berman 2002).

AI technology is widely used in data-mining research. Data anonymization enables us to overcome privacy issues and confidentiality breaches. The GDPR defines anonymization as "the process of stripping any identifiable information, making it impossible to derive insights on a discrete individual, even by the party that is responsible for the anonymization." When data is anonymized, there is no connection between the data and the person. AI technology uses big data in two main domains: deep learning and generating algorithms for health services. As explained above, electronic health records provide a critical source for deep learning. Once this data is anonymized, issues of confidentiality and privacy are significantly mitigated. However, in the case of research and of developing algorithms for clinical medical interventions, severe ethical issues remain.

The first issue is stigmatization. Let us think of a hypothetical case in which AI technology uses big data to create an algorithm for a particular genetic mutation. Several health facilities provide data to the research centre. The data is anonymized to avoid any confidentiality breaches, but the algorithm can recognize which centre provided a given set of data, so that if the quality or nature of the data from a centre does not meet the standards, it can exclude all data from that centre. At some point, the AI technology notices that a unique genetic mutation accumulates in a particular data set. It also finds the co-existence of a psychiatric disorder with that particular genetic mutation. The output that the AI technology produces reveals that the unique mutation co-exists with that particular psychiatric disorder and is prevalent in the data provided by a health centre that collects data where the members of a particular tribe live.
Please note that the members of that tribe did not consent to participating in any research, and the researchers thought that they did
not have any confidentiality issues since the data was anonymized. However, the output reveals a crucial health issue concerning the members of that tribe. Once this information is made public, they are susceptible to being stigmatized.

The second issue is related to incidental findings. Imagine a scenario similar to the previous one, but this time the AI system is working to develop an algorithm for the risk factors of cardiac disease. Data is collected from four health centres and anonymized. The quality of the anonymization meets the criteria of the GDPR, meaning that even the party responsible for the anonymization has no way to derive insights into the identity of any individual. The system cannot identify the source centre of the data, so that any confidentiality issue such as the one in the previous example is avoided. The AI system develops the algorithm and specifies three risk factors. The output of the AI algorithm shows that the existence of the three risk factors together increases the patient's mortality significantly. This is essential knowledge for the diagnosis and treatment of patients who will seek medical care after this discovery, but what about the patients whose data was used to come to this conclusion? The AI system can go through the data and filter out the ones who have all three risk factors together. That is to say, we can find out which data sources are at risk of high mortality, but since the data sources are anonymized, we know them only by their codes. On the other hand, these coded data represent real individuals with real lives. We took their data, anonymized it, and used it to generate knowledge for the benefit of humanity. Although we could use this knowledge for the benefit of the patients whose data we used, we do not, because the AI system recognizes them only as data sources, and we have no means of identifying which individuals are in the group with high mortality risk. One can argue that we have no confidentiality issues here and that, therefore, from the ethical point of view we are on solid ground. However, it is hard to deny that when we go through this case we feel an uneasiness that we cannot dispel simply by meeting the requirement to anonymize the data. We did indeed protect the confidentiality of the patients, but we broke two ethical rules: the principle of justice and the principle of not merely using human beings as a means to an end.
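As a purely illustrative aside, a minimal sketch of stripping identifiers from a record, in the spirit of the anonymization definition quoted above, might look as follows. The field names are hypothetical, and the sketch is closer to pseudonymization than to true anonymization, since quasi-identifiers such as age, neighbourhood, or rare diagnoses can still re-identify individuals after direct identifiers are removed; that gap is one reason the ethical issues discussed in this section survive the anonymization step.

```python
# Illustrative sketch only; field names are hypothetical. This is closer to
# pseudonymization than true anonymization, which must also handle
# quasi-identifiers (age, postcode, rare diagnoses) that allow re-identification.

import hashlib

DIRECT_IDENTIFIERS = {"name", "national_id", "phone", "address"}

def strip_identifiers(record: dict, salt: str) -> dict:
    """Remove direct identifiers and replace the record key with an opaque code."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # The opaque code is useful for research joins, but if the salt is retained,
    # the link back to the person can be restored, so this is not true anonymization.
    cleaned["record_code"] = hashlib.sha256(
        (salt + record["national_id"]).encode()
    ).hexdigest()[:12]
    return cleaned

patient = {
    "name": "Jane Doe", "national_id": "12345678901", "phone": "555-0100",
    "address": "Example Street 1", "age": 54, "diagnosis": "arrhythmia",
}
print(strip_identifiers(patient, salt="research-project-42"))
```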
5.4 Using Human Beings as a Means to an End

Immanuel Kant, the founder of deontological ethics, said: "Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end, never merely as a means." Kant thought that human beings have an inherent value, which emerges mainly from the capability of reasoning. This capability enables human beings to discover universal laws of ethics and to act according to these laws. Therefore, he stated, no human being should be used merely as a means to an end, without exception. However, this argument has been challenged by various situations in which human beings sacrifice themselves or others for higher causes. For example, the group of scientists who had to enter the inner chamber of the explosion area to clear away the
radioactive debris in Chernobyl had to sacrifice themselves and other personnel for the salvation of humanity. Although these people's autonomous consent to sacrifice themselves for the sake of humanity would be praised by most people as a heroic act, it would be considered wrong by Kantian philosophers because of the principle of not using humans as a means to an end. The contemporary Kantian perspective overcomes this challenge by revising the rule as follows: "although treating human beings as a means to an end is wrong, it may be acceptable in cases when other ethical principles apply." For example, in our case about Chernobyl, the expected benefit for the whole of humanity is so high that the principle of utility may outweigh the principle of not using humans merely as a means. However, it should be kept in mind that weighing any ethical principle against the benefit or utility of a greater number of people may create a slippery slope, in which the main ethical principles come to be undervalued in many cases.

In our case about the AI system and incidental findings, patients were used as a means of collecting data. When the patients came to the hospital, their motivation was to have access to health services based on their needs. They had no idea that their data would be used for purposes other than providing benefits to them. After they received the health care they needed and left the hospital premises, they probably assumed that the hospital would keep their data safe until their next admission and then use these records for their benefit. Patients' assumptions about the uses of their health data rest on their trust in their physicians and on a common-sense understanding of the function of health facilities: to help and cure patients. On the other hand, the core aim of the research is also to help and benefit patients in need. Therefore, although patients were used as a means of collecting data, that was done for a more significant cause. Also, the patients' privacy and confidentiality were protected by anonymizing the data. Hence, it would be plausible to apply the contemporary Kantian perspective and allow using humans as a means for a good cause while paying attention not to cause additional harm.

Our justification for using patients' data as input for machine learning depends on balancing the principle of beneficence against confidentiality and privacy. The expected benefit outweighs the confidentiality and privacy breaches, since we take precautions not to cause unnecessary harm by anonymizing the data. However, the whole reasoning collapses when the system comes across incidental findings. The balance between the principle of beneficence and confidentiality and privacy is destabilized, because the risk of harm enters the equation. Although the patients' data were dehumanized by anonymization, this does not change the fact that the data belong to actual human beings. Our justification becomes fallacious when we withhold a piece of information that would benefit patients or prevent harm to them. A counter-argument would assert that anonymization breaks all bonds between patients and the data, so that the data no longer represent actual human beings but are merely input for AI systems. Therefore, it would be absurd to apply moral values and principles of medical ethics such as liability, fidelity, avoiding harm, or providing benefit to this case.
This counter-argument overlooks the fact that medical ethics has implications for any operation related to medicine and healthcare systems, including the deep learning processes used to develop AI systems that operate in healthcare. The main
concern of research and improvement in the medical sciences is to avoid harm and to provide benefit for human beings. Therefore, reducing personal data, an essential element of human existence, to a mere data repository damages human dignity and diminishes the value of human beings.
5.5 Data Bias, Risk of Harm and Justice

Data bias and inequalities in access to technology constitute the main issues of justice for AI in medicine. One of the most significant steps in the development of AI technology was neural networks, because they enabled AI artifacts to recognize non-quantitative things such as images and sounds. Before neural networks, AI technology could compare two images and determine whether they had the same pixels, but it could not recognize the same bird in both images. This inability was a major obstacle that had to be overcome: without being able to define and recognize non-quantitative data, AI technology could never operate in a wide range of domains, which would restrict its use considerably.

The first step for deep learning and image recognition was to develop a vast public data set of real-world images. The second step was to categorize these images under a schema that locates the hierarchical position of each image, called nested category fields. For example, in mammography images, a calcification would be nested under pre-cancerous/cancerous formations, which in turn is nested under pathologic images. The third step was to apply neural networks to these classified images. The features of an image to be recognized are converted into representative quantitative data. These data are sent to each neuron in the first hidden layer. Each neuron combines the numbers it receives to determine whether the input contains particular components, such as a curve or a line specific to the image, and combines them with the quantitative data. The result is transferred to the next hidden layer of neurons, and the process continues in the same way until the final layer recognizes, with the highest probability, what the image represents. What we call the recognition of the image is just the highest-probability output provided by the AI entity.

This whole process is a black box. It is black because we do not know precisely how the neurons in each hidden layer work, and a box because we do not see how many layers there are. In this context, deep learning can be defined as the process of assessing the input data and deriving the functions that produce the output. As the number of hidden layers increases, the network of neurons gets deeper, deep learning becomes more complex, and how the system operates becomes more of a mystery to us. For example, when an AI system with a deep neural network provides an output report stating that the formation in the upper segment of the right lung is cancerous, we have no clue how it came to this decision. Likewise, if the AI system in the pathology laboratory gives a negative result for a Pap smear, we have no idea which features of the cells were the exact cause of this negative result.
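To make the layered process described above more concrete, the following minimal Python sketch shows how numeric image features might pass through hidden layers and how the "recognition" is nothing more than the class with the highest probability at the output. The layer sizes, class labels, and (untrained, random) weights are invented for illustration only; a real diagnostic network would be trained on labelled images and would be far larger.

```python
# Minimal, illustrative sketch (not a real diagnostic model): numbers pass
# through hidden layers, and "recognition" is simply the class that ends up
# with the highest output probability.
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(x, n_out):
    # Each neuron combines the incoming numbers with (random, untrained)
    # weights and applies a non-linearity, as a hidden layer does.
    w = rng.normal(size=(x.size, n_out))
    return np.maximum(0.0, x @ w)  # ReLU activation

def softmax(z):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

# Pretend these are quantitative features extracted from a mammography image.
features = rng.normal(size=64)

h = hidden_layer(features, 32)               # first hidden layer
h = hidden_layer(h, 16)                      # second hidden layer ("deeper")
scores = h @ rng.normal(size=(h.size, 3))    # output scores for 3 invented classes
probabilities = softmax(scores)

classes = ["normal tissue", "benign calcification", "suspicious formation"]
for name, p in zip(classes, probabilities):
    print(f"{name}: {p:.3f}")
print("reported finding:", classes[int(np.argmax(probabilities))])
```

With random weights the output is meaningless, which is precisely the point: the report the system produces is only as good as the data and training behind it, and nothing in the output explains how the layers arrived at it.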
The source of knowledge for the first versions of neural networks was labelled public images. As deep learning systems developed, they began to operate in specific domains such as medicine, and the learning material began to be provided by radiologic images such as X-rays, magnetic resonance imaging, and computerized tomography. The number of such images available as learning material for AI technology is considerably high; for example, about 39 million MRI scans and 80 million CT scans are performed each year in the USA. AI artifacts with deep neural networks learn how to diagnose by working on these images. Of course, this deep learning process depends on the quality of the images and the accuracy of the nested category fields. In radiology and pathology, we expect precise outputs with the highest possible accuracy, since they will be the primary determinant of diagnosis and will be particularly important for generating the treatment plan and determining the patient's prognosis. Any flaw in the process might cause severe medical harm to the patient.

These explanations reveal the paramount importance of data for the deep learning of AI systems. In brief, we can say that the perceptions of AI artifacts about a concept are defined and limited by the data they are provided with. For example, if the data on the concept of drug addiction are provided only by images containing people of a particular ethnicity, an AI system would come to think that only members of that ethnicity become addicted to drugs. This would create a distorted perception of reality in the AI system, which would result in faulty outputs. In medicine, the quality and plurality of the provided data are particularly important, since the outputs of the AI system have a direct effect on patients' health.

Data quality and plurality are strongly linked to the concept of data bias. The availability of digital data depends on the existence and accessibility of technology. Health facilities in developed countries operate with recent technology and use electronic health data systems, which can feed the input for the deep learning used to develop AI systems for medical services. However, this is not possible for health facilities in remote areas or under-developed countries. The latest technology available to them is far behind the recent technology of more developed parts of the world, and they lack the means to keep high-quality electronic health data or to use digital systems for pathology or radiology images. Therefore, there is no data from these settings to provide input for deep learning systems, which leaves us with the fact that AI systems currently being developed for medical services are learning from biased data: we can only feed them input from a particular part of the world, which constitutes only a small part of reality. Besides, there is data bias arising from other variables such as gender, age, ethnicity, and particular states of human beings such as pregnancy. This bias creates incorrect perceptions in the AI system, which can result in severe harm to patients. For example, an AI artifact that learned only from data on male adults might report a cardiac problem when used for a woman who has benign arrhythmias induced by pregnancy.
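The pregnancy example can be made concrete with a toy numerical sketch. The numbers below are synthetic, and the "model" is merely a normal-range rule derived from its training sample rather than a real arrhythmia classifier, but it shows the mechanism: a system that has only ever seen male adults treats the physiologically higher resting heart rates that can accompany a healthy pregnancy as abnormal.

```python
# Toy sketch of data bias: a normal-range rule "learned" only from male
# adults flags healthy pregnant patients as abnormal. All values synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Training data: resting heart rates (bpm) from male adults only.
male_adult_hr = rng.normal(loc=68, scale=6, size=1000)

# "Learning": the system derives its notion of a normal upper limit solely
# from this biased sample.
upper_limit = male_adult_hr.mean() + 2 * male_adult_hr.std()

# Deployment: healthy pregnant patients, whose resting heart rate is often
# somewhat higher without any pathology (again, synthetic values).
pregnant_hr = rng.normal(loc=84, scale=7, size=1000)

flagged = np.mean(pregnant_hr > upper_limit)
print(f"upper limit learned from male-only data: {upper_limit:.1f} bpm")
print(f"healthy pregnant patients flagged as abnormal: {flagged:.0%}")
```

The system is not malfunctioning; it is faithfully applying what its one-sided data taught it, which is exactly why plurality of data matters.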
Another problem with data bias is related to the principle of justice. Developing AI technology requires well-trained human resources, high-quality research, and advanced production facilities. The leading countries in developing AI technology have these means, and it is plausible to think that patients in these countries will be more advantaged in terms of access to AI technology in healthcare. This situation would deepen global health inequalities. Moreover, health inequalities within a single country would also deepen, since advanced health facilities would have the capacity, expertise, and financial means to incorporate AI technology into their healthcare provision, while others would have to make do with existing health services, which would fall far behind.

On the other hand, novel health services that rely on high-level technology are costly when first put into use. For example, the cost of whole human genome sequencing decreased from about 2.2 billion Euros in 2003 to an average of 2,000 Euros in 2019. We can assume that a similar trend will occur for AI technology, making it accessible to a larger group of patients. However, even as its cost falls, it will remain out of reach for millions of people who have no or minimal health financing. One may argue that health inequalities and limited access to health security funds were problems long before the advent of AI technology, that many etiological factors cause and sustain global health inequalities, and that, although AI technology in healthcare might augment these inequalities, they will not fade away if AI technology is kept out of healthcare.

One can argue against this resigned statement on two grounds. The first is that the impact on health inequalities should be one of the main concerns of the ethical assessment of novel technology products in healthcare. There have always been inequalities and probably always will be, but this does not mean that we should take this fact for granted and stop being concerned about the impact of new implementations or systems on deepening existing inequalities or creating new ones. On the contrary, while developing novel systems such as AI technology, one of our primary concerns should be fairness and equity in access to these products.

The second ground for counter-argument concerns the possibility of enhancing humans by using AI technology. Human enhancement is a unique aspect of AI technology in medicine. Several methods have been proposed for realizing human enhancement. One of them is uploading the human mind to an AI system to create a new being, which is neither human nor AI artifact but a combination of the two. By all means, this new creature is essentially a novel one, a total stranger to human beings. Another speculated method is inserting AI systems into human bodies so that physical or mental functions are enhanced. Such beings are generally called humanoids or superhumans. Scientists make different assumptions about the possibility of this. One prevailing perspective accepts that AI technology may extend the human lifespan or improve its quality to some extent, but considers the development of humanoids or superhumans an element of science fiction movies rather than reality. Other leading perspectives hold that AI technology development embraces the capability to produce humanoids and superhumans; for them, the question is not whether this can be achieved, but when. Science may plausibly discover several more methods for developing enhanced human beings in time, but no matter what method is used, the main moral issue remains
the same: creating a new being with capabilities superior to those of rational human beings. Enhanced human beings would constitute a different society from that of rational human beings, and the possibility of becoming a member of this society depends on a person's access to these services. In the previous case, if a patient does not have access to healthcare provided by AI technology, that patient would be disadvantaged compared to patients who do. Undoubtedly, this is an ethical problem in terms of equity and fairness, but the ethical issue in the case of creating enhanced human beings constitutes a broader form of inequality, since it defines a new, superior form of humankind. Hence, anyone, whether patient or healthy person, who has no means of accessing these services would become a less capable being than the enhanced ones. The problem here goes beyond equity and fairness; it is a problem of existence.
5.6 Lack of Legislative Regulations

Medicine is an area regulated by laws. Health law is a broad concept that involves several substantive areas of jurisdiction, such as contract law, insurance law, data privacy and confidentiality law, and malpractice law. In this respect, the rights and responsibilities of physicians, patients, and healthcare facility administrations are well established. Introducing AI technology artifacts into the healthcare system would plausibly require modifications to existing regulations or the development of new ones, since AI technology introduces a new player among healthcare providers, one with the ability to change some healthcare provision systems fundamentally. Especially in areas in which AI technology artifacts would have a voice in the decision-making process, if not become the primary decision-maker, the need for regulation is particularly pressing. Likewise, medical interventions executed by AI technology artifacts would be another area requiring immediate regulation, because the role, effectiveness, and authority of physicians in such settings would be so significantly diminished that physicians' responsibilities may become unclear. As the role of AI technology in medical services gains dominance, physicians' leading position in healthcare services becomes more questionable, and this needs to be reflected in the law so that physicians are not held responsible for acts and decisions in which they no longer take, or cannot take, an active role.

Revising existing health laws or enacting new ones to address issues arising from the extensive use of AI technology in healthcare would also help prevent infringements of patients' rights. Patients are entitled to know about the existence and extent of AI technology used in the healthcare services they receive. This information should be part of the informed consent process. Patients should be given the necessary information about the scope of AI involvement in scanning, diagnosis, treatment, surgical intervention, or rehabilitation services. This information should be comprehensive enough to cover the risks and benefits of AI involvement in one or more of these procedures and the alternatives that would exclude AI from the intervention. Disclosing relevant information to patients and
obtaining their consent is not only a requirement of respect for autonomy but also an exigency of the principles of transparency and fidelity, because when patients consent to a medical intervention, they place their trust in the medical team led by the physicians. Patients assume that, if anything goes wrong, they will find a physician interlocutor who can explain the reason for the unexpected consequences. For example, consider a case of misdiagnosed breast cancer in which the AI system was the sole reporter of the diagnosis. Based on this misdiagnosis, the surgical team performs a mastectomy. The pathological examination of the extracted tissue then reveals that it is not malignant and that no surgery was needed. Who would be held responsible for this kind of misconduct if the AI system is the only authority providing radiology results?

Some would argue that we should always keep humans in the loop for decisions in medical services and that this would be enough to avoid problems of this kind. Unfortunately, this argument is not valid, because the existence of an AI system in health service provision fundamentally changes the perceptions of all parties. Even today, we read research asserting that AI systems perform better than their human counterparts. We know that AI artifacts are smarter and more careful than human physicians, since they are not distracted by human needs and situations, such as under-performing because of a sleepless night or an argument in traffic on the way to the hospital. So whom should a physician trust in a case of ambiguity: herself or the AI system? Or, to revise the question, whom would surgeons choose to trust if a physician and an AI system conflict in their reports? What about patients? Would they side with their physician even if the smart AI artifact disagrees? Cases of this kind are likely to occur soon, as AI systems become more actively involved in healthcare services. Inevitably, they will bring along ethical and legal discussions that require attentive scrutiny of the cases and a corresponding review of health law.
Chapter 6
Conclusion
We have many uncertainties about AI. We do not know how fast it will develop. There are deep disputes about the possibility of generating human-level AI and about its timing. Some people argue for a high probability of living together with human-level AI entities within a decade, while others think the likelihood of having AI fellows around us within a lifetime is very low. Also, we are not sure whether AI is a blessing that will ease our lives and provide productive and efficient tools for humanity, or a curse that will deprive us of our jobs, freedom, and privacy. On the other hand, considering the historical evolution of AI, it would not be wrong to say that AI will develop faster in the future than it has until now. As it develops, it will deepen its penetration into various sectors. Moreover, its capacity will improve, providing new forms of autonomous agents such as human-level AI or humanoids. As bioethicists, it is our duty to see this coming and to reflect on its possible effects on humanity and the ethical implications of these effects.

In this book, we aimed to comprehend and reflect on AI from a bioethical perspective. Our first task was to draw a realistic picture of what AI is. To accomplish that, we went back to the ancient Greek philosophers, Aristoteles and Plato, and searched for the meaning of technology, physis, and poesis. Then we took a close look at the development and evolution of technology and the philosophy of technology through the ages to the present day. Our second task was to compare AI technology with conventional technology. We found fundamental differences between these two concepts of technology. These differences suggested that the existing philosophy of technology would fail to accommodate AI technology and that a new way of thinking is needed to conceptualize the ethics of AI. In this respect, we went through recent efforts to develop ethical frames for AI and discussed their main pitfalls. At this point, our third and most challenging task started: to develop a new bioethical frame for AI technology.
After providing the new bioethical framework, we focused on the personhood of AI entities. The personhood of AI becomes more important as the decision capacity of AI improves, because autonomous decisions may have ethical dimensions. If AI entities are to make ethical judgments and act according to them, they become ethical agents. This new situation obliges us, as ethicists, to discuss the ethical implications of AI's ethical agency and personhood.

Since we are medical doctors, it would be absurd to overlook the ethical implications of AI technology for healthcare. In this respect, we reflected on what would change with the penetration of AI into health services. We thought about the effects of AI on the four main principles of medical ethics and concluded that the penetration of AI into healthcare will inevitably change the central paradigm of healthcare services; due to this change, we need to redefine the main principles. Moreover, we took a close look at the core ethical values of medical ethics and pursued a comprehensive discussion of how AI in healthcare will comply (or not) with these values. We hope that these discussions will draw attention and create awareness of the ethical issues of AI in healthcare so that rational solutions can be developed.

Our suggestion to readers is to consider this new bioethical frame as the beginning of a long and comprehensive discussion rather than a definitive frame that answers all ethical questions about the issue. The new bioethical frame needs to be specified for particular sectors and continuously updated to keep up with the vast improvements in AI technology. There is no doubt that it will be subject to amendments that enrich its scope and practicability. Developments in AI technology will guide this process. Hence, we encourage readers to consider the new bioethical frame as an initial version of an ethical assessment tool.