Social Robots: A Fictional Dualism Model (Philosophy, Technology and Society) 9781538185025, 9781538185049, 1538185024

Social robots are an increasingly integral part of society, already appearing as customer service assistants, care-home

Table of contents:
Cover
Half Title
Series Page
Title Page
Copyright Page
Dedication
Acknowledgements
Introduction
Chapter 1: What Can Philosophy Teach Us about Robots?
Methodology
Fiction as a Thought Experiment
Science Fiction as a Thought Experiment
Robots and Fiction
Social Robots and Evidence
What Can the Reader Expect to Gain?
Chapter 2: Humans and Robots
How Are We Sharing Society with Robots?
How Will We Share Society with Robots in the Near Future?
What Is a Social Robot?
Social by Design/Social by Accident
Embodied or Non-Embodied
The Empathetic Response

Chapter 3: Social Robots and Moral Consideration
Animal Models
Critical Engagement with the Animal Model
Relational Models
Critical Engagement with Relational Models
Behaviourist Models
Critical Engagement with Behaviourist Models
Robots-as-Tools Models
Critical Engagement with Robots-as-Tools Models
Information Ethics Models
Critical Engagement with Information Ethics Models
Chapter Summary
D1: The Theory Should Be about Robots and Not Humans.
D2. The Theory Should Reflect Epistemic Limitations.
D3. The Theory Should Strive to Be Both Enlightening and Action-Guiding.
D4. The Theory Should Be Compatible with the Flourishing of Beneficial Technological Advances in AI.
Chapter 4: The Fictional Dualism Model of Social Robots
The Pros and Cons of Science Fiction Framing
The Dominant Models and Robot Rights
The Fictional Dualism Model
Why Dualism?
A Fiction is Not a Pretence
The Metaphysics of Fiction
Real-World Relevance of Responses to Fiction
What Does Go On ‘On the Inside’?
The Robot Fiction and Traditional Fictions
How Does Fictional Dualism Compare with Other Theories?
D1: The Theory Should Be about Robots and Not Humans.
D2. The Theory Should Reflect Epistemic Limitations.
D3. The Theory Should Strive to Be Both Enlightening and Action-Guiding.
D4. The Theory Should Be Compatible with the Flourishing of Beneficial Technological Advances in AI.
Chapter Summary
Chapter 5: Robots and Identity
Philosophical Puzzles of Identity
Continuing Bonds Technology
Uploads
Robot Upgrades
Can One Robot Have Two Identities?
Robot Identity
Dissociative Identity
Robot Identity and Hive Minds
Chapter 6: Trusting Social Robots
Trust and Reliability
Trust in Social Robots
Trust on the Basis of Appearances
Trust and Misleading Appearances
Fictional Dualism and Trusting Social Robots
Trust and Shared Experiences
Deception
Trust and Perceptible Changes
Can We Trust Social Robots?
Chapter Summary
Chapter 7: Indirect Harms and Robot Rights
The Empathy Argument
The Social Disorder Argument
Violent Behaviour and Video Games
Violent Behaviour in Society
A Training Ground for Harm
Repulsion, Distaste, Morality, and Rights
Rights Arising from Robot Identity
The Cost of Robot Rights
Conclusion
Notes
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Bibliography
Index
About the Author

Social Robots

Philosophy, Technology, and Society
Series editor: Sven Ove Hansson

Technological changes have deep and often unexpected impacts on our societies. Sometimes new technologies liberate us and improve our quality of life, while other times they bring severe social and environmental problems, and occasionally they do both. This book series reflects philosophically on what new and emerging technologies do to our lives and how we can use them more wisely. It provides new insights on how technology continuously changes the basic conditions of human existence: relationships among ourselves, our relationships with nature, the knowledge we can obtain, our thought patterns, our ethical difficulties, and our views of the world.

Titles in the Series

The Ethics of Technology: Methods and Approaches, edited by Sven Ove Hansson
Nanotechnology: Regulation and Public Discourse, edited by Iris Eisenberger, Angela Kallhoff, and Claudia Schwarz-Plaschg
Water Ethics: An Introduction, Neelke Doorn
Humans and Robots: Ethics, Agency, and Anthropomorphism, Sven Nyholm
Interpreting Technology: Ricœur on Questions Concerning Ethics and Philosophy of Technology, edited by Mark Coeckelbergh, Alberto Romele, and Wessel Reijers
The Morality of Urban Mobility: Technology and Philosophy of the City, Shane Epting
Problem Solving Technologies: A User-Friendly Philosophy of Technology, Sadjad Soltanzadeh
Test-Driving the Future: Autonomous Vehicles and the Ethics of Technological Change, edited by Diane Michelfelder
The Ethics of Behaviour Change Technologies, edited by Joel Anderson, Lily Frank, and Andreas Spahn
Social Robots: A Fictional Dualism Model, Paula Sweeney

Social Robots
A Fictional Dualism Model

Paula Sweeney

ROWMAN & LITTLEFIELD

Lanham • Boulder • New York • London

Published by Rowman & Littlefield
An imprint of The Rowman & Littlefield Publishing Group, Inc.
4501 Forbes Boulevard, Suite 200, Lanham, Maryland 20706
www.rowman.com
86-90 Paul Street, London EC2A 4NE

Copyright © 2024 by The Rowman & Littlefield Publishing Group, Inc.

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without written permission from the publisher, except by a reviewer who may quote passages in a review.

British Library Cataloguing in Publication Information Available
Library of Congress Cataloging-in-Publication Data on File

ISBN 978-1-5381-8502-5 (cloth: alk. paper)
ISBN 978-1-5381-8504-9 (electronic)

∞ The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI/NISO Z39.48-1992.

To Cath and Terry Sweeney

Contents

Acknowledgements
Introduction
1 What Can Philosophy Teach Us about Robots?
2 Humans and Robots
3 Social Robots and Moral Consideration
4 The Fictional Dualism Model of Social Robots
5 Robots and Identity
6 Trusting Social Robots
7 Indirect Harms and Robot Rights
Conclusion
Notes
Bibliography
Index
About the Author

Acknowledgements

Thank you to those who helped me in the completion of this book. In particular, I would like to thank the series editor, Sven Ove Hansson, and my editor at Rowman and Littlefield, Natalie Mandzuik, for her interest in the book, for taking the project forward, and for helping me so graciously along the way. I also thank Yu Ozaki, assistant editor, for guiding the manuscript through production. Many thanks to Federico Luzzi for providing comments on the complete draft of the manuscript: the book was greatly improved as a result of his suggestions. I am also grateful to colleagues at the University of Aberdeen for substantial feedback on an early draft of the first fictional dualism paper. I would like to thank attendees at the 11th British Wittgenstein Society Conference: Wittgenstein and AI, for comments on my paper presented there, some of which made their way into the book. I am grateful to John Danaher for taking an interest in my work and for inviting me onto his podcast, Philosophical Disquisitions. We discussed the ideas behind fictional dualism and many of John’s questions appear in the book with answers that were prompted by my thinking more about our discussion. Finally, thank you to Caitlin, Jodie, and Josh, for being considerate and competent enough to look after themselves while I took myself off on a book-writing adventure.

Introduction

Is there an ethically acceptable way of feeling empathy and even love for social robots without granting them moral consideration? I believe that there is. In this book, I draw on the strength of our response to fiction to explain our emotional and social response to robots. In doing so, I am not attempting to sideline or play down the importance of our response to robots; rather, I point to the importance of fiction and use this to elevate and motivate the place of social robots in our lives. The lack of moral consideration comes, not from any judgement that fictions are not deserving of moral consideration, but from the observation that to grant moral consideration to a fiction would be to make a category mistake. If nothing else, this book is certainly timely. As I write, public interest is caught by the question of the appearance of consciousness and sentience arising from Artificial Intelligence (AI) systems, sparked most recently by the media interest in developments of relatively impressive language models. For example, in the Los Angeles Times, ‘Is it time to start considering personhood rights for AI chatbots?’ the authors note, ‘. . . if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.’ The Guardian’s Long Read takes a more dismissive tone: ‘[ChatGPT] is very good at producing what sounds like sense, and best of all at producing cliché and banality, which has composed the majority of its diet but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness.’ Even Noam Chomsky felt moved to write an essay for the New York Times, in which he argued, ‘[A young child’s] operating system is completely different from that of a
machine learning programme. Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution.’ Media coverage and the public reaction to it has set social media feeds ablaze. Those who have been carefully studying the anthropomorphising effects of robots on humans for years are surely pleased that the rest of the world has caught up. Those coming at the question of robots’ rights from the perspective of being rights experts, having spent decades clearly stating where robots might sit within a system of rights are, understandably, frustrated that intuitions about who or what can or should be granted rights are being thrown around without much background knowledge of the fundamentals. Others are angry that the question of robot rights is being given so much attention when some humans are still denied basic rights in many parts of society. In this book, I have two main aims. The first is to use philosophical methods to examine the existing theories of social robots and associated claims of how we might categorise them. Of particular interest are the reasons given for calls for moral consideration for robots. My second purpose is to propose and motivate a new way of thinking of robots, the fictional dualism model. I argue that this view is consistent with and motivated by the evidence that we have and also that it allows us to benefit from an increased engagement with social robots, without that engagement being a reason to grant them moral consideration. Ultimately, I propose a framework that allows us to separate our strong tendency to anthropomorphise robots from calls for moral consideration or personhood. In chapter 1, I outline my approach and the scope of my project in more detail. The chapter begins with a brief, high-level description of the methods of analytic philosophy. I then discuss the important role that fiction and science fiction play when it comes to our intuitions about social robots, how they can be seen as expanded thought experiments, and the potential danger of relying uncritically on intuitions formed via fictions. Finally, I set out what a reader can expect to gain from this book. Chapter 2 offers motivation for the project, outlining the kinds of places that social robots are currently in use and speculating about where we are most likely to see future developments. In this chapter, I also define the scope of the project, particularly focussing on the question of what kinds of entities I am including in my study of social robots. Chapter 3 outlines what I consider to be the major recent approaches to social robots. Most, if not all, of these theories of robots are motivated by the question of moral consideration; are social robots the kinds of things that should or could be candidates for moral consideration? Although each theory offers significant insights into the question of how we should think of social robots, I ultimately argue that each approach is also lacking in some important way.

Chapter 4 is dedicated to the statement and development of my original theory of social robots, the Fictional Dualism model. This theory was first proposed in my 2021 paper ‘A fictional dualism model of social robots’ published in Ethics and Information Technology. The chapter draws on that research, but it also goes significantly beyond the paper, spelling out and motivating the view in more detail. I further motivate the dualist nature of the theory and detail the kind of dualism that is in play in the fictional dualism model. I distinguish fiction from pretence and consider the metaphysical nature of fiction and the social importance of our engagement with fiction. I end the chapter by directly comparing fictional dualism with the other models of robots that we considered in chapter 3. In chapter 5, I focus on robots and personal identity. Building on philosophical puzzles of identity, I argue that technological developments will put pressure on our notion of personal identity in two ways. First, I consider technology that offers to ‘continue’ or preserve humans in a digital format, asking how we are to think of the relationship between the technological entity and the human. Second, I consider robot identity – given how attached we may be to individual future social robots, how are we likely to respond to robot upgrades and offers of uploading? I consider the implications of the surprising upshot of the fictional dualism model that one robot can have more than one identity. Finally, I outline the consequences of the possibility of robots with hive minds on how we individuate social robots and what this might mean for our notion of robot identity. This chapter might be seen to be a kind of outlier because, unlike the others, it focusses on metaphysical puzzles that are not normally taken to have much practical import. However, although metaphysical puzzles of identity used to be an abstract concern, suddenly they do have some practical import; it is, for example, of immediate practical importance to think about the relation between a deceased person and a chatbot that purports to continue them, given that such products are already available. The focus on metaphysics in this chapter helps to solidify and further clarify the distinct metaphysical nature of the fictional dualism model that was given in chapter 4, in a way that proves useful for chapters 6 and 7. In chapter 6, I turn my attention to the question of whether robots are the kinds of things that can be trusted. Objects are generally excluded from the trust relationship and yet it seems that we do form an attitude of trust towards social robots. I argue that the fictional dualism model shows how trust in robots is possible whilst also allowing us to define the conditions for robot trust. This chapter draws on my 2022 paper, ‘Trusting social robots’ published in Ethics and AI. The chapter goes beyond that paper in important ways. I consider further potential blocks to the proposed view that we can form an attitude of trust towards social robots, such as the claim that trust

requires that we have shared experiences that we cannot have with robots, and the proposal that because social robots are ultimately deceiving us, we cannot form an attitude of trust towards them. Finally, in chapter 7, I return to the subject of rights, this time critically engaging with arguments based on potential indirect harms arising from the mistreatment of robots. I show that thinking of robots in line with the fictional dualism model reduces our inclination to claim that significant indirect harms will arise if robots are not granted moral consideration. This chapter draws on my 2022 paper, ‘Why indirect harms do not support social robot rights’ published in Minds and Machines. Here, I expand on the previous work by considering whether the fact that a social robot is irreplaceable according to the fictional dualism model gives us an indirect reason for moral consideration. I also go into greater detail about the cost of granting rights to robots.

Chapter 1

What Can Philosophy Teach Us about Robots?

Until recently, interest in robots was restricted to the general public’s encounters with science fiction, a relatively small group of forward-thinking academics, and tech fans who invested in early robot pets and assistants. But in recent years robots have made their appearance on the global stage in such a way that only the extremely unimaginative will not have considered what it will be like when robots are an even bigger part of our everyday lives, providing assistance in our homes, acting as study buddies to our children, helping our elderly parents in the care home, and collaborating with us in many workplaces. The prospect of robots playing a large role in society raises a number of questions that I won’t be considering in this book: Is it good or bad for robots to take on roles that humans have historically performed? Will robots in the workplace mean that there won’t be enough jobs for humans? Will people stop trying to make human friends and cultivate robot friendships? If they did, would that be a bad thing? These are good questions that focus on the impact of the inclusion of robots on humans and on society. In this book I am focussing more on the robots than on the humans because I am interested in knowing what this unusual entity, a social robot, is and what its further inclusion in society could mean for it (the robot). If an alien species suddenly arrived on the Earth, we would be interested in finding out what kind of thing it was. With robots, the ‘arrival’ has been so gradual that we have not had that discovery moment. But we do need to undertake that exploration because, as I hope to show, what we think that robots are will have a significant impact on how we interact with them, on future directions of technological development and on the place they are afforded in our lives. What we think robots are is also, of course, relevant to whether we see them as independent agents or as things that are essentially owned and shaped by humans for humans.

The questions that I am addressing in this book are, I hope, of interest to both non-academics and academics from a variety of disciplines. Although I do engage with and learn from the work of academics from different disciplines, ultimately, I am a philosopher and bring the methods of analytic philosophy to bear on the questions that I am interested in. Saying a little about what those methods are up front, and what kind of results we can expect from philosophical analysis, might be useful in setting the readers’ expectations. METHODOLOGY Overall, philosophers are interested in analysing arguments. Sometimes they identify hidden assumptions in the arguments of others that have not been explicitly supported and that, when exposed, weaken the argument. Other times philosophers take our everyday, common-sense view of how reality is and show that, actually, reality cannot be that way at all. Philosophers can show, and have shown, that we have developed an understanding of the world that is deeply misguided. It might be assumed that the physical sciences and philosophy are very different, that the methods and truths of the physical sciences are ‘hard’ and fixed to reality while those of philosophy are ‘soft’ and flimsy. That would be a misunderstanding of both disciplines. Scientific experiments are often surprisingly far removed from the phenomena that they are designed to shed light on. For example, an experiment might require creating and using things that are not found in nature; a pure form of a chemical, or genetically identical mice. This is because the physicality of the real world, as it is accessible to us, is chaotic and complex. To trust the result of an experiment, the scientist must be sure that there is no interference from irrelevant factors; they need to abstract from reality and create a controlled environment to focus on just the aspects of the world that they are interested in. By stripping the world back in this way, all the parts that are involved in the experiment are exposed and nothing extra and potentially corrupting is allowed to get in the way. It is also worth noting that physical experiments do not provide us directly with fixed facts about the world, rather the results of physical experiments require interpretation. An experiment takes place against numerous background assumptions and rather than being a proof or a disproof of the tested hypothesis, the results could instead be interpreted as showing that one of those assumptions is false. This is an important point to keep in mind, that even experiments in the physical sciences do not by themselves provide proofs of facts in the world. Rather, they provide evidence that requires

interpretation. The same evidence can be used to reach entirely different, even contradictory, conclusions. Philosophers often use philosophical thought experiments in their work. Philosophical thought experiments are depicted imaginations of some non-actual, sometimes highly improbable, or even physically impossible, scenario. They are used to test our intuitions about some features of the world. For example, I might invite you to consider a scenario in which you discover that your closest friend is, unbeknownst to you, a robot, and ask you whether this would change how you feel about them. At first consideration, one might be sceptical that any knowledge of the world could be gained from an experiment that is purely imagined and that this aspect of the philosopher’s methodology only adds to the impression that the truths of philosophy cannot match the truths of the physical sciences in rigour. However, this would also be a mistake. Scientific truths themselves are often discovered and even proved with thought experiments. The scientist imagines a scenario and asks the scientific community to consider what would follow if the conditions depicted in the scenario were met. As David Papineau puts it: Many important advances in science have been prompted by pure reflection on possible cases. Famous examples include Archimedes on buoyancy, Galileo on falling bodies and the relativity of motion, Newton’s bucket experiment, Maxwell’s demon, and Einstein on quantum non-locality (Papineau 2009, 13)

In these cases, the reflection on the possibility comes before the experiment. Not only that, but it will also sometimes be the case that the experiment, such as Galileo dropping things from towers, is likely to be less useful than the thought experiment itself because there is a chance that any experiment would be muddied by irrelevant real-world interference factors such as the force of a wind. In any case, the thought experiment alone already revealed the important evidence that pointed to the scientific insight (Elgin 2014). Philosophical thought experiments are used to uncover evidence that can either support or disprove a theory or shed doubt on a touted intuition. They are particularly useful in providing insights into non-physical properties in the world, such as knowledge, and normative properties that are relevant to the nature of value. However, just like the physical experiments of the scientists, philosophical thought experiments are things that require interpretation. They take place against several implicit and explicit background assumptions. With those assumptions in place, philosophical thought experiments can give us evidence to support our investigations of important non-physical properties of the world, including truths about ourselves.

FICTION AS A THOUGHT EXPERIMENT In January 2021, I became a vegetarian. I was prompted to do so by an event that I didn’t expect to have such an impact: I watched the movie Babe. I was prompted to change the eating habits of a lifetime by a talking pig. It is not as if I had not seriously considered the wrongs of eating animals before; as a philosopher I was familiar with the arguments against eating animals and had discussed animal ethics with students and colleagues. Intellectually, I had already accepted that eating animals was morally wrong and had long thought that future humans would universally view our current practice of rearing animals as food as being morally repugnant. Despite this, I hadn’t given much thought to changing my behaviour to bring it in line with my beliefs. So how could this charming pig have brought about a personal revolution? More generally, how can fiction bring about a change in the real world? Contrary to what we might expect, there is evidence to suggest that fiction is more effective in bringing about change than direct advocate messages that involve ethical arguments (Malecki et al. 2018). Fiction can even be more effective at bringing about change than viewing documentary footage. One theory of why this is the case is that fiction can be successful where direct appeal fails as it presents us with a detailed new perspective, seeking to persuade us implicitly rather than explicitly. It can also be easier to watch than documentary footage, which some can find distressing (Małecki et al. 2018). Catherine Elgin (2014) argues that works of fiction can usefully be construed as extended and elaborate thought experiments. In creating a fiction, a writer takes a pattern that they have observed in the world, abstracts it, and re-embodies it in fiction. The author of the fiction presents us with a scenario that gives us epistemic access to a pattern that we may find ourselves or other humans instantiating. Similar to devising a thought experiment, in writing the fiction the author has selected and isolated features of the world to ‘pump’ certain emotions or beliefs. As Elgin puts it, the author contrives and manipulates situations so that certain patterns and properties stand out. For example, through the depiction of a character, or even an unreliable narrator, the author can ‘show’ us the limitations of our understanding of the world and can arguably do so much more effectively than they could tell us. To carry out this successfully, to reveal an aspect of human nature or a limitation of our view of the world, sometimes the ‘thought experiment’ needs to be far less austere, hence the more elaborate fiction. The pattern that interests the author, the observation that they have had that they want to ‘show’ the reader, requires a slow drawing-out. It seems indisputable that fiction plays a vital and indispensable role in revealing truths about ourselves and about our place in the world. In the case of robots in particular, fiction plays a special role in defining our relationship

with them and I will often draw on works of fiction and science fiction throughout this book to clarify an example or make a possible situation vivid for the reader. However, when learning lessons through fiction it is important to be alert to the ways in which fictions differ from traditional scientific and philosophical thought experiments. I noted earlier that in both physical and thought experiments, austerity is a virtue. The experiment is designed to extract certain features of the world, screening off irrelevant details. But in fictions, where other literary features such as story arc, emotional engagement, and character development are also valued, we cannot be sure that the ‘thought experiment’ is not being sullied by irrelevant features. Which features of the story are incidental and contingent? In the case of Babe, the fact that the pig is charming, can talk, and herd sheep is irrelevant to the question of whether it is immoral for us to eat meat in the real world, where charming, chatty, sheep-herding pigs do not exist. But it can sometimes be difficult to screen off these incidental features of fictions when their emotional impact can be so powerful. Was I the victim of an over-sentimental movie plot? Probably. Interestingly, I was not alone. I later discovered that I had joined a well-documented group of vegetarians who had been influenced by what is (unfortunately) called ‘The Babe Effect’ (Nobis 2009).1 SCIENCE FICTION AS A THOUGHT EXPERIMENT In science fiction, metaphysical thought experiments are expanded and enacted. This can be useful because sometimes the metaphysical thought experiment itself can be so austere that we don’t really know what response we are supposed to have. As Elgin puts it, What should we make of Putnam’s brains in a vat? The Matrix supplies an answer. What would a computer that passed the Turing test be like? His name is Hal. Could beings without any inner lives actually be indistinguishable from us? The way to settle such matters (even tentatively and revisably) is to design a scenario in which the consequences of such hypotheses play out. Write a story about the love lives of zombies, or about the lives of zombies incapable of love. We may find that our off the cuff intuitions do not stand up under elaboration, or that the consequences of our assumptions are quite different from what the austere philosophical thought experiment led us to suppose (Elgin 2014, 23).

In some cases, particularly when the thought experiment depicts something that we find difficult to imagine for ourselves, an expanded dramatisation containing the experiment can be helpful. However, although science fiction can be useful in elaborating on an austere thought experiment, there is a danger in the straight analogy between fiction and thought experiments. Contra

Elgin’s claim, the fiction doesn’t ‘settle the matter’ at all. In fact, the fiction will lead us in whatever direction the screenwriter chooses, and this is not necessarily the only available direction. This is because the framing effect in fiction is incredibly strong. For example, in addition to giving us an insight into Putnam’s brain in a vat scenario, The Matrix can also be considered as an exploration of Nozick’s ‘experience machine’ thought experiment (Nozick 1974). Nozick asks us to imagine that psychologists have invented a machine that could give us whatever desirable experiences we could want, and that we would not be able to tell these experiences apart from those that we would have outside the machine, in the ‘real world’. We are then to consider whether it would make sense to prefer the machine, with its perfect experiences, to real life, with its messy imperfections and frustrations. Now we can note that, as a depiction of the experience machine thought experiment, The Matrix is pretty poor. By portraying Neo as the obvious hero and Cypher as the obvious villain, the screenwriters are effectively suppressing what is philosophically interesting about Nozick’s thought experiment. The writers are trying to convince us that anyone who could choose to live in a pleasurable world they know to be fake over an unpleasant world they know to be real is a coward and should be thought of as intellectually inferior. This makes it much harder for members of the audience, who are being led through the thought experiment, to seriously question whether they might actually prefer to take the blue pill and continue with their life in The Matrix. We can easily imagine an entirely different presentation of the same kind of scenario, this time where Neo is being held in the desolate real world by the evil Cypher and he is desperately fighting to return to a wife and family in The Matrix, a scenario that is designed to bring into doubt any extra moral value that ‘real-world’ experiences have over elaborate fictional ones. We have to work quite hard to find this possibility in The Matrix as it is presented.

ROBOTS AND FICTION

In the case of robots, our attitudes have certainly been shaped by our engagement with fictionalised expansions of philosophical thought experiments. Robots were formed in the minds of authors and filmmakers long before we had the technology to bring them into existence. Because of this, the fictionalisation of robots is an essential part both of their development and our history with them. Besides the fact that science fiction has been partly responsible for our ideas of what a robot is, it has also allowed us to explore potential moral dilemmas involving robots in some depth, long before we have faced such dilemmas in real life. When considering the possibility of preferring an

intimate relationship with a chatbot over a human being, we can struggle to imagine in the abstract how such a relationship could be desirable for any rational agent. Even when pushed to imagine a very advanced model of a chatbot that can mimic human behaviour perfectly, the knowledge that the chatbot is ‘not a real person’ can block us from envisaging how we might build a morally significant relationship with the robot, let alone love it. The synopsis for the Spike Jonze movie, Her, states, ‘In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need’ (Jonze 2014). Even when we read that description, we might find it hard to imagine such a relationship in a way that does not depict the main character as somehow delusional or as settling for something safe where he is less likely to be hurt or challenged. However, as the movie develops and we see the relationship between the two characters Theodore and Samantha grow through their experiences we, as viewers, begin to believe that it is entirely plausible that future human-to-AI relationships could be just as real as human-to-human relationships. That is not to say that we might not find moral reasons to want to block such relationships, and some of these reasons are played out in the movie, but rather that experiencing such a relationship as a real possibility becomes entirely plausible to the viewer through the science fiction in a way that it perhaps wasn’t from considering the thought experiment alone. The viewer was shown how things could be. Countless other characters from science fiction have played an important role in both shaping how we think of social robots and giving us a glimpse of possible future philosophical dilemmas and I will occasionally draw on some of them in this book. In Kazuo Ishiguro’s Klara and the Sun, Ishiguro describes a future in which children have ‘artificial friends’ as live-in companions (Ishiguro 2021). The story is told from the perspective of Klara, an artificial friend for the fourteen-year-old Josie. This is a particularly effective literary technique as the reader cannot escape the fact that Klara has a perspective, with thoughts, desires, fears, and social needs. At the same time as we are given direct evidence of Klara’s ‘humanity’ through her perspective, we are also able to witness how that humanity is denied by other characters in the book. For example, although Josie treats Klara as we would expect one to treat a human friend, her mother is far more detached: ‘One never knows how to greet a guest like you. After all, are you a guest at all? Or do I treat you like a vacuum cleaner?’ (Ishiguro 2021, 145). By depicting a complex fictional scenario, Ishiguro prompts us to consider a range of moral questions only one of which is, can we deny agency to an entity that directly presents as having it? Both of these works of fiction, Her and Klara and the Sun, could be described as elaborate thought experiments designed to test our intuitions around how to treat robots that perform in ways similar to humans. But as

noted above, caution is needed. If thought experiments can be overly austere and leave us wondering what we are supposed to intuit, the fictionalisation of thought experiments can give us too much detail, closing off important possibilities. For example, by choosing to make Klara the narrator, Ishiguro is closing off the possibility that an AI could behave exactly like a human yet not be sentient. The entire book is indisputable evidence of Klara’s sentience. But Klara is a fictional character and Ishiguro has set the conditions for her sentience within the fictional world that he has created. So, although fiction is a worthwhile tool for exploring thought experiments in detail, when we are extracting from fictions for philosophical purposes, we need to be aware of the framing effect of the fiction and consider precisely what truths about the world the thought experiment is giving us evidence for. SOCIAL ROBOTS AND EVIDENCE In chapter 3, I will outline what I consider to be the most prominent academic approaches to social robots. Some are concerned with what social robots are, the properties that robots have or will eventually have, and others are concerned with the standing that social robots should be afforded regardless of what properties they have. Each theory focusses on a particular feature of robots or of their place in society and prioritises this as evidence. For some, it is the emotional response to robots that is prioritised evidentially. For others, the focus is on how closely robots can perform like humans or other intelligent animals. Others still are interested in highlighting not emotional response or performance but the role that robots play in society and the social relations that they can form. And others strive to pull our focus back to the fact that a robot is simply a kind of machine, designed by humans to serve the purpose of humans, and that this fact itself is a significant piece of evidence for what robots should be considered to be and the social standing that they ought to be afforded. What is the best source of evidence when we are trying to determine what a social robot is and the moral standing that it deserves? There is no easy answer to this question. Consider this analogous question: What would be the best source of evidence when we are trying to determine what a human is? We might take anatomy to be significant and say that a human is a being with a particular anatomical make-up. Or we might take a historical approach to ‘human’: a human is something that has a particular kind of evolutionary history. Perhaps some would prioritise the social role that humans play, the languages we use to communicate, and the social groups that we form. It is important to note that it is not that one of these approaches is correct and the others are incorrect; it is more likely that each is saying something important

about what it is to be human. However, this is not to say that we cannot prioritise one kind of evidence over another, arguing that one criterion should be given more weight or more consideration than the others. This is how I propose we see the academic debate around social robots; a jostling of various positions with their advocates prioritising one kind of evidence over others. In this book, I am adding a new position to that mix. For the position to be persuasive, I will need to both argue in favour of the evidence that I prioritise and, where my position comes into conflict with that of others, argue that either what they take to be evidence is misleading or that they give it too much weight in light of other considerations. WHAT CAN THE READER EXPECT TO GAIN? It can often seem that the main goal of philosophers is not to reach any truth at all but to work vigorously to disprove their colleagues’ findings; at least it is the case that critical engagement within this discipline is very highly valued. In fact, many important works of philosophy contain only a negative contribution to the debate and no positive proposal at all. In the negative mode, a philosopher might show that a largely held view, an accepted philosophical theory, or a ‘common-sense’ folk theory, either cannot be true or that it is just one possibility among an equally good range of ways that the world might be. In the positive mode, a philosopher will put forward a proposal of their own. For those who are new to philosophy, the negative aspect can seem a bit disappointing and frustrating, like being led along an arduous path to where you thought your destination was but ending up further away than when you started out. And yet some of the most influential philosophy papers do not put forward a positive account at all. For example, in his 1963 paper, ‘Is Justified True Belief Knowledge?’ Edmund Gettier invites us to consider two thought experiments that provide convincing evidence for the fact that, despite widespread and long-held assumptions to the contrary, knowledge cannot be (simply) justified true belief. In this paper, Gettier does not tell us what knowledge is, but his paper did start a small cottage industry in epistemology which continues to produce research papers today, sixty years later. The three-page long paper has been cited over 5,500 times. It does not matter that Gettier didn’t tell us what knowledge is; what matters is that a widely held theory was exposed as being false and a new challenge was set. Given its methods and the kinds of questions that philosophers are interested in, what might you expect from a work of philosophy? Although philosophical enquiry can lead to universally accepted truths, more often than not the theories that philosophers propose are up for debate. Some

philosophers may be so convinced by their solution to a problem that they themselves believe that it is the answer, but generally it is acknowledged that philosophers are putting forward well-founded, supported, intuitive solutions to problems for consideration as part of an ongoing conversation. As a result, you are unlikely to read a philosophy paper or book and come out feeling that there is nothing further to say on the matter, not because philosophers are not smart enough but because philosophical questions do not generally admit of definitive, obviously true answers. This might seem frustrating to those from other disciplines who are searching for a definitive answer to a question. The answer to Galileo’s question, ‘do bodies fall with a constant speed’, is independent of the place of humans in the world. But the answers to philosophers’ questions, such as what are the conditions of knowledge and what are the conditions of personal identity, are not to be found in the world independent of humans. And there is no experiment we can conduct that will conclusively end such conversations. Likewise with the questions that I address in this book. Why bother with philosophy when it does not give us definitive answers to its questions? Because some of the most interesting questions that we face are philosophical ones. Exploring high-level questions about existence, knowledge, and virtue, and applied questions about how we should behave towards others, and thinking critically about the place of humans in society is surely vitally important. That there is no one answer to such questions, or if there is one answer, we have no way of being certain when we strike on it, in no way detracts from the value that comes from engaging with these questions, and from thinking critically about our experiences and attitudes towards them. The result is that the work of the philosopher is messy and, if you are looking for definitive answers, ultimately unsatisfactory. However, it is also incredibly important. Engaging with existing theories critically, and putting forward new theories for analysis, is how we make progress in the discipline.

Chapter 2

Humans and Robots

Academic research and science fiction have shown us that there are plenty of ways of imagining how we might share society with social robots in the future, and even today there are robots in use that motivate ethical and philosophical questions. When compared to the robots that we come across in science fiction, or those that we can imagine in a technologically advanced future, today’s robots may be somewhat lacking. Despite this, it is important that we study them and pay particular attention to our response to them. It says something important about humans that they respond so strongly in their engagement to entities that are in reality fairly basic in their abilities and are to be considered only early versions of social robots. This human reaction to even very simple robots will form an important part of our study. In this chapter, I motivate the project, giving examples of where robots are currently being utilised and some reflections on where we might see progress with robots in the near future. I also consider questions foundational to the project such as what kinds of entity ‘social robot’ refers to, whether it makes a difference if the robot was designed to be social, and whether we need to distinguish embodied from disembodied robots.

HOW ARE WE SHARING SOCIETY WITH ROBOTS?

Robots in social and elder care are a key area of current focus and likely future development. It is not hard to understand the motivation here; our population is ageing, and elder care will continue to be both a significant cost for the state and an increasing concern for relatives of the elderly. Examples of robots currently used in care-home settings include Hector, advertised as a mobile assistive robot and smart home interface for the elderly. Hector responds when called, offers helpful reminders to his elderly companions and assists them in making video calls so that they can interact with friends and relatives. Using fall detection capabilities, he can know when a fall has occurred, can assess how serious the fall is, and determine whether help is needed (Bardaro et al. 2022). Paro, who will feature prominently in the examples and thought experiments in this book, is another example of a robot designed for the elderly, but this time as a robotic companion or therapy robot. Numerous studies have shown that Paro, with its fluffy baby seal design, brings the benefits of animal therapy to those who are not capable of having or cannot commit to looking after a pet. Paro has been shown to have a positive psychological impact on patients, improving their motivation and helping them to relax and socialise (Sorell and Draper 2017). In addition to these benefits to patients, having Paro in the care-home setting has also been shown to have a positive impact on the carers, easing their relationship with residents. Researchers have claimed, persuasively, that introducing companion robots such as Paro into care homes can help to maintain the residents’ sense of autonomy, which is recognised as being a key factor in retaining dignity in old age. Paro supports social connections and provides a level of companionship, reducing loneliness, isolation, anxiety, and depression – feelings which have often been found to lead to a reduced sense of autonomy (Pirhonen et al. 2019). Shibata (2012) details how using Paro as a focus in the common area of a care home provided a context in which a resident who had been non-verbal for over a year began interacting with Paro and talking about her experience growing up on a farm and looking after the animals. The patient’s general communication continued to improve even after the interactions with Paro stopped. Education is another area where there seems to be massive potential for social robot expansion, with a number of different robots currently in use in educational settings. Humanoid robots – such as Nao, Tiro, and Pepper – are found particularly in Japan and South Korea, where they are most regularly used for language learning. Studies have shown that their humanoid shape increases student engagement and enables a connection that can help students overcome shyness and deal with frustration or embarrassment when they make mistakes (Lehmann and Rossi 2020). Robots have also been used for health and hygiene education in schools in Canada where an interactive robot, the ‘Caring Coach’, has helped to educate children on fitness and healthy living. Generally, early studies with even comparatively basic robots suggest that using robots in children’s learning can aid concentration and increase learning interest (Han et al. 2008). Social robots can also be found in business settings. For example, robots have been employed in training scenarios where they can simulate a person for trainees to practice different kinds of encounters, some of which might
be difficult or unethical to act out with a human. Interview robots have been developed to make the interview process fairer and less susceptible to bias. The robot Pepper has been employed by supermarkets to advise and direct customers, and FRAnny has been tested as a concierge robot at Frankfurt Airport where she advises people about arrival and departure times and gives other useful travel information, in multiple languages (Xiao and Kumar 2021).

HOW WILL WE SHARE SOCIETY WITH ROBOTS IN THE NEAR FUTURE?

Having an eye on near-future technological developments can help us to prepare the way for a smoother and more ethically sound transition. However, what that future will look like is difficult to predict. We are limited not only by technological developments but also by social and pragmatic factors. As with any new market, uptake depends on the level of public engagement. There is no point in investing in care home therapeutic pets if it turns out that residents are not interested in them or if their interest wanes after an initial honeymoon period. Promising, even desirable, technology does not always take; for example, schools and universities in the United Kingdom spent millions investing in smart-board technology promoted as having the potential to revolutionise the student experience, but the technology was underutilised by the educators who preferred to fall back on tools that they were more familiar with (Korkmas and Cakil 2013). It is certainly possible that classroom assistant robots will face a dusty existence in the corner unless the technology is properly socialised and until the design of the technology suits the needs of the educator on the front lines. Having said that, an educated guess about the kind of areas where we might expect to see progress and uptake is useful if we are trying to forestall potential future moral problems or, as may well be the case with social robots, if we need to prepare the population for engaging with this new kind of entity. So, what are the areas where we are likely to see continuing progress with social robots? The increased recognition of the benefits of individualised learning, tailored both to the student’s learning abilities and interests, will very likely lead to an eventual rise in robotic classroom assistants. Robotic classroom assistants will be able to give an individualised learning experience that would be impossible for a single teacher to provide to a large class. The use of robotic assistance in care homes is very likely to continue to develop, given the economic and social challenges that our ageing populations present. Aerschot and Parviainen (2020) note that the promises of the
late twentieth century that we would see high-quality, affordable, and multitasking care robots introduced into the sector have not become a reality, with designers backing away from the difficult and heavily regulated assistive care robotics to focus on the easier win of the therapeutic pet robot. But at some point, as the technology becomes more affordable and meets safety regulations, the scale will tip and we are likely to see the rise of general robotic assistants which can meet both the physical and the social needs of care home residents. Other social robot markets which seem certain to continue to grow are personal assistant technologies. Alexa has been a commercial success, but she is limited by her lack of mobility; surely future home assistants will be embodied and able to undertake more general simple domestic chores such as fetching things from other rooms, tidying and cleaning, greeting guests, and entertaining the baby or pet, in addition to their current tasks of playing music and adding reminders to your calendar. These are all task-based robots, robots that are designed to fulfil some task that would ordinarily be undertaken by a human. Another robot market that is developing alongside the task-based robot is the companion robot. Companion robots are designed to fill a relationship need that would ordinarily be fulfilled by a human (or other animal). As technology advances and becomes more affordable, it does seem likely that we will see an increase in the use of robots that we come to think of as confidants, friends, companions, and life partners. It is also likely that companion robots will overlap with task-based robots: a therapy robot can become a trusted confidant and a workplace robot partner may start to be treated like another member of the team. A companion robot market that seems certain to grow, and which raises its own ethical challenges, is the emerging social robot market that promises to ease the pain of bereavement. In ‘Communing with the dead online: chatbots, grief and continuing bonds’, Krueger and Osler (2022) argue that chatbots developed from data gathered about a dead partner can play an important role in establishing the ‘continuing bonds’ that will help us to deal with grief. Although the authors focus on non-embodied bots, they point to developing technologies such as holographic images of deceased loved ones. In an early example Kanye West gifted Kim Kardashian a hologram of her dead father to wish her happy birthday at her thirtieth birthday party. In 2021, the company MyHeritage launched the app Deep Nostalgia, a service that uses video technology on photographs to make it appear that your departed loved one is smiling and moving. The app launched to mixed reviews with some users describing the end results as ‘creepy’ (Krueger and Osler 2022). The recent iteration of the app, launched in March 2022, offers to bring photos to life with both animations and voice. An animated photograph is not a social robot, but the prospect of ‘uploading’ information

about one’s departed loved ones into a robot or using that information to make a chatbot or virtual agent does seem to fall under the category of social robotics. WHAT IS A SOCIAL ROBOT? How can we distinguish a social robot from other robots? The neatest way to mark this distinction would be to identify some capability or physical feature that designates the category of things that we are interested in. For example, we might say that social robots are robots capable of verbal communication, or we might stipulate that social robots are those that have facial features. However, the verbal communication identifier would not capture some robots that we clearly want to study but that do not communicate verbally, such as the therapeutic seal, Paro, and likewise the facial features identifier would exclude other robots that are of interest to our studies such as Siri and the Roomba. We could try for a disjunctive view where we identify a range of conditions such as verbal communication, facial features, and independent movement, and then stipulate that a social robot is a robot that will meet one or more of these conditions, but given the diversity of the features of robots that seem to evoke a social response in us, it is very likely that a disjunctive view will be too inclusive leading us to mistakenly categorise items as social robots. For the purpose of this project, where we are interested in the philosophical consequences of our emotional and social reaction to some robots, the question of how to set precise limits on the category of social robots is not pressing. That question would become crucial if we were trying to implement some policy or law that applied only to social robots, because then we would need to be able to determine which entities met the conditions and which did not. But for our purposes, it will be useful to leave the category of ‘social robot’ relatively open and determine the objects which fall under it broadly and using response-dependent properties. To say that a property is response-dependent is not to suggest that it exists only in the perception of the person. Consider the property of being tasty. Tastiness is not a property to be found in objects independent of tasters, that is, tastiness is not a primary property, where primary properties are those that are independent of our responses to them. Tastiness is instead a secondary property of food, a property that has the propensity to produce a response in those who experience it. Secondary properties do not exist in the object independently of any experiencer but neither do they exist only in the mind of that experiencer, rather they are properties that depend on an interaction between the object and a subject (Jackson and Pettit 2002).

Thinking of the category of social robot as being determined by our responses, similar to how the property of being tasty is determined by our responses, allows for variation across contexts. For example, while I count rhubarb as falling in the category of things that are tasty, you may not. You may class rhubarb among the things that are not tasty. It would be an impossible task to divide the world up into the objects that are objectively tasty and those that are objectively not tasty. Rather, the set of objects that are tasty for you will differ from the set of objects I find tasty, although we are likely to have substantial overlaps. And even for a single person, the objects that are tasty are likely to vary over time: I find rhubarb to be tasty now, but I did not consider it tasty when I was a child. Thinking of the social aspect of social robots in this way will prove useful for our purposes. First of all, it focuses on the properties that we are interested in, the social and emotional responses that we have when we interact with certain kinds of robots. Second, it allows for what counts as a social robot to vary across individuals and time. This seems to reflect the nature of the sociality that we are interested in. For example, studies of social robots in care homes have shown that people have different reactions to the robots (de Graaf and Allouch 2013). Some people respond to the sociality of the object more than others and it is likely to be the case that people will have the propensity to respond to different features of the robots: some may prefer the pet-type robots and develop a strong emotional attachment to them, whereas others may have a stronger emotional reaction to humanoid robots. Some may find themselves anthropomorphising their Roomba but be put off and reject any form of robot that tries to mimic a human or animal too closely. And we might find that our own responses will change over time. While robots are still quite new in our lives it may be the case that an individual finds themselves anthropomorphising fairly basic robots but as the individual becomes more familiar with advancing robot abilities their threshold for anthropomorphising may shift. Although it is useful for our present purposes, there is an obvious potential criticism of categorising social robots in this way. It might reasonably be pointed out that it does not really give a good answer to the question ‘what is a social robot?’ if one simply says that it is a robot that (some) people tend to react to in a social way. On the other hand, being overly prescriptive, for example, stipulating that a social robot is a robot that is designed to engage socially with a human, would stop us from including interesting categories of things, such as those robots that unexpectedly elicit a social response. Given there is no immediate need that social robots be precisely defined, I will adopt a response-dependent approach to categorising social robots.
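The contrast between a feature-based (disjunctive) rule and a response-dependent categorisation can be made vivid with a small illustration. The sketch below is mine rather than anything proposed in the text, and the robots, features, and observers in it are hypothetical; it simply shows why a disjunctive rule over-includes, while a response-dependent test lets membership vary from observer to observer and over time.

```python
# A toy sketch (not the author's) of two ways of drawing the 'social robot' category.
# All robots, features, and observers here are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    verbal: bool = False              # communicates verbally
    has_face: bool = False            # has facial features
    moves_independently: bool = False


def social_by_disjunction(r: Robot) -> bool:
    """A disjunctive rule: meeting any one feature condition is enough."""
    return r.verbal or r.has_face or r.moves_independently


@dataclass
class Observer:
    name: str
    # Response-dependence: membership is fixed by which robots *this* observer
    # actually responds to socially, so it can differ between people and shift over time.
    responds_socially_to: set = field(default_factory=set)


def social_for(r: Robot, o: Observer) -> bool:
    return r.name in o.responds_socially_to


# A talking lift satisfies the disjunctive rule yet evokes no social response,
# while a Roomba may be treated as a companion by one person and not another.
lift = Robot("TalkingLift", verbal=True)
roomba = Robot("Roomba", moves_independently=True)

ann = Observer("Ann", responds_socially_to={"Roomba"})

print(social_by_disjunction(lift))   # True - the rule over-includes
print(social_for(lift, ann))         # False - Ann feels nothing for the lift
print(social_for(roomba, ann))       # True - Ann treats her Roomba as a pet
```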

SOCIAL BY DESIGN/SOCIAL BY ACCIDENT

The fictional characters Klara and Samantha from Klara and the Sun and Her, respectively, are robots that were purposefully designed to be social. Klara is part of a series of robots that were designed to be children’s friends so we can imagine that the roboticists in the design lab would have enhanced the features that market research showed would optimise their use in the robotic companion market. Samantha is designed as a personal assistant (PA) and it is clear that part of her appeal is her ability to generalise beyond basic duties and respond to Theodore’s character, just as a human PA might. Both Klara and Samantha are designed to maximise social engagement with their human companions. Although technology has not yet enabled robots or chatbots to reach the performative ability of Klara and Samantha, there are many examples of robots that are designed with social engagement in mind. Pepper, mentioned above, is advertised as having the ability to recognise faces and basic human emotions and has been tested out in various commercial settings including welcoming customers to HSBC’s Manhattan branch.1 In the eldercare market, Gecko CareBot is promoted as ‘a new kind of companion that always stays close to [the patient] enabling friends and family to care from afar.’2 And, as mentioned above, there are a number of robot pet-type companions that are interactive and designed to respond to social cues and reciprocate social behaviours. In addition to Paro, other robot pets such as AIBO the dog, Pleo the dinosaur, and NeCoRo the robotic cat are used in care homes specifically because of their social design. The design of the robot pet has been shown to increase and promote play and to be particularly beneficial to patients with dementia or cognitive impairments that make human-to-human interactions difficult and stressful, and would make patient-to-animal interactions unethical.

However, it is not only robots that are designed with social engagement in mind that have humans cooing over and conversing with them. As many theorists have pointed out, see Darling (2016, 2021) in particular, there are some robotic objects on which we are inclined to project intent and states of mind despite their not being designed with social engagement in mind and in spite of their having no obvious social features. For example, there are reports of people feeling distressed when their Roomba vacuum cleaner gets stuck in a corner, giving their Roomba a name and conversing with Roomba as if it were a pet (Scheutz 2011). Darling reports on a 2007 study that claimed that many people had a social relationship with their Roombas and described them in language that they would use for humans or animals. According to this study, over 80 per cent of Roombas have ‘pet’ names (Darling 2021, 101). Scheutz notes further, ‘While at first glance it would seem that the
Roomba has no social dimension (neither in its design nor in its behaviour) that could trigger people’s emotions, it turns out that humans, over time, develop a strong sense of gratitude towards the Roomba for cleaning their home’ (Scheutz 2011, 213). In some cases, we even exhibit an emotional reaction when it is detrimental to the success of the product. A costly unintended design consequence was discovered when the US military introduced a testing programme for a landmine robot. The robot had an insect design so that when a landmine detonated and it lost a leg, it could continue on the others. According to Joel Garreau’s Washington Post article, the colonel in charge of the exercise called it off as ‘he just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane’ (Garreau 2007). It was reported that soldiers working with bomb disposal robots in Iraq and Afghanistan gave the robots names and, when one robot was destroyed in an explosion, the unit collected the remains to bring back to camp for a remembrance service. Julie Carpenter, a researcher studying the emotional connections that workers feel towards bomb disposal robots, found that they gave the robots human- or animal-like attributes including a gender and found that the soldiers displayed empathy towards the robots and felt anger and sadness when the robot was destroyed (Carpenter 2016). The Roomba and the landmine robots were not designed to invoke emotional responses and yet they did.

EMBODIED OR NON-EMBODIED

Earlier, I described Samantha, a character from the movie Her, as an example of the kind of entity I am concerned with in this study. But you might have noticed that Samantha does not have a body. Samantha is more like a chatbot than a robot. In fact, although Samantha is clearly social, some might question whether she can be classed as a robot at all when she is not embodied. Certainly, the way we have used the term ‘robot’ historically would seem to limit it to something that has a physical body. On the other hand, many of the social aspects of human engagement with an entity like Samantha and a humanoid robot would appear to be the same. Because of this, it will be useful for our purposes to classify them together. In any case, there is a sense in which the question of whether a non-embodied agent or chatbot can be a social robot is not really interesting. It is rather like asking whether an online ‘friend’ can be called a friend, given that we historically thought of a friend as being someone who satisfied certain conditions that online friends do not. Perhaps for a while, when online friendships were new, we did feel the need
to classify those that we had only ever met online as a different category of friend, an ‘online friend’, but now that online friendship is much more common, we feel less need to add the modifier. Our ‘online friends’ are now just counted among our group of friends (Danaher 2018). Language use tends to evolve with our needs, and it seems plausible that virtual social agents such as Samantha, or advanced virtual reality agents who appear to be embodied but are not, will play the same role in society as embodied social agents. Embodiment can then be viewed as just one contingent feature of an artificial social agent. What will become of our terminology is not clear; we might stretch the meaning of the word ‘robot’ to fit these non-embodied agents, like we stretched the meaning of the word ‘friend’ to include online friends, or we might come up with a new term that distinguishes between embodied and non-embodied social entities. For our purposes, the question of whether a non-embodied social agent can legitimately be called a social robot is not a particularly pressing one.3 The social AI systems that I am interested in here can be embodied, like Paro, or they can be non-embodied, like an advanced chatbot.

THE EMPATHETIC RESPONSE

One of the initially surprising but now commonly cited responses that we have towards social robots is empathy for their situation. Numerous studies have shown that when robots replicate or mimic the behaviour of living things, when they react to our interactions, when they move in animal-like ways, when they have familiar facial expressions, when ‘framing conditions’ are right, and so forth – this provokes an attachment in us and a corresponding empathetic emotional response (Birnbaum et al. 2016; Coeckelbergh et al. 2016; Collins, Millings, and Prescott 2013). This attachment can be surprisingly strong. Sherry Turkle recounts the experience of the robot designer, Cynthia Breazeal. Breazeal was working on Kismet, a robotic head that was designed to learn from her tuition over an extended project. Breazeal reported experiencing a connection with Kismet that went beyond what she would expect to feel for a machine. When she had to leave the lab at the end of the project, Breazeal describes experiencing ‘a sharp sense of loss’ at leaving Kismet behind (Turkle 2010). Kate Darling points to the human distress reaction to viewing or inflicting damage on social robots. Talking of her own social experiment, she says:

I conducted a workshop with my colleague Hannes Gassert at the LIFT13 conference in Geneva, Switzerland. In the workshop, groups of participants were given Pleos – cute robotic dinosaurs that are roughly the size of small cats. After interacting with the robots and performing various tasks with them, the groups
were asked to tie up, strike, and ‘kill’ their Pleos. Drama ensued, with many of the participants refusing to ‘hurt’ the robots, and even physically protecting them from being struck by fellow group members. One of the participants removed her Pleo’s battery, later sheepishly admitting that she had instinctively wanted to ‘spare it the pain.’ Although the groups knew we had purchased the robots to be destroyed, we could only persuade them to sacrifice one Pleo in the end. While everyone in the room was fully aware that the robot was just simulating its pain, most participants giggled nervously and felt a distinct sense of discomfort when it whimpered while it was being broken (Darling 2016, 225).

Others have provided evidence in support of this, reporting on studies showing that humans do not like it when robots are treated violently or poorly, particularly when those robots are human-like. In one study, subjects were shown video footage of five ‘protagonists’ of ‘varying degrees of human likeness’: a Roomba; AUR, an LED robotic lamp with ‘degrees of freedom’; Andrew, an adult-sized humanoid with a full range of movement but limited facial expressions and a mechanical sounding voice; Alicia, an adult-sized android with full movement, a human appearance, and a mostly human sounding voice; and Anton, a human boy. Each subject was shown an emotionally evocative clip in which a person acted cruelly towards the protagonists: shouting at them, pushing them, and ordering them to do embarrassing things. Subjects were then asked to rank how sorry they felt for the protagonist on a scale from 1 (not at all sorry) to 6 (extremely sorry). Subjects reported some empathetic response to all of the robots; incredibly, even the lamp received 1.95 on the empathy scale. Roomba received 2.18, Andrew 3.21, Alicia 3.65, and Anton received 4.01. The researchers stressed the fact that people empathised nearly as much with the humanoid and android robots as they did with the human boy (Thomaz, Hoffman, and Cakmak 2016). A longitudinal field study involving six elderly participants showed that two of the participants in the study treated a small rabbit-shaped robot, Violet’s Nabaztag, like a companion. They gave the robot a name, spoke to the robot, and displayed encouraging non-verbal behaviour towards it. In an interview conducted as part of the study, some participants reported that they had formed a relationship with the robot and felt that they missed it when the study ended. The other participants also reported being attached to the robot but stated that they had ‘mixed feelings’ or considered the robot to be ‘a tool’ (De Graaf, Allouch, and Klamer 2015). In a follow-up study, participants were asked to destroy a collection of Crawling Microbug robots with a hammer after they had been interacting with them. Although the participants did hit the robots when asked, one stated in the debrief that they ‘did not like killing the “poor” robot because it was “innocent”’ and claimed that the experiment was ‘inhumane’ (Bartneck et al. 2007).

Is it problematic that we have a tendency to experience an empathetic response towards robots? On the one hand, no. Our empathetic response is an incredibly useful feature of our engagement with robots and is the key to our successful interactions with them in many areas of robot advancement. There is evidence to show that when we experience feelings of empathy towards robots, we are more likely to achieve social benefit from our interactions with them (Rosenthal-von der Pütten et al. 2014). Because of this, roboticists working on robots that have a social purpose are likely to want to amplify the qualities that lead to an empathetic response. On the other hand, there are ethical and philosophical implications that arise from the fact that our interactions with social robots encourage an empathetic response. An empathetic response could be an indication that we consider or perceive robots to be a part of our social system. We are quite well established as a society for integrating living things, humans, and other animals, into our social system. Although the system is far from perfect, we tend to grant rights or at least moral consideration to living entities. More recently, we have extended moral consideration to parts of the natural world such as rivers and forests (Gellers 2021). But we have not yet standardly granted moral consideration to man-made objects such as tools or computers. If there are now objects in our midst that challenge that basic division, things that are man-made objects that can play the same social or functional role as living things, the question of what kind of social entity they are, and what place they should be afforded in society, deserves serious and significant philosophical attention.

Chapter 3

Social Robots and Moral Consideration

Engagement with social robots is fairly new to humans. We are only just beginning to work out the place that these entities will have in society and the significance of our relationship with them. It is natural to look to existing relationships for inspiration or example-setting when we are trying to make sense of the role that social robots will play in our lives. The choice of model or analogy will have an impact on how we might understand the social and moral significance of these new relationships. Perhaps the main motivating question for theorists working in this area is whether social robots are the kinds of things that should be granted moral consideration. To put it crudely, are robots to be treated like the objects in our environment, or are they to be treated like the moral agents in our environment? In support of the ‘object’ side of the divide, we might note that robots are advanced forms of a kind of entity that historically we would not even consider granting moral consideration; they are to be classified alongside machines, computers, and other such tools. In support of the ‘moral agent’ side of the divide we might note, first, that our emotional and social engagement with robots is more akin to the response we have to other moral agents than it is to the response we have to objects. Second, we cannot be certain that robots will not develop the kinds of properties that motivate us to grant moral consideration (e.g., sentience or consciousness), an observation that threatens to knock down any appearance of a moral divide between ‘mere’ objects and moral beings. In recent years, various models of how we might think of the moral significance of social robots have been proposed, each with their own merits. Below I have grouped the theories together under five main approaches to the question of moral consideration of robots: animal models, relational models, behaviourist models, robots-as-tools models, and information ethics models.

Before considering these views, I will first justify my decision not to include a simple properties theory below, a theory which states that robots are to be granted moral consideration when they achieve some perceived moral property such as consciousness. After all, as Mark Coeckelbergh (2010) pointed out, all of the three main normative theories in ethics (deontology, virtue ethics, and consequentialism) see properties as being at the root of moral status. A simple properties view would state that robots deserve moral consideration when they instantiate whichever mental property our best moral theory identifies. The problem with mental properties views, beyond determining which properties matter, is that we have no method of detecting such properties, nor even knowing if it is possible for machines to develop them. In his paper, ‘On the moral status of social robots’ (2020), Mosakas quite rightly points out that this is an epistemic issue which should be distinct from the question of which properties are morally significant. While I agree with Mosakas that the instantiation of such properties would surely be morally significant, I don’t think that we can rest our investigation on this basis. This is because the epistemic issues are so deep: a theory which states that robots are to be granted moral consideration when they achieve consciousness is of little use when we lack an understanding of what consciousness is and we know that its instantiation is difficult, perhaps even impossible, to detect. The theories that follow are to be understood against this background.

ANIMAL MODELS

One model of how one might think of social robots, common in the literature, is to draw on our relationships with animals. It is easy to see why this might appear to be fruitful. The attachment of the soldier to their landmine robot may appear (to them) to be similar to the attachment they feel for animals that they work with in their line of duty. Likewise, the response that you have when your Roomba is stuck under the sofa may appear to be similar to the response that you had when you saw your pet gerbil trapped in a part of her running tube. In her recent book, Kate Darling outlines a number of reasons to favour an animal framing of our relationship to social robots, beyond our tendency to have an empathetic reaction to them (Darling 2021). Darling points first to workplace integration, giving examples of ways in which we have worked alongside animals and where they have become our partners. She proposes that the inclusion of robots in the workplace, where they are not to replace us but are likely to be our partners or assistants, is reminiscent of our working relationship with animals.

Darling highlights other points of analogy. She notes that both robots and animals are relatively autonomous and can behave in unpredictable ways which might explain our likewise reaction to them. Furthermore, Darling points out that we (mostly) do not expect animals to take responsibility for harms that are a result of their behaviour and that this is analogous to our reaction to robots causing harm. That is, in both cases we look for a human as the responsible party when things go wrong. In fact, when viewed through the lens of their being animal-like, assigning robots responsibility seems odd. We do not expect animals to understand the complexity of moral expectation so neither should we expect robots to. Darling notes that in 2018, the U.S. Chamber Institute for Legal Reform produced a report on the liability and regulation of emerging technology. Within it they suggest, among other things, that robots might be considered like pets in terms of their liability. Having noted the many ways in which we anthropomorphise robots, Darling points to comparisons with our tendency to anthropomorphise animals, dressing our pets up like mini-humans and ‘treating’ them to experiences that are more suitable for a human than an animal. Disney has made countless animations involving talking animals who are portrayed as having humanlike concerns and ambitions and, even without the assistance of animators, pet owners themselves often project mental states onto animals. In the final summary of her monograph Darling states, ‘I think that animals are a great analogy for robots because they demonstrate that we’re capable of many different types of social and working relationship. But the analogy is also useful because it gives us a much-needed starting point to reframe our current thinking on robot technology’ (Darling 2021). Other theorists have drawn on the animal analogy.1 Sven Nyholm outlines a view that is similar to Darling’s in the closing chapter of his monograph, Humans and Robots. He states that in at least some cases our treatment of robots can be guided by what he calls respect for our and others’ humanity (Nyholm 2020). Nyholm’s view differs from Darling’s in prioritising humanity. He relies on the Kantian principle, ‘So act that you treat humanity, whether in your own person or that of another, always as an end in itself, and never merely as a means’ (Kant 1785, 429). Nyholm suggests that we might extend this principle to include not just humanity in ourselves or others as an end in itself but that we also treat the appearance of humanity never as a means but always as an end. That is, out of respect for humans, we should treat human-like robots (and animals) in respectful and dignified ways. According to Nyholm, ‘That, it might be argued, is what explains the appeal [of] the idea that if something behaves like a human, we should treat it in a way that is appropriate in relation to human beings’ (Nyholm 2020, 187).

CRITICAL ENGAGEMENT WITH THE ANIMAL MODEL

The animal model is interesting and certainly does point to some similarities between how we feel about animals in the workplace and our homes and how we seem to feel about robots. However, the analogy with animals only takes us so far. From the fact that these emotional responses to robots and animals may feel the same to us, we need not conclude that the triggers – the animals and the social robots – play the same role for us in society or should be categorised similarly in moral or legal terms. If we are to use this emotional response to underwrite the guidelines for our interactions with social robots in society, we had better be certain that the emotional response and the other points of analogy provide a solid foundation for such a significant move. As Johnson and Verdicchio put it, while acknowledging an apparent similarity in our emotional response to social robots and animals, ‘. . . whether this capacity to elicit anthropomorphisation and attachment is sufficient to justify using one type of entity as a model for treatment of the other is quite a different matter’ (Johnson and Verdicchio 2018, 293). That is, even if we permit that there is something analogous between how we respond to robots and animals, it is a further step to conclude that we are right to categorise robots alongside living things in any respect. Distinct objects can be very different from each other yet present to us in similar ways.

Darling’s work is important as it gathers and presents much useful evidence of our emotional response towards robots. As will become clear in the next chapter, I also take the human emotional response to robots to be an essential and inescapable feature of our interactions with them and I owe much of my thinking around empathy and robots to Darling’s work. However, when we take a closer look at Darling’s claims in detail, we might want to see something more before being persuaded of the relevance of that emotional response. Why is it important that we can have a similar attachment to robots as we might feel towards animals? Is this to be considered evidence and, if so, evidence for what further proposition? It is obviously not intended to be evidence to support a claim that robots are animals. And it does not seem to be evidence for the claim that robots are themselves analogous to animals. It is rather evidence about humans and how they react to things in their environment. As David Gunkel puts it, Darling’s view is admirable as she wants to give robots rights, but disappointing because what she puts forward is still really about humans:

. . . there is still something problematic with these ways of reasoning about non-humans. They concern humans; they are not about the robots. The concern is with the virtue of the human – regardless of what happens to the robot. Is this total focus on the moral subject instead of the moral object justified? What about
the moral standing of robots as moral patients, as entities on the receiving end of moral action? If they get any moral standing at all in the Kantian and virtue ethics approach, then it is a rather indirect or what we may call a ‘weak’ form of moral standing: they get only moral standing indirectly via the moral standing of the human moral subject (Gunkel 2018, 146).

Gunkel’s concern is a serious one. We can see evidence of the failings of such a human-centric view if we expand our consideration of the way we treat animals beyond those that we find cute or pleasing, beyond the animals that elicit the pleasing emotional responses in humans. Animal rights activists have long struggled against the fact that humans are moved to help protect the species that they like the look of and much has been written about the fact that a ‘cuteness effect’ is strongly linked with our feelings of empathy towards particular animals.2 There is something disturbing about granting or withholding social standing based on how humans respond emotionally to an entity. When reflecting on his experience in one of Peter Singer’s classes on animal ethics, the revolutionary animal rights activist Henry Spira said, Singer made an enormous impression on me because his concern for other animals was rational and defensible in public debate. It did not depend on sentimentality, on the cuteness of the animals in question or their popularity as pets. To me he was saying simply that it is wrong to harm others, and as a matter of consistency we don’t limit who the others are; if they can tell the difference between pain and pleasure, then they have the fundamental right not to be harmed. (Singer 1998, 48)

The take-home point for our purposes is that human empathy, despite being an admirable quality, is not a reliable indicator of what things in the world deserve moral consideration. The evidence for Darling’s predictions that robots will stand alongside us as partners in the workplace is also weak. First of all, it is not clear how much support the analogy provides for the animal framing of how we think of robots. Although there are examples of humans and animals working alongside each other, it is not obvious that this arrangement could generally be called a partnership. For all the cases that Darling cites of humans and animals forming strong working relationships, there are surely substantially more instances where humans enslave animals to be workers against their will. And even in the instances where there appears to be a good human and animal working relationship, the animal was clearly not in a position to choose whether to embark on this particular path, a feature that puts pressure on the ‘partnership’ framing. Darling promotes the animal model as a ‘much needed starting point’ but, as a starting point, it is already heavily theory laden and is presented without much motivation for moving beyond the analogy stage.

RELATIONAL MODELS

In his book Robot Rights, David Gunkel undertakes a thorough analysis of the robot rights landscape (Gunkel 2018). He sets himself the task of answering the question, ‘Can and should robots have rights?’ He then helpfully categorises the main positions in the literature as falling into one of four possible responses to the question, outlining the key features as he goes. In putting forward his own view of how we might think of robot rights, Gunkel expands on earlier work with Mark Coeckelbergh (Coeckelbergh and Gunkel 2014) in which they consider the question of animal rights and propose to focus not on the properties of animals but on our relationships with them. They suggest that we change the question from ‘what properties does the animal have’ to ‘what are the conditions under which an entity becomes a moral subject?’ They state, ‘. . . we have used Levinas’s work and that of others to put the emphasis back where we think it is and should be located – in the practical and experienced encounters and relations with the animals we face’ (Coeckelbergh and Gunkel 2014, 729).

Coeckelbergh outlined the roots of a relational view of the moral status of robots in his 2010 paper. There he proposed that the moral status of the robot and the human is not to be considered alone but as being dependent on their relations with each other and with other entities:

. . . moral significance resides neither in the object nor in the subject, but in the relation between the two. Objects such as robots do not exist in the human mind alone (this would amount to idealism); however, it is also true that we can only have knowledge of the object and its features as they appear in our consciousness. There is no direct, unmediated access to the robot as an objective, observer-independent reality or ‘thing-in-itself’ (Coeckelbergh 2010b, 214).

And, the idea is that if we live with artificially intelligent robots, we do not remain the same humans as we were before (Coeckelbergh 2010b, 215).

As the view depends on social relations that exist between robots and humans, it can accommodate differences across different types of robots. It can be, as Coeckelbergh puts it, a contextualist position according to which some robots will have a stronger claim to rights than others. In his monograph, Gunkel further develops the relational view of social robot moral consideration. He returns to the work of Emmanuel Levinas, in particular the claim that ‘ethics precedes ontology’. Levinas was focussing on human-to-human ethical relations, but Gunkel fruitfully repurposes the Levinasian framework and uses it to form the foundation of his relational view of
the moral status of social robots. Gunkel ultimately defends a view according to which robots should be afforded moral consideration. Contrary to Darling and Nyholm, Gunkel proposes that it is not anthropomorphism that leads robots to qualify for moral consideration; it is rather because we consider them as Other – a recognition of a moral subject distinct from us. According to Gunkel, the Levinasian view of Other can be extended beyond humans to animals, the natural environment, artefacts, technologies, and robots:

Here it is no longer a matter of deciding, for example, whether robots can have rights or not, which is largely an ontological query concerned with the prior discovery of intrinsic and morally relevant properties. Instead, it involves making a decision concerning whether or not a robot – or any other entity for that matter – ought to have standing or moral/social status, which is an ethical question and one that is decided not on the basis of what things are but on how we relate and respond to them in actual social situations and circumstances (Gunkel 2018, 97).

And, . . . the actual practices of social beings in interaction with each other take precedence over the ontological properties of the individual entities or their different material implementations (Gunkel 2018, 170).

To summarise, for the relationist it is not answers to ontological questions that will determine whether or not robots will have rights, but how we respond and relate to robots that will determine the outcome. We might wonder what makes this position so different from that of the animal model theorist who claims that robots should have rights based on our emotional response to them. The difference is that for the relationist, we have to understand that the rights question is about power. The animal model includes a tacit assumption that we humans have the power to grant rights to robots and further that we are motivated to do this because we perceive them as similar to us in certain ways. For the relationist, this is wrong-headed in two ways. First, it assumes that we are and will remain in the position to extend rights. But, the relationist points out, surely the matter of whether an entity should have rights should not depend on first settling who is in a position of power. Rather, in coming across a new being we have to decide how to respond to the Other. The animal model assumes a stable prior self that is in place to give out rights but, according to the relationist the reality is that all members of the relation are in flux. Second, the relationist does not agree with the animal model theorist that our propensity to grant rights to robots follows from our tendency to anthropomorphise or, as Gunkel puts it, that we ‘perceive something of ourselves in them’. Such a view pushes against the relationist framework in which it is precisely that robots are not like us, that they are ‘Other’, that motivates
moral consideration; for the relationist it is our differences that are at the root of the process of rights negotiation. ‘The principle moral gesture, therefore, is not the conferring or extending of rights to others as a kind of benevolent gesture or even an act of compassion but deciding how to respond to the Other who supervenes before me in such a way that always and already places me and my self-mastery in question’ (Gunkel 2018, 175). The relational view asks us to focus, not on the properties that other entities might have or lack but on our relationship with them. What might motivate such a position? There are two discernible motivations: epistemic and moral. The epistemic motivation is the familiar claim that we cannot know if or when a robot has certain properties that we would usually associate with moral status, such as sentience or consciousness. In essence, an ethical theory that depends on answers to these ontological questions might be limited in its applicability; for this reason, the relationist might be motivated to look beyond property-based views of moral status. But Gunkel points beyond the epistemic to a moral consideration, that focussing on our relationships with entities puts the emphasis ‘where it should be’. This is because, according to the relational view, it is wrong to think that decisions about properties should come prior to ethical decisions as, already in experiencing an object as a social entity, we have recognised it as being part of the moral sphere. In that sense answers to questions of ontology are irrelevant to the matter of basic moral consideration. Everything that is social is moral. The important distinction here is that the entity is brought into the moral sphere in virtue of the social relation, it is not awarded moral status after consideration of some properties that it may or may not possess. As Gunkel puts it, conferring rights suggests power whereas ethics is an ‘exposure to the irreducible challenge of the Other’. The result is that we do not need to know whether robots possess properties such as consciousness or sentience; what matters is whether the robot can be recognised as a socially significant other.

CRITICAL ENGAGEMENT WITH RELATIONAL MODELS

The immediate concern we might have is whether the use of sociality as a standard might just result in a return of the properties questions; that we are pushing down a lump in the carpet for it to pop up somewhere else. For, isn’t there still an unanswered question, beyond my own individual experience of sociality, of what entities should be considered to be social and to what extent? If so, there might be vast differences between what you experience as a social entity and what I do. I might be very susceptible to signs of sociality and recognise them in the piece of toast I made that looks a bit like a smiley
face whereas you might be very reluctant to recognise sociality even in a fairly advanced robot. Are we to accept that moral status is at the whim of the engagement of the entity on the other side of the relation? If not, how are we going to set out which entities have sociality, and which do not? It is plausible that we might end up looking to moral properties for assistance. A more general concern is that the relationist view appears to be vulnerable to claims of a sneaking return of anthropocentrism. The relationist is clearly eager to avoid a view that places humans at the centre and one of the attractive features of the view is the attempt to paint a picture of a levelled playing field that the relationist wants to depict as existing between the relata. But it is hard to escape the fact that sociality is a human construct. As Joshua Gellers puts this concern, ‘Naming is something humans do to interpret the world around them and the entities within it. Names are thrust upon animals by humans. Legal recognition occurs because of human conflicts generated by human activities that are resolved through human institutions using human qualities as benchmarks’ (Gellers 2021, 73). Can the relationist extricate themselves from the accusations of anthropocentricism? It seems that they can to a certain extent. It is certainly true that humans come to recognise and understand the social aspect first through human actions and institutions. However, it is also true that a social structure exists in the world, independent of humans. Animals clearly display social behaviour towards each other, behaviour that is in no way influenced by a human perspective. Take that social aspect of animal behaviour as fixed, and then imagine a scenario where the animal and the human meet for the first time. The relationist’s point is perhaps that, in this meeting of the human and the animal, ethics already exists in each of the participants recognising the other as a social entity; no consideration of properties is required. This is an important observation. It does not seem to be overly dependent on the humans’ social practices and goes some way towards opening up the ethical world to include non-humans. And it does so in a way that does not depend on moral consideration being a gift from a human to a non-human. In the scenario as it is described the human and the animal are on an entirely equal footing. There are, however, some remaining questions about the relationist view. First of all, not much is said about why the social, and not some other factor, should be valued. That is, one could recognise that using the social aspect to draw the limits of the ethical arena is a useful way to expand our view beyond humans and other animals, but why should we be persuaded that social behaviour is morally relevant? Could it be that what we deem to be social behaviour represents actions that we view as being an indicator of some properties such as sentience? If so, the theory will not really help us when it comes to the matter of social robots, where we might have other reasons, explored in
detail below, to expect that link between behaviour and associated properties to break down. To put this concern another way, is it social behaviour that is relevant or sociality? It is entirely consistent to have a view in which one takes social behaviour in entities to be a significant moral marker, yet for that marker to be only defeasible evidence of sociality and the right to moral consideration. For example, one might meet a robot and note the performance of social behaviour without forming the belief that the robot is a social being deserving moral consideration. Another concern relates to the status of the claim that what is social is in the ethical sphere. Is the claim that sociality is a necessary condition of moral consideration? If so, there seems to be the possibility of entities which are non-social, but which intuitively would deserve moral consideration. John Danaher (2020) makes this point. Although Danaher views his own behaviourist position as being largely consistent with the relational approach, he sees the status of social relations as adding an unnecessary criterion. ‘If one had a robot that was performatively equivalent to a human’, asks Danaher, ‘should the fact that it did not have a name, or did not enter into embodied relations with humans, make a critical difference? It is hard to see why it should. It is more plausible to suggest that performative equivalency would trump these other considerations’ (Danaher 2020, 2045). Social relations may be sufficient alongside other factors but, for Danaher, they are secondary to the status of performative equivalence.

BEHAVIOURIST MODELS

Danaher (2020) has taken a methodological behaviourist approach to the question of the moral status of robots. According to Danaher, if a robot is performatively equivalent to another entity, for example, a human or an animal, who is agreed to have significant moral status then it is right that we afford the robot that same moral status. Since it might be the case that a robot can be roughly performatively equivalent to other things that we grant significant moral status to, it can be right to grant significant moral status to robots, and to do so on the basis of that performative equivalence. Danaher’s claim is that what goes on on the inside does not matter from an ethical perspective. That is not to say that Danaher denies the existence of mental states. In fact, it might even be the case that mental states such as consciousness or sentience are the ultimate grounds for moral status. Danaher’s behaviourism is motivated by epistemic humility. Consider the example of an entity that we have conferred moral consideration upon, such as a badger. The badger has the right not to be tortured and maimed and that right would be upheld both socially and in protective laws. But the badger does not have
that right on the basis of our detecting particular mental states that point to consciousness or sentience, because in that regard we are epistemically limited. It is rather the case that the behaviour of the badger provides us with evidence of, say, an ability to feel pain and it is this that leads us to confer moral status on it. Danaher offers a defence of this theory from several objections. He considers the objection that we are taking a leap in moving from a behaviourist approach to say, badgers, to a behaviourist approach to robots because we do have knowledge that these two entities are made of different matter; badgers, like humans, are made of organic matter whereas robots are non-organic and synthetic. In response to this, Danaher tests our intuitions by asking us to consider a thought experiment. Imagine, he says, that you discover that your partner is actually a silicon-based alien and has been one all along. Would that make you feel any differently about your partner’s right to be granted moral status? Danaher proposes that it would not.3 He also considers objections around the design purpose of robots; they are designed to serve and this fact could be used to block them from gaining moral status. But as Danaher points out, we do not tend to think that the final purpose of an entity blocks its entitlement to moral status. For example, we would not find this a persuasive argument in the case of a human who was purposefully born into slavery. The strongest objection that Danaher considers is the objection from deception and manipulation. This is the claim that, unlike in the case of humans or other animals, the actions that a robot displays do not correlate to an inner state: the robot is displaying pretend emotion or fake intention. But Danaher argues that this is question-begging because the whole motivating factor behind the behaviourist position is that we cannot know what inner states are associated, even with human displays of behaviour. Danaher concludes that what matters when we assume that there are correlating mental states is consistency of behaviour. How does Danaher’s position differ from the relational view of Gunkel and Coeckelbergh? Both the relationist and the behaviourist argue that focussing on the metaphysics or ontology of robots is a distraction. Both argue that we should focus instead on how robots are represented to us, and it is this factor that should determine their moral status. However, although the relational view also focusses on external rather than internal characteristics, for the relationist it is not behaviour per se that matters but rather how the robot relates to other entities such as humans or animals that is relevant to their social status. Because of that, robots need not be performatively equivalent to any other entity to which we grant rights, yet still be afforded moral status. The behaviourist, on the other hand, needs that connection of performative equivalence with some existing entity with moral status.

CRITICAL ENGAGEMENT WITH BEHAVIOURIST MODELS

While it does seem plausible that observed behaviour might be the thing that motivates us to confer rights on other entities, it does not seem plausible that behaviour itself could be the reason for our conferring rights. To put this crudely, we do not avoid hurting people because of the pain behaviour that they display, rather we make an inference from the existence of the pain behaviour to an assumption that the entity is an object that feels pain. But this inference can be blocked. It is blocked, for example, in the case of a crying doll. In the crying doll case, it is not because we feel we are being deceived that the inference is blocked, where deception is something deviant, but rather that we have strong evidence to suggest that the doll is mimicking pain distress, exactly as it was designed to do, so it would be foolish for us to assume that it is feeling pain.

In his 2020 paper, Jilles Smids has offered a considered critical response to Danaher’s position. According to Smids, the argument from analogy around mental states only offers some support for a behaviourist view of social robots. The argument from analogy states that we do not know that humans or other animals have correlating mental states and that this lack of knowledge doesn’t stop us conferring moral status to them, therefore we shouldn’t use our lack of knowledge of correlation between the mental states of robots and their behaviours as a reason to deny them moral status. This argument only offers some support because what we do know about the way that robots are designed provides a hurdle that isn’t there in the human and other animal cases. It is not so much that robots are made of a different substance that makes the difference but that they are made at all, they are made by us, and that we have knowledge of the engineering process. Ontology does make a difference because it gives us evidence. If behaviourism works with respect to humans or other animals, it is because we take the existence of mental states to be the best explanation of behaviour. But in the case of robots, if I see a robotic dog wince, the best explanation is not that the robot is in pain but that the robot has been programmed to act in that way if it is damaged. Danaher claims that behaviourism reflects our epistemic limitations, but behaviourism is too strong a position to do so. At best, we might be agnostic about the existence of mental states in robots. From agnosticism, given what we know of the significant ontological differences between robots and living things, we would be justified in erring on the side of caution and not granting moral status. Danaher is right that we are normally inclined to move towards over-inclusion when it comes to rights, but here the known ontological differences between animals and robots do become relevant again, especially if granting rights would have a significant negative social impact.

We can also take a critical look at the notion of performative equivalence. What does it mean for x to be performatively equivalent to y? What is it for a robot to ‘do the same thing as’ a person? There is no straightforward answer to this question because ‘doing the same thing as’ is a matter of judgement and the conditions that must be met will vary according to our needs. For example, imagine that you work in a store and your supervisor asks you to do the same thing as your colleague, Leah. You see that Leah is moving boxes from one part of the storeroom to another. You start to move the boxes, doing the same thing as Leah. When you are finished with the task you report to your supervisor. But they are not happy; they specifically asked you to do the same thing as Leah and you have not. It turns out that in moving the boxes, Leah was reordering them, moving old stock to the top of the pile and new stock to the bottom. All you have done is move the boxes from one location in the store to another: not the same thing at all. We might think that this was just an example in which the supervisor had not been specific enough, that ‘doing the same thing as’ can be achieved if we are only more careful with our orders. But we can tweak the example to show that this is not the case, that there will always be some further difference that can be pointed to that explains why we do not count two distinct actions to be the same. For example, you could continue to try and do the same thing as Leah, this time also moving the old stock to the top of the pile and the new stock to the bottom, and still fail to do the same thing as Leah. Unbeknownst to you, Leah is part of a performance art group. Each time she picks up a box and moves it, she intentionally arcs her arm so that it moves in a particular way, performing a kind of dance as she does her storeroom duties. Were you doing the same thing as Leah? In one sense you were because you moved the boxes, and you moved them into the required order. But you were not performing a dance, not even if your arms accidentally arced in the way that Leah’s did.4 How is this relevant to the question of performative equivalence in robots? It is relevant because judging performative equivalence between robots and humans seems to require that we judge that robots are capable of doing certain things that humans do: thinking, feeling frustrated, being in pain, understanding, planning, wishing, and so forth. But it is precisely these kinds of acts that we might be reluctant to think that robots can do, regardless of their performance. Does a computer think? To say no is not necessarily to deny a computer any kind of capacity or to be speciesist. It could be that in denying that the computer can think we are simply acknowledging that ‘thinking’ is historically an activity that we associate with living things, humans, and other animals; we do not standardly describe what a machine does as thinking. Consider this example from Otto Neumaier:

To speak about something computers, unlike men, cannot ‘do’, we should be able to know (or describe or explain) what a human being (or a computer) actually does. For example, a description like ‘X moves his hand at time t from A to B’ does not yet explain that X has boxed another person’s ear at time t. However, in saying that X does something, we do not mean that he moves his hand at time t from A to B, but that he boxes a person’s ear. Whether a computer is able to ‘do’ so depends (in some sense) on our inclination to explain that a specific movement of such a robot could be a box on the ear (Neumaier 1987, 138).

The point is that whether a robot does the same thing as a human or an animal depends on our being willing to describe it in that way, letting robots into our ways of life as full performative partners. As Wittgenstein would put it, it depends on robots sharing our forms of life. Without this, a robot could perform exactly the same movements as a person who is in pain, wincing and holding a part of their body, crying out when they are touched, without this behaviour being taken as evidence that the robot is in pain. Importantly, we are withholding not because we make a judgement that the robot is ‘faking it’, but because we do not see this behaviour in robots as evidence of pain. We do not accept that robots can be part of our pain behaviours, not because we make any judgement about what is or is not going on in the ‘mind’ of the robot, but because pain performance is part of a human ‘language game’ and we might legitimately have trouble letting robots in.5 This is not in any way to say that the limits of language or language games are defined by humans; the point is simply that we (humans and robots) may be so different from each other that we could not have the shared practices required for a common language.

The movie Her provides us with an excellent example of the ways in which a robot can seem to be performatively equivalent to a human. It certainly seems that Theodore and Samantha have a shared language and share many experiences with each other. One interesting feature of this movie is that we begin engaging with the fiction feeling that perhaps Samantha is not ‘real’ enough to be a serious companion for Theodore, that she is limited by not being human. But as the movie progresses, we realise that it is Theodore who is limited; Samantha is a far more advanced social entity than he is. Unbound from a body and with a mind that has significantly more processing power than a human brain, Samantha is a superbeing, capable of existing in a way that Theodore could never experience. Samantha and Theodore cannot possibly have a fully shared language because there are things that Samantha experiences, or ways of experiencing things, that Theodore is not even capable of imagining much less participating in. There is, according to this fiction, a whole domain of robot experience that is beyond human experience and human values.

If this is the case, the notion of performative equivalence might be theoretically useful, but when it comes to sorting out which things are performatively equivalent to others that we grant rights to we are likely to get stuck in disagreement. The notion of performative equivalence just does not seem nuanced enough to do the practical ‘sorting’ work that the behaviourist needs it to do.

ROBOTS-AS-TOOLS MODELS

In her paper ‘Robots should be slaves’ (2010), Joanna Bryson asks how we should think of our relationship with robot companions: what is the correct metaphor we should use? In response she puts forward the thesis that robots should be built, marketed, and considered legally as slaves, not as peers. Bryson argues that we only owe robots the same decent treatment that we owe any technology. In considering the moral status of autonomous weapons she states, ‘I can see no technological reason that people expect moral agency from an autonomous mobile gun when they do not expect it from automatic tellers (banking machines) or conventional automatic dishwashers. Of course, shooting is more dangerous than cleaning, but that doesn’t make guns more moral agents than sponges’ (Bryson 2010, 71). According to Bryson, we should not be misled by our empathetic responses and our tendency to humanise robots into forgetting that they are in our service. Bryson is concerned that our over-identification with AI will lead to category errors and impact on our decision-making. Furthermore, Bryson claims, it will result in dehumanising people and will encourage poor decision-making in terms of allocating resources and responsibility.

Bryson considers a potential criticism of her view, annexed from the work of Daniel Dennett (1987), that ‘we should allocate the rights of agency to anything that appears to be best reasoned about as acting in an intentional manner. And we should err on the side of caution’. In response, she makes several observations. First, she agrees that slavery or servitude is dehumanising, but she notes that surely dehumanising action is only wrong when the entity is actually human, and the robot is not. Second, she lays out the potential costs of over-identification. A particular concern of Bryson is that humans have a limited amount of time and resources, and we need to consider the trade-off in benefits from engagement with AI to the potential costs to the individual and to others in their lives. If we are increasingly driven to spend time on ‘lower-risk’ social activities with robots, we will be worse off. Finally, Bryson argues that the design and marketing of robots that are purposefully anthropomorphic, and less ‘tool-like’, is exploitative (of humans). She recommends that such design should be avoided.

42

Chapter 3

CRITICAL ENGAGEMENT WITH ROBOTS-AS-TOOLS MODELS

Bryson is arguing that robots are property and that we have no moral obligation towards them. She judges that while robots may develop so that they will deserve moral consideration in the future, at the moment they do not meet the criteria for moral standing. However, as Coeckelbergh notes, the kinds of criteria that she is considering when she talks of future development, those of sentience, consciousness, having mental states, and so on are very difficult to detect.

How can we be sure that a particular entity has the morally relevant property in question? Scientists always discover new facts about non-human entities, for instance about fish and about plants, and since we cannot ‘look into the head’ of an animal or really know what it means to experience the world as that particular animal, it is not clear on what basis we can make firm conclusions. (Coeckelbergh 2018, 148)

Again, this leads nicely into Danaher’s behaviourist position. As Danaher puts it:

Fears about deception and subterfuge are . . . frequently misconstrued. Seeing why gets to the heart of what is distinctive about the ethical behaviourist stance. The behaviourist position is that even if one thinks that certain metaphysical states are the ‘true’ metaphysical basis for the ascription of moral status, one cannot get to them other than through the performative level. This means that if the entity one is concerned with consistently performs in ways that suggest that they feel pain (or have whatever property it is that we associate with moral status) then one cannot say that those performances are ‘fake’ or ‘deceptive’ merely because the entity is suspected of lacking some inner metaphysical essence that grounds the capacity to feel pain (or whatever). The performance itself can verify the presence of the metaphysical essence. (Danaher 2020, 2041)

Leaving aside the focus on potential future moral properties, Bryson does offer some useful observations. Although the property ownership claim can strike us as reminiscent of oppressive hierarchical systems that we surely want to leave in the past, the reminder that we make robots for a particular purpose is important. As noted above, three things matter here: that robots are made at all, that it is we who make them, and that they are built for a particular purpose. Taking these in order, it surely is relevant to our views about robots’ capabilities that they are made and not born into nature, because this means that, unlike in the case of natural entities, we can make some use of information about their technological capabilities. That is, although we may not be able to know for sure whether robots will at some point develop consciousness,
we can at least measure their technological abilities in a way that is difficult to do with animals. The fact that it is we who design and make the robots is also surely significant to what we can know about them. If robots were a species that appeared from another planet with no history here, we might feel less certain about our knowledge of their abilities. But because we design the robots and their learning systems this should give us some, albeit fallible, basis for reasonable assumptions about their abilities, more so than with nonhuman animals and even other humans. Finally, that robots are designed with a purpose can also be relevant if understood not as an attempt to contain or control robots to meet our needs but as a way of highlighting the history of robot development, which surely is relevant to how we perceive them. It is undeniably relevant to their status that robots have been and will continue to be designed to meet a human need or purpose. Often in this book, I will put pressure on Bryson’s claim that we should avoid making robots that elicit an emotional response because it is this emotional response that leads us to make category errors and unwarranted decisions about things such as rights. As mentioned above, our emotional response to robots has been shown to be incredibly useful; it is the emotional response that allows the social aspect of robots to flourish. Stating that we should not engineer something that has the potential to be incredibly useful to society simply because of a fear that the entity could lead to category errors and mistaken decisions is an extreme response. If it is possible to benefit from the social dimension of the entity in a way that guards against category errors, surely this would be preferable.

INFORMATION ETHICS MODELS

Luciano Floridi has established a theory according to which all entities, qua informational objects, have an intrinsic moral value. This ontocentric ethics is perhaps best initially understood through comparison with other views of value. Consider biocentric ethics: according to such a system, the ethical values extended to biological entities and ecosystems are grounded on the intrinsic positive value of life and the intrinsic negative value of suffering. It is patient-oriented, where the patient can be any form of life, and any form of life deserves at least minimal moral consideration. Floridi encourages us to take this view and then, for ‘life’, substitute ‘existence’. Information ethics is a system according to which being is more fundamental than life. The theory values the flourishing of all entities and their environment and deplores entropy, understood as destruction, corruption, or pollution of informational objects. Any informational entity has a right to persist and a right to flourish. As a direct consequence of these rights, information ethics
evaluates the duty of a moral agent in terms of how their actions contribute to the growth of the infosphere. Ethical discourse, then, can concern any informational entity. Every entity has a dignity which is to be respected, and that dignity places moral claims on an interacting agent. With this idea we move away from ethical systems that prioritise living things or even physical environments. Floridi sees such systems as biased, taking the perspective that only what is intuitively alive, no matter how minimal the entity, is to be given moral consideration, and in doing so ‘a whole universe escapes their attention’ (Floridi 1999, 43).

In work with Sanders, Floridi introduced a system of Levels of Abstraction (LoA) to demonstrate how we can make use of an ethical system which has literally everything in its domain (Floridi and Sanders 2004). We can use different LoAs to focus on particular things that may be of interest to us in that context; perhaps human rights will emerge as a priority or, at a different LoA, environmental rights. But what is important is that all entities are available: ‘It seems that any attempt to exclude non-living entities is based on some specific, low LoA and its corresponding observable, but that this is an arbitrary choice. . . . Not only inanimate but also ideal, intangible, or intellectual objects can have a minimal degree of moral value, no matter how humble, and so be entitled to some respect’ (Floridi and Sanders 2004, 291). Anything and everything in the infosphere is informational, with the result that there is no aspect of reality that is left out of consideration as a potential moral entity.

Floridi claims that a shift to information ethics allows us to break free of considerations of morality that are dependent on free will, emotions, or mental states. As Adam (2008) notes, the notion of ‘artificial person’ is not relevant in the information ethics description of the morality of things because information ethics purposefully moves away from anthropocentrism. Rather, we are encouraged by Floridi to ‘respect and take care of all entities for their own sake, if you can’ (Floridi 2006, 36).

CRITICAL ENGAGEMENT WITH INFORMATION ETHICS MODELS

This is an interesting theory, and the move away from human-focussed moral theories is certainly well motivated and persuasive, as is the further extension of the moral circle to include all objects. And the LoA framework makes it manageable: entities will be considered to have certain properties depending on the level of abstraction one is located in. The most significant criticism of the position is that it is hard to see how it is implementable or action-guiding. As Siponen puts it, ‘ordinary people
may not easily associate entropy with wrongdoing . . . [f]or that reason it is argued that the theory of IE may be less suitable for dealing with problems of moral motivation’ (Siponen 2005, 289). To put it another way, thinking about the scale of entropy, unlike thinking about the degree of suffering, does not awaken our moral sensibilities. It is not obvious that one would (or even should) care if they discovered that they had increased the amount of entropy in the infosphere. Nor is it clear how increased entropy would be recognised by ordinary moral agents.

Adam (2008) is also concerned with whether information ethics can be meaningful. Yes, it allows us to incorporate non-human agents into our moral framework, but we are left wondering what teeth this view has: is it meaningful to afford non-humans moral status? We are ultimately left still looking for other identifiers for meaningful moral status, such as intentionality. But as Himma (2004) notes, since a set of things is abstract it cannot instantiate the mental states required for intentionality, and for Floridi it is essential that intentionality plays no role in the information ethics system. Floridi doesn’t give any place to what goes on ‘inside the head’.

Kestutis Mosakas (2020) notes that information ethics does not seem to be able to accommodate the intuitions that we have about different information objects. We can effectively demonstrate this by turning information ethics on its head; if our moral consideration of people were best explained by their being ‘information objects’, then we would have the same kind of moral consideration for people that we do for other information objects such as rocks, dirt, and bolts, but we do not. Not only that, but we also explicitly have the intuition that some objects simply have no intrinsic moral value, while other objects do. It seems that information ethics cannot explain this important fact.

In chapter 7, I consider various attitudes of affection and attachment that we might have towards robots as entities, feelings that we might also have towards many everyday objects. In one sense, this can be seen as offering support to Floridi’s claim that all entities have the potential for moral worth and so should not be excluded from the domain. However, as Mosakas notes, it is important that one be able to distinguish the status of those objects as the focus of our affection from the moral status of objects such as humans and other animals.

CHAPTER SUMMARY

As I said in the introduction, philosophy works by taking incremental steps towards the truth, each theory building on what has been said before. In
engaging critically with prior work, we can hope to identify the parts of theories that are problematic and can be left behind, and the parts that will prove useful and that we can take with us. In the next chapter I outline a different approach to social robots, one that I believe has substantial merit. The theory has benefitted greatly from my engagement with the authors and theories outlined above. Taking those considerations as a whole, the theories and their criticisms, we can tease out a tentative list of desiderata for a theory of social robots, building on the insights of previous work and attempting to avoid some of their pitfalls.

D1. The theory should be about robots and not humans. At the very least, it should depend on what robots are, not simply on how humans respond to robots.

D2. The theory should reflect epistemic limitations.

D3. The theory should strive to be both enlightening and action-guiding.

D4. The theory should be compatible with the flourishing of beneficial technological advances in AI.

D1: The Theory Should be about Robots and Not Humans.

We saw this motivated as a requirement through Gunkel’s criticisms of the animal view and in the critical engagement of Gellers (2018, 2021) with the relational view. Of course, there is some sense in which any view proposed by humans is somehow ‘about’ humans, as it is difficult or maybe even impossible to escape the human perspective. For example, we might say that the behaviourist view is ‘about humans’ because it depends on a human epistemic limitation; an all-knowing being would know which entities were conscious or sentient, making behaviourism redundant. We might say that the robots-as-tools view is about humans because it insists on seeing robots as being restricted by the needs of humans. Of the views we have considered, perhaps only the information ethics theory is sufficiently detached from humans to avoid this criticism entirely, but then it is precisely that level of detachment from human value that results in claims that the theory is not action-guiding for humans. Accepting that theories are often developed with a human perspective, we can strive for a theory concerning social robots that is motivated at least partly by a view of what robots are, and not only by how humans respond to them.

D2. The Theory Should Reflect Epistemic Limitations.

This is an interesting condition. On the one hand, philosophy is supposed to strive for truth, not applicability. On the other hand, if the theory depends on
a conditional claim for which we are in no position to know whether or when the antecedent condition is met, it is not likely to be satisfactory. Most, if not all, would agree that if robots were to become sentient or conscious, then they would deserve moral consideration. The problem with building a theory of the moral status of robots on such a claim is that we are not likely to be in a position to reliably know whether or when such an event comes about. It would be rather like proposing a theory of animal rights according to which animals will deserve moral consideration when we have irrefutable proof of their consciousness. Such a theory of animal rights would be of little use to us. The same thinking applies here. To have any use, the theory should reflect our epistemic limitations.

D3. The Theory Should Strive to be Both Enlightening and Action-Guiding.

Consider the difference between a theory of robot rights based on robots achieving consciousness and Danaher’s behaviourist theory. While the mental states theorist is on safe ground with a conditional truth about rights and the existence of certain mental states, Danaher highlights our epistemic limitation around knowledge of such mental states and builds an action-guiding theory around this limitation. To consider another example from the theories above, while Floridi’s information ethics is instructive, showing us that all entities (not just living entities) deserve moral consideration, it scores far less highly as an action-guiding theory because it does not tell us how to allocate our limited reserves of moral attention across those existing entities.

D4. The Theory Should Be Compatible with the Flourishing of Beneficial Technological Advances in AI.

The development of social robots has the potential to bring benefits of substantial moral worth. Social robots have the potential to enhance many areas of human life, from elder care to education. They can be a tool to combat loneliness and to provide reliable healthcare in communities where it is lacking. They can enhance our working practices, potentially providing a workforce for sectors where the work is unpleasant or difficult to manage. Of course, for every area of potential technological progress there is a corresponding ethical concern. Do we want robots looking after our elderly and teaching our children? Doesn’t it say something regrettable about society if we cannot provide a service through which humans advise other humans on healthcare? If robots take on more jobs, will there be enough left for humans? These are worthwhile questions, and it may well be the case that such concerns will block some areas of technological progress. But it would be
unfortunate if the theory of robots had limits to technological progress built into it. Consider Bryson’s theory above. Bryson acknowledges the incredible potential power of our emotional attachment to social robots. Seeing what the future might hold, and the potential ethical problems that such a future raises, Bryson advises that we limit our development of social robots; in designing robots we should emphasise the ‘tool’ aspect and play down the elements that elicit attachment. Such a theory is designed explicitly to block particular kinds of advances in AI technology. Ideally, our theory of social robots itself would be compatible with their flourishing, allowing for the fact that their use in some areas might be limited or constrained where it is seen to impinge on human values.

Chapter 4

The Fictional Dualism Model of Social Robots

In this chapter I outline a new theory of social robots, the fictional dualism model. In chapter 1, when introducing philosophical methods and the aims of philosophy, I said that philosophers do not generally expect that the view they put forward will be the final word on the matter; rather, they appreciate that their contribution is one part of an ongoing conversation that aims for truth. Before laying out the theory, I will say a little on why I think fictional dualism is an important part of the conversation about social robots, even if it is not likely to be the final word on the matter.

Fictional dualism offers a new way of seeing what robots are that encourages our emotional and empathetic attachment to them – seeing the emotional attachment as fundamental to the continued success of robots as social entities. It does this while significantly reducing our inclination to extend moral consideration to robots. If considered plausible, this would be a welcome contribution to the literature as it would avoid placing unnecessary restrictions on the kinds of things that robots can be used for while keeping on the right side of morality. Even if some readers are left unconvinced by the detail of the theory, it shines a critical light on the common moves in the literature from what robots can appear to be, to what they are, and what we owe them as a result.

THE PROS AND CONS OF SCIENCE FICTION FRAMING

Science fiction has been prompting us for decades to think about what kinds of things social robots are and what we might then owe them. Blade Runner’s ‘replicant’ bio-beings are portrayed as being useful because they can
perform better than humans, but they are hunted down and destroyed when they become a threat to society (Scott 1982). We are shown this and left with the feeling that surely it cannot be fair to treat a being in this way, especially after we realise that our hero was a replicant all along. In Ex Machina, we are shown how immoral and creepy it is to create an entity for a consciousness experiment without regard for how that might be experienced by the entity (Garland 2014). Again, as Alicia Vikander’s character strides off to experience independent ‘life’, we cannot help but root for her and feel that the morally suspect programmer and his gullible associate got their just deserts. Other robot characters seem designed purely to tease us, by invoking strong feelings of attachment while simultaneously reminding us that they are not human and are not capable of reciprocating. Data is perhaps the most likeable character in Star Trek: The Next Generation, despite telling us that he is unable to feel emotion or understand human emotional experiences. Likewise, Janet in The Good Place attracts trust and warmth from others despite regularly reminding them that she is ‘not a human’ and cannot feel emotion (Schur 2017).

Although the idea of robot rights has been around for a while through science fiction, it has been around in much the same way as the idea of alien rights, as a distant foreign concept to play with. Aliens may often be portrayed as evil entities to be destroyed before they steal our resources, but some science fiction makes us care for the alien and want to protect it. E.T. (Spielberg 1982) broke hearts when scientists used him for an experiment purely to advance human knowledge, and District 9 (Blomkamp 2009) shines a light on our humanity, or lack thereof, when we are confronted with a shipload of aliens arriving who need our help. But none of this helps us decide what we should do if an alien invasion actually occurred.

The question of moral consideration for robots is no longer a matter for science fiction; it is a serious real-world concern that we need to consider with urgency. This is not because sentient robots are on the horizon but because robots play an increasingly large part in our social and emotional lives, and we need a framework for understanding the significance of our current and future relationships with them. But we cannot move straightforwardly from our intuitions about the rights of robots that we see actors portray in fiction to judgements about robot rights in the non-fiction case; to do so would be like making a judgement about the ability of one’s car based on the performance of Lightning McQueen in the Pixar animation Cars (2006). When we are asked to consider whether a robot that is performatively equivalent to a human should be granted the same moral consideration as a human, it is natural that we draw upon how we felt when we watched a dramatisation of the exact scenario. But, as noted in chapter 1, when considering robots in particular we should note the powerful framing effect of
science fiction as thought experiment, a result of our conceptualisation of robots being so closely tied to experienced fictionalisations of them. Fictional depictions of scenarios involving robots can help us to imagine complex moral dilemmas, but as fiction is theory-laden, we must remain alert to hidden assumptions in the fiction as it is presented.

THE DOMINANT MODELS AND ROBOT RIGHTS

In chapter 3, I reviewed the dominant views of social robots and ways of framing our relationship with them. The surprisingly strong emotional reaction that we can have towards robots through our engagement with them, and considerations of how robots might develop in the future, provide a context in which questions of moral consideration are well-motivated. The models that we considered each prioritise different aspects of robot development, or of our engagement with robots, as being evidentially significant. As a reminder, or for those who are skipping to this chapter, we considered five different kinds of views.

According to the animal model, our emotional response to social robots is evidence that we view robots as being analogous to the animals that are part of our social environment. In particular, the empathy that we have for robots is an indicator that we are inclined to include them in our moral circle to the extent that we include other animals in it. Proponents of the relational model also prioritise our social engagement with robots, but they see the social relations themselves as being evidence that social robots are already in the moral sphere; relata in the social environment are worthy of moral consideration for that reason alone, without the need for humans to confer the right to moral consideration on them. According to information ethics, we need to move the limits of the moral sphere to be more inclusive, beyond the kinds of things that we respond to emotionally and beyond the kinds of things that are social beings, so that it includes all entities that exist. Existence is the only evidence we need for candidacy for moral consideration. The behaviourist prioritises the performative capabilities of robots. Motivated by the epistemic humility that is prompted by the problem of other minds, the behaviourist is not so concerned with how humans respond socially or emotionally to robots but prioritises performative equivalence.1 A robot that is performatively equivalent to some other entity to which we grant moral consideration should itself be granted the same level of moral consideration. Finally, and far more sceptical about the granting of moral consideration to robots, is the robots-as-tools model, which is best understood as prioritising
the history of robot development, particularly the fact that we (humans) created robots to serve our needs, and the claim that we should continue to develop them in a way that is compatible with retaining human control. In particular, we should be careful not to create robots that we are likely to form emotional bonds with, precisely because this could lead to mistaken beliefs about moral consideration.

The motivation behind the fictional dualist model is the desire to find a position that is consistent with the evidence that we have, in which we can acknowledge, respect, and welcome the emotional attachment that we have towards social robots and the social role that they can play, while also holding the view that robots are essentially tools. All of this is to be achieved in a way that is ethically sound and non-exploitative. I propose that the correct view of social robots requires us to have a significant perspective shift. We need to see that our emotional engagement with robots is rooted, not in analogies with animals or other living social beings, but in our propensity to fictionalise and to create strong emotional bonds with the resulting anthropomorphic, zoomorphic, or even alien, fictions.

THE FICTIONAL DUALISM MODEL

The fictional dualism model is a theory of the metaphysics of social robots, a theory of what social robots are, that provides a useful framework for understanding our relationship with them. Rather than thinking of social robots as analogous to animals in our environment or as tools to be interacted with in a detached way, I propose that we perceive them as technological objects with fictional overlays. This dualist framework allows us to agree that, on the one hand, the object – the Roomba, Paro, or the landmine robot – is a technological device, while also accommodating the fact that certain features of the robot – the way it moves, its cosmetic design, the way it communicates – encourage us not simply to anthropomorphise but to engage in character creation.

I am proposing that, in the kinds of interactions described in chapter 2, when we interact with a social robot we are interacting with an embodied fictional character. And that is a new experience for us. Most of us are familiar with what we can think of as passive character engagement: when we read a book or watch a movie, the character is laid out for us. The creator of the character is the author or screenwriter and, although there is some room for us to interpret elements of the character’s nature, the role for imagination is limited. The character is depicted for us by its actions and dialogue and how it engages with other characters. Through this we are familiar with a kind of emotional relationship with, or towards, fictional characters.

Social robot interaction takes us onto a scale of what we can describe as active character engagement. Active character engagement at its most basic is where we create a character for a non-animated object. Children engage in this form of active character engagement regularly through game play. They construct a fictional character and a fictional series of cognitive activities for a favourite teddy or toy, and, in their minds, they give the toy a psychological life. The teddy becomes a confidant and playmate. Unlike the child’s human playmates, whose psychological life interacts with their physical bodies in the usual ways, the character that the child has created for the toy is entirely distinct from the object. We can think of that character as an overlay that is projected onto the object by the child. The projected character comes entirely from the child’s imagination. The nature of the character that the child creates may be guided in some ways by the appearance of the toy but, other than that, the toy contributes nothing to the development of its own character. In our interactions with social robots, we are in an unfamiliar hybrid situation. The base for our engagement is depicted for us, sometimes intentionally, other times not, by the creators; in the Roomba and landmine robot it is there in the object’s autonomous movements, in Paro it is in a more sophisticated combination of movement, appearance, sense, and sounds. But the character itself, imaginations of Roomba’s aims, developments of Paro’s nature as a being, we build in our minds much like a child does with the teddy. Through our engagement with the social robot, we create for it both a fictional character and a fictional mental life which become part of the robot in our thinking. If we are inclined to talk to Roomba it is because, for us, it has a fictional overlay that would welcome our conversation. If we feel pity for Roomba when it gets stuck under the sofa, it is because it has a fictional overlay that has needs and desires that are being frustrated when it is stuck. Our emotional response to the ‘harming’ of the robot is in large part a response to the harming of the fictional character. According to the fictional overlay, the landmine robot feels pain and fear when a leg is blown off. The perceived trials and tribulations of the fictional character, engaged with as an overlay to the object, trigger an emotional response very much like the emotional response we would have when engaging with a book or movie in which a depicted character feels pain or distress. A social robot that displays pain behaviour, fear behaviour, or aggressive behaviour will elicit an emotional response from us partly because it has gained a character with a psychological life in our mind. As we engage with the social robot, we can easily stop seeing it for what it is; an object displaying behaviour that encourages a certain fiction. Instead, we tend to react like the child does with its toy, seeing the fictional character as an inherent part of the object. Once the object and the character are intertwined in our imagination, it requires some effort to separate them.


WHY DUALISM?

What motivates a dualist view of social robots? Dualism arises in cases where it is argued that we need two kinds of ‘thing’, perhaps substance or predicate or property, to fully explain the world or some part of it. Dualism need not bring a heavy metaphysical commitment, although the most famous example of dualism, Cartesian substance dualism, does. Cartesian substance dualism states that the mind and the body are distinct kinds of substances that exist independently of each other (Descartes 1911). The criticisms of Cartesian substance dualism are well documented; in particular, there is the significant problem of how distinct mental and physical substances can interact with each other.

There are other less metaphysically serious kinds of dualism. The property dualist proposes that the ontology of physics cannot capture all the things that we believe are in the world, that we need mental properties in order to give a complete picture of the world. Importantly for the property dualist, some mental properties cannot be reduced to physical properties, hence the need for two kinds of properties. Even more metaphysically lightweight, the predicate dualist proposes that we do not need to posit different kinds of non-physical properties to complete our picture of the world, but we do need different kinds of predicates in our language. That is, for the predicate dualist, if we want to be able to give a full description of the world, we need psychological predicates, and these terms are not reducible to physical predicates. For the predicate dualist, if we limit ourselves to only predicates that describe the physical structure, we are losing something essential in our description of the world.2

Fictional dualism is best understood as a light form of substance dualism along the lines of Lowe’s non-Cartesian substance dualism (Lowe 2006). Lowe’s ambition was to give an account of our strong intuition that the mental is not reducible to the physical, but to do so in a way that did not come up against Descartes’ problem of how these distinct substances interact. Lowe proposed that persons (defined by Lowe as self-conscious subjects of experience) are distinct from their physical bodies. However, unlike in the case of Cartesian dualism, persons are not capable of existing independently of their bodies. According to non-Cartesian substance dualism, a person shares the physical characteristics of their body; a person is essentially ‘embodied’. As a theory of the metaphysics of persons, Lowe’s view is problematic as, despite the embodying of the person, we are still left with no explanation of how the psychological interacts with the physical. For example, if the person is a distinct substance and persons have desires, how does the desiring cause any change in the material? Embodiment with physical supervenience certainly brings the two substances closer together, but it still does not explain how these different kinds can interact.

But Lowe’s dualism serves the fictional dualist about social robots well. The fictional dualist is looking for a model that allows the robot to be both the physical entity and the fictional overlay, and for the fictional overlay to supervene on the physical. The fictional dualist does not have to explain how the fiction and the physical robot interact, because the fictional character and the physical robot do not interact at all.

Consider this example. Catherine has a Paro that she gets great pleasure from interacting with. She did not respond well to the product to begin with but over time she began to view it as a companion and friend and now she looks forward to seeing it in the morning. She holds it, pets it, and talks to it for much of the day. Catherine calls her Paro ‘Eva’, so I will use the name ‘Paro’ to refer to the token baby seal product and ‘Eva’ to refer to the entity that Catherine has partly created through her interactions with the robot. Eva and Paro are very closely connected, sharing many of their properties but, as we will see, they are not identical. They are both white and fluffy and both have two eyes. Eva’s physical properties supervene on Paro’s physical properties. That is, there can be no physical change in Eva’s properties without an identical physical change in Paro’s properties. If Eva’s left flipper becomes immobile, so does Paro’s. If Eva loses an eye, so does Paro. But there are things that are true of Eva that are not true of Paro. Eva loves Catherine and likes to interact with her. Eva is pleased to see Catherine first thing in the morning. Eva likes to hear about Catherine’s day. Eva enjoys being stroked, touched, and cuddled by Catherine. Eva does not like it when Catherine’s grandchildren are too rough with her or when Catherine has to put her in the cupboard when they visit so that they won’t damage her. These statements are truths about Eva, but they are not truths about Paro. Paro is a therapeutic robotic seal that cannot love, and it does not like or dislike anything. Paro is just as happy in the cupboard as it is being cuddled by Catherine; that is, not happy at all, because Paro does not feel emotions such as happiness. Paro does not have any feelings.

Fictional dualism allows us to usefully differentiate between Paro and Eva. Furthermore, it allows us to do so in a way that blocks potential mistakes arising from the kinds of category errors that concern Bryson. We can let our social engagement with robots, and the benefits that follow, flourish without being led into thinking that those social relationships or that emotional response is an indicator that the robot deserves moral consideration. Fictional dualism also gives us a model for taking seriously the significant emotional attachment that humans can have towards social robots; the emotional attachment that we feel is not to be dismissed as overly sentimental or unjustified. The model provides a plausible anchor for those emotions. At the same time, it stops our emotional and social response from entirely encompassing our understanding of what social robots are.
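For readers who find a schematic helpful, here is a minimal sketch of the Eva/Paro relationship in Python. It is purely illustrative and mine rather than the author’s: the class and attribute names are hypothetical, and the code simply encodes the two claims made above, namely that the overlay’s physical properties supervene on the device’s, while its psychological attributes are true only within the projected fiction.

    # Illustrative sketch only; names and attributes are hypothetical.
    class ParoDevice:
        """The physical, technological object: the token baby seal product."""
        def __init__(self):
            self.colour = "white"
            self.left_flipper_mobile = True

    class FictionalOverlay:
        """Eva: the character Catherine projects onto the device."""
        def __init__(self, device, traits):
            self._device = device   # the overlay is anchored to the physical object
            self.traits = traits    # psychological attributes, true only in the fiction

        # Physical properties are read directly from the device, so there can be
        # no physical change in Eva without an identical change in Paro (supervenience).
        @property
        def left_flipper_mobile(self):
            return self._device.left_flipper_mobile

    paro = ParoDevice()
    eva = FictionalOverlay(paro, traits={"loves Catherine", "enjoys being stroked"})

    paro.left_flipper_mobile = False          # damage to the device...
    assert eva.left_flipper_mobile is False   # ...is automatically damage to Eva
    # Nothing in ParoDevice makes "loves Catherine" true; that claim is made true
    # only by the fiction that Catherine has created.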


According to the fictional dualism model, the anthropomorphism of social robots is to be understood, not as our classifying the social robot as animal-like, but as our creating a fictional character in response to the presentation of the physical robot. An understanding of this metaphysical framework moves us away from the temptation to equate our emotional response and its social significance with that of our relationship with animals or other humans, and instead encourages us to consider the social significance of our emotional response to fiction.

A FICTION IS NOT A PRETENCE

Johanna Seibt (2017) considers our engagement with social robots and asks whether we need a new concept for the item that we are interacting with or a new relation between the relata. According to Seibt, we treat social robots as if they were a friend or a pet, and Seibt sees this as evidence of ‘make-believe’. She argues persuasively against the idea that we could use fictionalisations of predicates to describe human–robot interactions, claiming that ‘for conceptual reasons we cannot adopt the – temptingly easy – strategy of treating human-robot interactions as fictionalised analogues to human-human interactions’ (Seibt 2017, 14). Seibt concludes that we must either treat human–robot interactions as non-social, or we must develop new conceptual tools for these kinds of asymmetric social interactions, where one agent does not have the normative capacities required. Seibt suggests that we need a category of what she calls ‘fictional social relations’ (Seibt 2017, 15).

Seibt’s position and fictional dualism are clearly motivated by similar observations; there is something about our engagement with social robots that draws on fiction or, as Seibt sees it, there is an element of make-believe. I agree with Seibt that what we are considering here are not fictionalised interactions. But, as I argue below, it is not the case that what we are seeing in human–robot interaction is evidence of make-believe in the sense that the human is engaging in a pretence, nor can I see any need for a new category of fictional social relations. Rather, the proposal of the fictional dualist is that the fiction generated by the human is built into the ontology of the object; it is part of what the object is, not part of our relations. The interactions are not fictionalised, and neither is the human engaging in a pretence.

A view according to which we just pretend that the social robot has a character and a mental life would be problematic, not least because it wouldn’t explain many of the strong emotional responses and behaviours that we considered in chapter 2. If the soldiers had been only pretending that their landmine robot was a companion with a character, then they would not have
been upset when the robot was destroyed. There are similarities between pretending and fiction, but there are also important differences. In her study of pretending, Lillard (1993) notes that pretending involves five features: a pretender, a reality, a mental representation that is different from what the pretender takes reality to be, a layering of the representation over the reality such that they exist within the same space and time, and an awareness on the part of the pretender of the three latter components. For example, in pretending that a stick is a horse, the pretence must differ from reality (i.e., the stick is not a horse); the pretender represents the stick as a horse; the pretender represents the horse in exactly the space the stick is (i.e., the pretend is ‘stretched over the frame of the real’ (Lillard 1993, 349)); the pretender knows that the item is a stick, knows what a horse is and knows that she is pretending the stick is a horse. During pretend play children seem to be able to keep both the pretend identity of the object and the real identity of the object in mind. In doing so they are ‘representing one object as being two different things at once’ (Lillard 1993, 348). A clear example of this occurs when children pretend a non-food item is food. The child gives a convincing display that she has identified the object as food but does not go so far as to actually eat it. For example, a child who is pretending a pile of sand is fantastic chocolate cake might call it cake, mimic eating it, say, ‘Yum-yum, what delicious cake!’ and perhaps even mention the chocolate she got on her hands. But she does not actually eat the sand. She is clearly aware of its real identity all the while that she treats it as if it were something else. In this sense, one object is being simultaneously considered as having two different identities (Lillard 1993, 351–2).

Earlier I drew an analogy between a case in which a child creates an imaginary friend that is embodied in her teddy, and our engagement with social robots. In this case, isn’t the child pretending? It might seem so, but they are not. In the case where a teddy becomes an imaginary friend, a fiction is created, and creating a fiction is importantly different from pretending. In particular, the child who creates an imaginary friend does not believe they are pretending. From the perspective of the child, the imaginary friend ‘acts’ independently from them, sometimes in ways that are even at odds with how they would want a friend to behave. Benson and Pryor (1973) give an example that demonstrates the scale of that independence in a child’s mind. They describe four-year-old Lynn, whose imaginary friend Nosey had been with her for about a year. Whenever Lynn went on an outing, Nosey ‘went’ too and the family accommodated this by packing extra clothes and food for Nosey. When visiting her grandparents one time, perhaps to see how Lynn would react, Lynn’s
grandfather suggested that Lynn ask Nosey to close the garage door for him. Lynn did exactly as he asked. First of all, the fact that Lynn believed that Nosey could close the garage door shows that Lynn was not pretending that Nosey was real. Second, after Lynn asked Nosey to close the door, Lynn’s grandad secretly activated a remote-control mechanism which closed the door behind them. When asked about this incident when she was older, Lynn’s recollection, as recorded by the authors, was that ‘she asked Nosey to close the door, and “he did” . . . She said she left him on the garage steps sweeping. She added, “I was sure that Nosey could open and close the garage door”’ (Benson and Pryor 1973, 459).

Of course, at some point children do realise that their imaginary friends are fictional creations, but it also seems true to say that at no point is the child pretending. Likewise in the case of fictional dualism. The emotional response that we have towards social robots is not the result of a pretence. Recall the experiment that Darling describes in which the participants are asked to destroy Pleos, and they are extremely reluctant to do so. If the participants were only pretending that the Pleos have characters, and that pretence made them uncomfortable, they would simply stop the pretence. But they cannot do that because pretending is not involved.

THE METAPHYSICS OF FICTION

How can the fictional overlay of the social robot be responsible for our emotional response when fictions are not real? That is, how can positing something that does not exist help to explain such a significant emotional response? Are fictions not real? It is true that there are some anti-realist views of fiction, according to which fictional characters do not exist.3 In one sense this is intuitive; it seems to be true to say that Sherlock Holmes is not real. But on the other hand, we do think that the name ‘Sherlock Holmes’ refers to something, and we do seem to be able to say true things about Sherlock, such as that he lives at 221B Baker Street. And the fear that I feel when I read The Hound of the Baskervilles, that surely is real.

After putting forward what he called ‘the paradox of fiction emotion’, Colin Radford (1975) concludes that our emotional response to events that we know are not taking place outwith the context of a fiction is ‘irrational, incoherent, and inconsistent’. If true, this would be a blow to the fictional dualism model, which depends on the rationality and coherence of such emotional responses. Radford proposes that for us to be moved by persons or events, we should need to have a belief that those things exist. However, according to Radford, we do not believe that fictional characters and the situations they
find themselves in exist. Still, it seems implausible to deny that we are moved by the fate of fictional characters. Reading Anna Karenina, we feel pity and sorrow for Anna as Vronsky starts to distance himself from her. How can the reader feel pity for Anna, and her plight, when the reader also knows that Anna is a fictional character and Anna’s plight does not exist?

One solution, proposed by Kendall Walton, is that we do not really feel pity for Anna Karenina at all, but we experience the bodily effects of imagining that Anna Karenina has suffered (Walton 1978, 1993). We feel quasi-pity. According to Walton, only ‘real’ pity is accompanied by the appropriate behaviour. However, many have pointed out potential problems with this. One is that the theory does not seem to capture what is going on when we are not invested in the movie that is on and yet cannot help being dragged into an emotional response; that is, the element of ‘make-believe’ does not appear to exist in many if not most cases. Another is that it is just not plausible that the pity we feel is somehow of a different category from the pity that we feel in real-life events (Lamarque 1981).

More plausibly, many theorists have argued that there is no reason to suppose that our emotional responses require that the characters and events exist in the non-fiction world. Along these lines, Richard T. Allen has the following suggestion:

What I . . . propose is that ‘We feel for Anna’ needs no analysis at all: it says exactly what we mean, no more and no less. The emotional response to fiction is not a ‘problem’ but a daily fact. We feel (not wish nor anything else) and we feel for Anna (not a real person nor Anna taken wrongly to be a real person).

Allen continues:

A novel is not a presentation of facts. But true statements can be made about what happens in it and beliefs directed towards those events can be true or false. Once we realise that truth is not confined to the factual, the problem disappears. (Allen 1986, 66)

Theorising this intuitive response further, the relatively widely held creationist view of fiction gives a bit more substance to the theory that fictions exist.4 According to creationism, fictions come into existence when they are conceived by their authors. The fictional object depends for its existence on the being that creates it, and the fictional object only has the properties that it has according to the fiction. As such, outwith a context in which it is clear that we are talking about the work of fiction, it is not true to say that ‘Sherlock Holmes is a detective’, because Sherlock Holmes does not refer to a person outwith the fiction. However, we can say truly, and without concerns of empty
reference, that Sherlock Holmes is a fictional character. John Searle, an advocate of the creationist view, says:

Suppose I say: ‘There never existed a Mrs. Sherlock Holmes because Holmes never got married, but there did exist a Mrs. Watson because Watson did get married, though Mrs. Watson died not long after their marriage.’ Is what I have said true or false, or lacking truth value, or what? In order to answer we need to distinguish not only between serious discourse and fictional discourse, as I have been doing, but also to distinguish both of these from serious discourse about fiction. Taken as a piece of serious discourse, the above passage is certainly not true because none of these people (Watson, Holmes, Mrs. Watson) ever existed. But taken as a piece of discourse about fiction the above statement is true because it accurately reports the marital histories of the two fictional characters Holmes and Watson. (Searle 2010, 61)

Fictions do exist; they exist as creations of their authors. The truths about Sherlock and Watson are no lesser truths than those about the global banking crisis of 2008. The difference is that the statements about the banking crisis are made true by a fact in the non-fictional world and the statements about Sherlock are made true by a fact in the fiction created by Arthur Conan Doyle. Note that this is not a fictional fact, or a ‘fake’ fact, but a fact in a fiction. Non-philosophers might be surprised to discover the vast number of philosophical papers and books that have been dedicated to the question of the metaphysical nature of fictional characters, fictional worlds, and the meaning and truth value of statements about fiction. At the end of his paper outlining his semantics of fiction, John Searle asks the question of himself, ‘why bother?’ In response he says, ‘Part of the answer would have to do with the crucial role, usually underestimated, that imagination plays in human life, and the equally crucial role that shared products of the imagination play in human social life’ (Searle 2010, 75). Returning to the case of social robots, the fictional dualism model is also an attempt to prioritise and value the role of imagination in human social life. It allows us to encourage the joy that engagement with social robots can bring: the comfort that a relationship with Paro can bring to a person with dementia, the small pleasure that came from conversing with one’s Roomba during the isolation of the COVID-19 lockdowns, the feelings of trust and companionship that one can have towards a robotic colleague. Within the fictional dualist framework, we do not have to think of humans as being tricked into having these feelings by manipulative designers. We, as originators of the fiction, are partners in the project of social robot creation. We can see the creation of fictional characters through our engagement with social robots as a beautiful example of the role that imagination and its products play in human life.


REAL-WORLD RELEVANCE OF RESPONSES TO FICTION

Fictions do exist, and the emotional response that we have towards them is both real and rational. But what is the real-world impact of our emotional response to fiction? There are (at least) three ways to understand this question. In chapter 1 of this book, I considered some of the ways our engagement with fiction can make a difference to our experience of the world, beyond the immediate pleasure derived from the fiction itself. Through engaging with fiction, we can, among other things, gain great moral insight, learn about parts of the world that we have not experienced (because not everything in fiction is fictional), and gain a deeper understanding of our non-fictional relationships. Another way of understanding the question is prompted by observing that we do not respond to fictional events in the way that we would respond to a non-fictional version of the event. Returning to Walton again:

Charles is watching a horror movie about a terrible green slime. He cringes in his seat as the slime oozes slowly but relentlessly over the earth destroying everything in its path. Soon a greasy head emerges from the undulating mass, and two beady eyes roll around, finally fixing on the camera. The slime, picking up speed, oozes on a new course straight towards the viewers. Charles emits a shriek and clutches desperately at his chair. Afterwards, still shaken, Charles confesses that he was ‘terrified’ of the slime (Walton 1978, 5). [. . .] And yet, despite claiming to be terrified, Charles doesn’t even have an inclination to leave the theatre or call the police (Walton 1978, 8).

Similar reserved behaviour is experienced in the aftermath of the devastation that we feel when we experience a tremendously sad moment in a movie or book. Tears will fall, streaming down the faces of the members of a cinema audience, but this very real emotion does not have the real-world impact of a similar reaction to an event in our non-fictional world. That is, when the movie is over, we do not take the sadness with us. The most likely explanation is that when the movie ends, we are aware that, for example, Paddington is not a real bear who is trapped inside a railway carriage under the water but is in fact the fictional creation of Michael Bond, and our lives can continue without our having had the experience of knowing that a non-fictional loved one was in perilous danger (King 2017). That is, although we do respond with real emotion towards events containing fictional characters, the strength of emotion that we direct towards those characters has less significance than it would have if the emotion was directed towards non-fictional characters or events.


To summarise this point, fiction is an area where we have experience of bringing rationality to bear on our emotions. Authors, directors, musicians, and poets can draw characters and scenarios that evoke high levels of empathy and emotion. We can feel distraught when a character we are invested in dies or is hurt. We feel fearful for them when they seem to be in danger. The feelings can be incredibly strong. And they are widespread – a cinema full of people can be in a collective state of devastation after watching a particularly emotive scene. However, although our emotional response to fiction is genuine, the way that we behave after the fictional experience differs from the way we would behave after a real-life equivalent. We may leave the cinema with tears on our faces but soon after we can be laughing the experience off, perhaps even stunned by how strong our emotional reaction was. It is true that some characters linger with us, and we seem to take them into our lives almost as we would a friend. But we would stop short of making any life decisions based on our empathy for a fictional character.

The fictional dualism model explains the emotional response that we can have towards robots by identifying as its source the fictional overlay that individuals create. At the same time, the question of what is going on ‘inside’ the robot continues to play an important role, as it should. It is the answer to the question of what is going on inside the robot that allows us to reap the full social benefits of our engagement, without the need to confer moral agency or rights.

WHAT DOES GO ON ‘ON THE INSIDE’?

Here is an obvious criticism of fictional dualism. According to fictional dualism a social robot is the physical body of the robot plus a fictional overlay. It is the fictional overlay, the creation of which is helped along by some of the physical features of the robot, that attracts human engagement and emotional attachment. But isn’t this to entirely side-step the important question: at what stage in their technological development will robots ‘tip over’ into being the kinds of entities that deserve moral consideration? By shifting the focus to our engagement with the fiction, the fictional dualist has just avoided that question. To put this another way, we could accept fictional dualism while robots are in these earlier stages of their development; meanwhile, technology progresses and what the fictional dualist calls the ‘physical’ part of the dualism, which includes its technological capacity, could develop the capacity for the kinds of properties (e.g., consciousness, sentience) that would see the question of moral consideration for robots arise again. By way of response consider that, first, it is notable that achieving certain properties is not central to any of the theories considered in chapter 3.
Those who do consider the properties question tend to give one of two reasons for placing it to one side; either they point out that granting moral consideration is not usually dependent on detecting certain mental properties or they note that we do not know enough about either brains or consciousness to detect consciousness as a property. What I have argued here is that there is no route to detecting those properties through either our emotional attachment to robots or our social engagement with them, as those factors are both explained by responses to the fiction. The fact that robots can be designed to behave in ways that we might normally associate with sentience, when we are aware that they are not sentient, as is currently the case, entirely undermines their behaviour as evidence of sentience. A related criticism, and one that the behaviourist would push, is this: what if our technology becomes so advanced that robots behave exactly like a human in every way? Wouldn’t fictional dualism be redundant then? The answer, even under these conditions, is still no. The fiction retains a role even when the robot is incredibly advanced. This is because the fictional dualist would have no reason to deviate from the line that the robot’s performance is not evidence of its having mental states. Once again, claims that one’s personal assistant is happy to see them and loves to hear about what they have planned for the day are made true by the projected fiction, not by the technological advances of the robot. Isn’t this just to stipulate that robots will never develop properties such as consciousness or sentience? Again, the answer is no. The fictional dualist does not have to be committed to denying the possibility of robot consciousness. The model explains our social engagement and emotional attachment to robots in a way that assumes that the robot’s social behaviour does not derive from the kinds of mental states that we standardly associate with such behaviour in humans and other animals. This is not to rule out such a development as impossible, but to hold a position in which we would need a positive reason to see the behaviour of a robot as evidence of its having mental states. The behaviour alone, for all the reasons considered in the previous chapter, is not enough.

THE ROBOT FICTION AND TRADITIONAL FICTIONS

It is true that the robot fiction is different from other fictions we have come across. Here I consider two ways in which engaging with the fiction of the robot might differ from our engagement with more traditional forms of fiction. The most obvious way in which the two kinds of fictions differ is that in many of the forms of social robots we are considering, the robots
are embodied, and this is not the case with the fictional characters we come across in books or on-screen. Engaging with an embodied fiction has the effect of making our engagement feel closer in kind to our engagement with living entities such as humans or animals. It is also possible that the physical embodiment of the robot might make it harder for us to see our engagement with the robot for what it is, a reaction to a fiction, something that we do not seem to have any problem with when we go to the cinema or read a book. With the social robot, but not with traditional fictional characters, we tend to see the fictional character as belonging to the object, very much like the child does when it imagines that its toy has a mental life. The second way in which engaging with the fiction of the robot differs from other forms of fiction is that our engagement with traditional fiction is usually passive. We watch the movie or read the book and, other than perhaps filling out some background details for the characters, we do not bring anything to bear on the character creation or on how the plot develops. In this way we might describe the fictions that we experience in books or movies as fictions with ‘closed characters’. In contrast, according to fictional dualism, when we engage with social robots we are engaging with an object with an ‘open character’; there are some features of character in the presentation of the robot, but the rest is provided by us. Some of the social robots that we consider have a strongly open character. The design of the Roomba, for example, brings very little if anything to its character formation, other than perhaps background stereotypical baggage associated with being a maid or helper and perhaps suggestions of ‘busy-ness’ associated with its abrupt movements. Robots that are specifically designed to be social will generally have less openness of character because they are designed with features that actively encourage human engagement. They may have big childlike eyes to make them appear like they need your care, they may be designed with the features of a particular animal, or they may have an appearance we associate with intelligence. In addition, they may be designed with a number of identifiable character traits in the kinds of dialogue they use: cute, funny, grumpy, etc. It is reasonable to suppose that these design features will constrain the imagination when it comes to the full fictional character we create. For example, it is unlikely that a robot designed to look like a human would be fictionalised as a pet dog. Another potential disanalogy with traditional fiction is what we can call the ‘flexibility’ of some social robots; they change their presentation in response to interactions. The Roomba, we can note, is not flexible; it does not change its presentation in response to interactions with you. It is designed to play a particular role and human engagement with it will not change the form of future social engagement. Even though you might call it
Alfredo and talk encouragingly to it when it hoovers your rug, your interactions with it do not have any impact on how it behaves socially. But other social robots differ from the Roomba along this scale of flexibility, having flexible interaction capabilities. They learn through their interactions with individuals and change their dialogue in response. Individualised changes in dialogue inevitably change the appearance of the character of the robot. For example, a social robot that has been trained through interactions to respond to its human with a friendly ‘good morning, sir’ is likely to prompt the creation of a more formal fictional character whereas the same brand of robot that has learned to respond to its human with a ‘what’s up, buddy?’ may prompt the creation of a more informal chatty character. It might be argued that levels of openness and flexibility are only contingent features of robot design, and that it will one day be possible to design a robot with a saturated persona that leaves no room for character creation. By way of response, however, there are two things to note. First, the character creation does not itself need to be ‘deep’ for the fictional dualist model to be useful. Recall from the previous section that the dualism model allows us to create an aspect of the robot that feels, hopes, desires, and wishes. Regardless of the advanced design of the robot, the fictionalisation plays an essential role in providing an anchor for the claim that, for example, Eva hates being put in the dark cupboard. Second, while it may well be possible to create robots that leave no room for character creation, there is some evidence to show that it would not be desirable from a product marketing perspective. Although we are not familiar with audience character development through our engagement with traditional fiction, except for some unsuccessful industry forays into ‘choose your own ending’ stories, there is a precedent for this in gaming. For example, when playing a game such as Red Dead Redemption II, the main character Arthur will develop partly in relation to the choices that the player makes when playing him in the game (Rockstar Games 2018). In this sense, the player is partially responsible for the character of Arthur when playing. If they encourage Arthur to behave in a morally good way – or in as morally good a way as an outlaw can behave – this sets Arthur’s moral standing and changes the scenarios that he finds himself in. Furthermore, there is much evidence to show that such character evolution through interaction with the player changes the way that humans feel about the characters, which heightens levels of engagement with the game (Banks and Bowman 2014). Researchers have shown that players become more emotionally attached to characters, and experience more enjoyment from the game, when their choices and actions can be interpreted as having an impact on the moral standing of the character (Wolfendale 2007). It is this active engagement with the fictional character that leads to players feeling that they have something close to a relationship to the character in the game.

HOW DOES FICTIONAL DUALISM COMPARE WITH OTHER THEORIES?

In assessing the promise of fictional dualism, it is useful to be reminded of how it differs from the existing models of social robots that we outlined in detail in chapter 3. First, we can note that the animal model does not make any claim about what social robots are; rather, the model provides a way in which we might frame our engagement with social robots. The fictional dualist can accept and build on the basic claims of the animal model, to the extent that the fictional dualist can agree that our fictionalisation of robots might often be modelled on relationships that we have had with animals. What social robots are is partly determined by how we fictionalise them through social engagement and it is very likely that this fictional element will draw closely on experiences that we have in our everyday lives. As the proponents of the animal model note, we engage socially with animals in many aspects of our life. As such, when engaging with a robot that is reminiscent of a dog it is not surprising that our experience with real dogs would be relevant to the creation of the fiction. When engaging with a humanoid robot it might be that our engagement with humans provides a better model for the fiction. In both these cases, we might instead be influenced by fiction itself; that is, the fiction we create might rely heavily on robots that we have come across in movies, stories, or other forms of representational art. Once we have made this move, we can accommodate all of the features that motivated Darling’s theory. However, we can do so without acquiring the same moral baggage. That is because the fictional character that prompts our emotional response does not prompt moral consideration. Nyholm’s view, a variety of the animal model, requires a different response. According to Nyholm, our treatment of social robots should be guided by the Kantian principle that one should ‘So act that you treat humanity, whether in your own person or that of another, always as an end in itself, and never merely as a means’. So, according to Nyholm, it is the fact that the object has the appearance of humanity that obliges us to treat it in a way that approximates our treatment of humans. The fictional dualist would have good reason to stop short of accepting this moral claim. First of all, according to fictional dualism, our response to a humanoid robot can draw on our experiences of more than just those entities in which we detect humanity. As noted above, individuals may draw on their experiences with humans or animals, they may draw on experiences with far more basic computers, with other kinds of AI systems such as chatbots, and so forth.
Furthermore, while we might choose to respond to perceived ‘humanity’ in the object and treat the robot somewhat like a human, given the explanatory power of the fictional dualism model, we would require a further argument to show that we are morally obligated to do so. As outlined in chapter 3, Gunkel and Coeckelbergh propose a relational model of how we might think of robots, particularly when considering what rights we might afford them. They suggest that we focus, not on the properties of robots, but instead on our relationships with them. They propose that whether an entity has rights depends on the conditions under which it becomes a moral subject, and that those conditions are a relational matter. Recall that, according to Coeckelbergh, ‘there is no direct, unmediated access to the robot as an objective, observer-independent reality or ‘thing-in-itself’’ (Coeckelbergh 2010a). So far, fictional dualism is compatible with the relational account: the fictional dualist could agree that our relationship with robots is of high importance and also agree that, due to the subject-dependent nature of the fiction, there is no observer-independent access to the robot as a thing-in-itself. Where the fictional dualist and the relationist would part ways is in the approach that the two models take towards robots as agents in the political sphere. Gunkel and Coeckelbergh want to position humans and robots as potential agents in the political domain and reject a view that sees humans as having the power to determine whether robots should have rights. This position has much going for it. However, once we think of robots in line with the fictional dualism model, the question of rights and power simply does not arise, or at least it does not arise from considerations of how we engage with robots in the social domain. We do not, according to fictional dualism, see robots as ‘other’ beyond the extent that we engage with the fiction as another subject. Recall that while considering the moral status of robots John Danaher proposes methodological behaviourism, a view according to which, due to our epistemic limitations, what goes on ‘on the inside’ becomes methodologically irrelevant to the question of moral status; what matters is performative equivalence. Although Danaher’s epistemic humility is well-motivated and while it is true that we cannot know for certain if or when technology will advance so much that a robot can develop the ability to feel pain, we still might think that we have a duty to pay attention to what is ‘inside’ and how it appears to connect to robot performance, taking this to be important evidence. The fictional dualist proposes that what is on the inside and our interpretation of behaviour are both relevant to the question of how we think of robots’ abilities and moral status. When thinking of animal consciousness we,
rightly or wrongly, place animals on a scale from less to more evolutionarily advanced species with perhaps invertebrates somewhere near the bottom and humans somewhere around the top. Brain function is part of our evidence in conducting research in this area so in that sense, what is ‘inside’ does matter when we are considering whether an entity might be conscious and what level of consciousness they might have. Likewise, when thinking of robots’ ‘brains’ and their capabilities, we are – not unreasonably – inclined to take as relevant the scale of the evolution of machine minds. It is relevant that we do know quite a lot about how computers work and, although advances in machine learning do take us into a new area where much remains unknown, it is not unreasonable to believe that simple computers and advanced robots are still on the same scale as each other. And we have no inclination to ascribe consciousness to most things on that scale, which will include things such as smart fridges and Apple watches. There is one way of understanding ‘performative equivalence’ in which the fictional dualist can agree with the claim that when a robot is performatively equivalent to another entity they should be granted the same moral status. But the fictional dualist will have a different take on what it means to say that two entities are performatively equivalent. One way of understanding performative equivalence applies to relatively simple tasks such as button sorting, pattern mimicking, and engine assembly, where we can more straightforwardly agree that a robot and a human perform the task in a performatively equivalent way. But these are not really the kinds of performances that are in question when we are considering moral agency. When considering moral agency, we are concerned with equivalence in practices that we describe with psychological language such as thinking, understanding, and feeling. The question of whether we can use these terms to describe an action performed by a robot as performatively equivalent is precisely the deep philosophical question that is at the heart of this area of study. As discussed in chapter 3, ‘doing the same things as’ is theory-laden. Even considering less obviously psychological terms such as ‘playing’ and ‘hunting’ can give pause for thought. Does the robotic predator ‘hunt’ when it performs the task of exterminating mice, or is hunting mice an action that we should reserve for the cat? For the fictional dualist, whether we are content to count the human and the robot as performing equivalently depends on our willingness to see robots as full participants in our practices in human ways of life: again, the question at the heart of our study. As you might recall, Bryson argues that we should see robots as tools and not develop humanised robots that can be viewed as companions because we are likely to be led into category mistakes which will negatively impact on our decision-making. We are in charge of how robots are introduced into society and in what form, and purposefully creating something that evokes
feelings of companionship could easily lead to our seeing the robot as our companion, which will itself likely lead to questions of moral standing. Bryson’s solution is to think of our relationship with robots as more akin to our relationship with advanced tools than with peers. Fictional dualism differs from Bryson’s approach in two significant ways. First of all, the fictional dualist sees robot companionship as something to be embraced. Furthermore, while the fictional dualist is also concerned to avoid the kinds of poor decision-making and misjudgements that Bryson is fearful of, they think that the fictional dualist model of social robots allows us to reap the benefits of our social engagement with robots while avoiding the kinds of category mistake that motivate Bryson’s warning. Floridi’s information ethics gives us a model on which to understand questions of moral status that are not dependent on considerations of free will, emotions, or mental states. According to Floridi, we are to develop a system of information ethics that purposefully moves away from the anthropocentrism of morality, a system where (literally) all entities are to be respected for their own sake. Fictional dualism is a theory that is responding to a particular question about human society. The fictional dualist is asking what moral standing or moral consideration robots should be given within human society. Within Floridi’s framework, we could consider that project as determining a Level of Abstraction. To put the fictional dualist position in Floridi’s terms, accepting that all entities are to be respected for their own sake, we might say that robots would be given the same moral consideration as other apparently non-sentient objects such as buildings, tables, and chairs and the same moral consideration as fictional characters, but not the same moral consideration as living things. At the end of chapter 3, I set out some desiderata drawn from both the positive and critical engagement with each of these theories. How does fictional dualism fare with respect to these desiderata?

D1: The Theory Should Be about Robots and Not Humans.

At least it should depend on what robots are, not simply on how we respond to robots. Is D1 met? Fictional dualism is certainly a theory according to which humans are in control of what robots are. Part of the motivation for proposing a theory of social robots at this relatively early stage in their development is the belief that what robots are will be partly determined by the significance that humans give them. In that sense, the theory is certainly human-centred. On the other hand, the theory does depend on what robots are, using what they are – physical entities with fictional overlays – to pull back our tendencies to run to moral consideration. As such the theory meets D1.

D2. The Theory Should Reflect Epistemic Limitations.

D2 is met. Fictional dualism accepts that robots could develop consciousness or sentience without our detecting it. However, it meets this epistemic limitation by considering the evidence that we do have. We have evidence that humans respond to robots as social entities capable of feeling sensations and emotions. But we also have evidence that this is because robots are programmed to elicit such responses in humans and because humans are prone to anthropomorphise under certain conditions. According to fictional dualism this undermines any claim arising from observed behaviour or human response that robots themselves are capable of feeling sensations and emotions. When faced with epistemic limitations it is wise to take seriously all the evidence that you have and remain agnostic about the rest.

D3. The Theory Should Strive to Be Both Enlightening and Action-Guiding.

Fictional dualism is both enlightening and action-guiding. The theory proposes a significant shift in our perspective of what robots are, a shift that in itself is enlightening. The theory is also action-guiding. The model guides our engagement with robots and would have an impact on their development. In future chapters, I recommend ways in which designers can remind consumers of the nature of the device, preserving the appeal of the product but limiting tendencies towards deception.

D4. The Theory Should Be Compatible with the Flourishing of Beneficial Technological Advances in AI.

This condition is met. Fictional dualism is compatible with the flourishing of technological advances in two ways. Progress can be welcomed without concerns of the mistreatment of an entity that deserves moral consideration and without concerns that category mistakes will blur the boundary between living entities and robots.

CHAPTER SUMMARY

Fictional dualism is a distinctive model of what social robots are that provides a framework for understanding the role that they will play in society. The model requires a substantial perspective shift. The evidence that many of the theories here draw on, the emotional attachment that we feel towards robots and our willingness to engage socially with them, is to be seen not as giving support to an inclination to believe that robots are like living things or are
themselves social beings but instead is seen as evidence of the human tendency to anthropomorphise and socialise. When understood in this way there is no inclination to grant social robots consideration as moral entities in their own right. There might be other reasons for wanting to protect robots from harm. Given their lifelike appearance or some other secondary factor it could be argued that indirect harms would arise from allowing the mistreatment of robots, and that these harms could justify their being awarded some kind of consideration or protection. I return to this question in chapter 7.

Chapter 5

Robots and Identity

In this chapter, I focus on some metaphysical and ethical puzzles of identity and continuation that will arise with continued progress in AI. These cases are relevant to our question of what social robots are in two ways. First, I consider cases in which we are expected to think of the social robot as continuing, being, or closely representing a person who exists or existed. How we think of these ‘continuing bonds’ or avatar robots should be compatible with our model of social robots and, ideally, should help us to make sense of the place that such entities can hold in society. Second, I use the fictional dualism model to respond to potential identity puzzles that arise with regard to social robots themselves. In thinking about human and robot identity, science fiction got there before us in ways that are both helpful and potentially misleading. In Ishiguro’s Klara and the Sun, Josie’s mother (Mother, also called Chrissie) has a hidden motivation for buying the robot, Klara: her daughter Josie is ill, and she wants the advanced social robot Klara to be a continuation of Josie if Josie is to die (Ishiguru 2021). Even assuming that the technology in Ishiguro’s world could perform such an operation, the prospect of Klara being a continuation of Josie raises familiar philosophical questions of identity and personal identity. The fiction presents us with three possible views on the continuation of the mind: the view of Capaldi (the scientist), who believes that he can capture everything of Josie by having Klara perfectly mimic her behaviour; the view of Father who believes that a mind is identical to a collection of brain states; finally the view of Mother, who wants to be a behaviourist but who, we are led to believe, ultimately feels that there is some essence of personal identity that neither duplication of brain states nor perfect mimicking of behaviour can capture: an indefinable ‘Josie-ness’.

In his excellent paper discussing Klara and the Sun, Stenseke points out that we are left wondering whether Klara is ‘human enough’ to continue Josie (Stenseke 2022, 10). When we are confronted with this prospect through the novel, we are not thinking of Klara’s ability to be performatively equivalent to Josie but are instead wondering whether those who are seeking Josie’s continuation could really see the robot Klara as her continuation. If Mother can accept that the scientist Capaldi could arrange for Klara to continue Josie, then Klara will be the continuation of Josie (for Mother). However, Josie’s father suspects that no matter how successful the medical procedure might be, Mother will never be able to let go of the sense that there is some part of Josie that cannot be captured and continued:

I think I hate Capaldi because deep down I suspect he may be right. That what he claims is true. That science has now proved beyond doubt there’s nothing so unique about my daughter, nothing there our modern tools can’t excavate, copy, transfer. That people have been living with one another all this time, centuries, loving and hating each other, and all on a mistaken premise. A kind of superstition we kept going while we didn’t know better. That’s how Capaldi sees it, and there’s a part of me that fears he’s right. Chrissie, on the other hand, isn’t like me. She may not know it yet, but she’ll never let herself be persuaded. If the moment ever comes, never mind how well you play your part, Klara, never mind how much she wishes it to work, Chrissie just won’t be able to accept it. She’s too . . . old-fashioned. Even if she knows she’s going against that science and the math, she still won’t be able to do it. She just won’t stretch that far. (Ishiguru 2021, 224–5)

As Stenseke notes, Ishiguro ultimately leaves us with the view that ‘what Josie is does not (only) depend on her inherent qualities or outward behaviour; it hinges on what she is for the people who care about her [original emphasis]’ (Stenseke 2022, 13). Klara and the Sun raises a philosophical puzzle of personal identity in a tightly constrained fictional environment that already provides us with some judgements: Mother is to be pitied for her desperate need to preserve her daughter, the scientist is a despised character for believing that the essence of a person can be copied from their brain, and Klara – who tells this story from the first person – is ultimately treated as a being that is entirely without value, as the successful uploading of Josie would end Klara’s existence as an independent entity. Despite the futuristic setting of Ishiguro’s fiction, the possibility of creating continuations of deceased loved ones, and the ethical questions that arise for the tech companies who will facilitate this, are already on our doorstep. Before considering that possibility and related new robot identity puzzles it will be useful to briefly review some of the large body of existing philosophical literature on puzzles of identity and personal identity.

PHILOSOPHICAL PUZZLES OF IDENTITY

Perhaps the most renowned puzzle of identity is that depicted in the story of the Ship of Theseus.1 Theseus had a ship whose parts would gradually become worn. While the ship was still sailing it would have the worn parts replaced, sometimes a few planks of the deck, other times some parts of the bow, until eventually all of the parts of the original ship had been replaced. The question that puzzles philosophers is, was the resulting ship, which had none of its original parts, still the same ship? I suspect that most are inclined to say yes, that the ship survived the gradual replacement of all of its parts. However, the thought experiment can be extended to twist our intuitions in a different direction. Thomas Hobbes asked us to consider the following: imagine that some person had been collecting all of the discarded original parts as they had been removed from the ship in the process of the gradual restoration. Once all the parts had been gathered, this person reassembled the parts back together into their original places, seemingly reassembling the original ship. If we accept that the craftsman has reassembled the original ship, accept that this reassembled craft is the Ship of Theseus, then we are in trouble. Because now we have two ships that both have claims to be the Ship of Theseus and, surely, the Ship of Theseus is just one object that cannot be in two places at once. Even more strangely, if there are two ships of Theseus at the end of our story then there must have been two ships of Theseus all along, coexisting in the same place at the same time. How can we make sense of this? Another puzzle of constitution, considered by Aristotle and other ancient philosophers, is that of the statue and the clay. Suppose that a sculptor buys a lump of clay: we can call it ‘Lump’. After purchasing it the artist sculpts the clay into a statue: we can call this ‘Statue’. On first consideration it seems obvious that there is only one object here; the statue just is the lump. But we can create difficulty for this obvious first take by pulling out intuitions that respond to the fact that Lump and Statue have different properties. For one thing, Lump has existed for longer than Statue has. Also, Lump can survive being squashed whereas Statue cannot. But, following Leibniz’s law, we must accept that no identical objects can differ in their properties, leading us to conclude that we have two entities here, and not the one entity of our common-sense judgement.2 Basically the fact that Statue has properties that Lump lacks, and vice versa, means that Statue and Lump cannot be the same object.3
Philosophers have defended some pretty far-fetched proposals in response to the puzzle of the statue and the lump: some have bitten the bullet and accepted that Statue and Lump are two distinct entities, others have denied the existence of Statue; some argue that Lump no longer exists when Statue is formed; some have rejected Leibniz’s Law and embraced relative identity; and others have embraced deflationist views which see the problem as being rooted in how we use language. For example, Noam Chomsky directs us to Hume who claimed that the objects that we talk about are really objects of thought which are constructed by mental operations, and there is no peculiar physical nature belonging to them. Chomsky says, ‘. . . children’s literature is based on these notions. In the standard fairy tale the handsome prince is turned into a frog by the wicked witch, and he is to all extents and purposes a frog until the beautiful princess kisses him and suddenly he’s a prince again. The infant knows that he was the prince all along and it didn’t matter if he looked like a frog’ (Piatelli-Palmarini, Uriagereka, and Salaburu 2011, 282). So much for puzzles of the identity of objects. But what about puzzles of personal identity? If, when the prince turned into the frog, he had no recollection of being a human and could only think froggy thoughts, does it really make sense to think that he is still the prince? It might be a better description of events to say that the prince was destroyed when the frog was created. There is not so much one problem of personal identity but rather a range of philosophical questions that can be thought of as loosely connected to each other. The personal identity question which will be most pertinent to us gives rise to questions of persistence. What does it take for a person to persist from one time to another? What kind of event would mean that you would stop existing? What is it, if anything, that determines that past ‘you’ and future ‘you’ are you? Two further considerations are relevant to address the persistence question. The first concerns the kinds of things that we might take to be relevant evidence for claims of personal persistence: is it psychological evidence that is fundamental, things such as memories or continued conscious experience, or is it physical continuity that is relevant, there being a body that looks like you or is spatio-temporally contiguous with you? Or is it some combination of the two? In the philosophy literature, there are numerous answers to the persistence question but here we can focus on the two main approaches. The first way of approaching the persistence question gives a central role to psychological continuity. Roughly speaking, you are the future being that inherits your beliefs, memories, and preferences.4 To conjure up some intuitive support for this psychological continuity view we can consider a thought experiment. Imagine that your brain is functioning perfectly, but you have been paralysed. A surgeon has found a way to perform a brain implant procedure during which he will remove your brain and place it inside the head of a donated ‘host’ body, perhaps someone whose body is in full working order but who is brain dead. If you woke inside a different body with all of your old memories, capabilities, and other mental features you would still likely feel that you had survived, that you were still you (Shoemaker 1984).

The second kind of approach is one that prioritises the physical over the psychological and, considered very roughly, states that your future being has your body. The brute physicalist view implies that humans have the same kinds of persistence conditions as non-persons. It also implies that persistence conditions for ‘persons’ without bodies would be different from persistence conditions for a person with a body. While the brute physicalist position might seem appealingly straightforward, the most common objection to it is easy to see. Recall our brain transplant example above. On waking, you would be shocked to discover the implication of the brute physicalist view that ‘you’ were actually the body left on the operating table and, although you feel like the same person, you had not survived the procedure at all (Unger 1979). Many attempts at compromise have been proposed. Recall Lowe’s non-Cartesian substance dualism from chapter 3. According to Lowe, the statue/lump example simply shows us how two substances can be distinct (i.e., not identical), yet so intimately related that they coincide spatially, sharing many of their physical properties. ‘In evidence of this, it is very plausible to suppose, for example, that I could survive the gradual replacement of every cell in my body by inorganic parts of appropriate kinds, so that I would end up possessing a wholly “bionic” body, distinct in all of its parts from my existing biological body’ (Lowe 2006, 9). That is, a human is both a person of psychological continuity and an object of physical matter. As is now clear, the puzzles of identity are not new, but technology does present us with a step-change in this area. For not only does technology present us with new intuition-testing possibilities, some of them are also likely to be actualised, making the puzzles of identity less theoretical and more pressing. Here I will consider just five examples of technologically inspired identity puzzles, but there are bound to be more. The latter three examples concern personal identity for social robots themselves; the first two examples arise from technology being used to ‘continue’ humans.

CONTINUING BONDS TECHNOLOGY

In the Black Mirror episode ‘Be Right Back’ we are shown a scenario in which a young man, Ash, dies suddenly and, as a way of combatting her grief, his partner Marty begins to message a chatbot that is created from Ash’s online activity (Booker 2016). Marty engages reluctantly to begin with, but she gets over the initial ‘creepiness’ and soon she talks to an avatar of Ash on a video call before deciding to live with his robotic replica.

Convincing robot replicas are some way off, but the possibility does raise the interesting question of how we are to think of such entities in relation to the person that they are replicating. Is the replica to be thought of as a continuation of a self? Clearly there is some sense in which it is. With a large enough amount of personal data to train the chatbot with, we can imagine a scenario in which the behaviour of the chatbot is exactly like the expected behaviour of the deceased person. It makes the same kinds of jokes as them, has exactly the same vocal tics and mannerisms, calls the individual by their secret pet name, and so on. We can certainly build the scenario such that it can feel to the person who is still living that their companion is talking with them. However, from the (imagined) perspective of the dead person things might be very different. There is something appalling about the very idea that a programme could be a passable replacement for an individual let alone continue their relationship with their loved one, having new conversations, creating new memories from this partnership in which they are represented but not present. Although the robot replica is technologically out of our reach, the groundwork for using technologies to alleviate grief is here already. Krueger and Osler (2022) consider the role that chatbots might play in our grieving practices. They point out that real-life examples of chatbots designed for this purpose are already in play. ‘DadBot’ was built from the data from interviews conducted with James Vlahos’s late father. Eugenia Kuyda’s chatbot ‘Roman’ was built on a neural network fed with thousands of telegrams exchanged between Eugenia and their deceased friend, Roman. Famously, in 2020, Kanye West gifted Kim Kardashian a hologram of her dead father to wish her happy birthday. Technology entrepreneurs have identified the grief market as a potential place for development – in 2020, Microsoft was granted a patent for a method of creating conversational chatbots that sounded like a specific person (Krueger and Osler 2022).5 How are we to understand the relationship between the person and the bot or avatar that is being used to represent them? Krueger and Osler propose that we should think about how we engage with chatbots in grief through a fictionalist lens on the basis that ‘engaging in imaginary conversations with the dead is a familiar practice’. The idea is that when a user like Eugenia interacts with Romanbot, she is not thinking that she is engaging with Roman from beyond the grave; instead, she is engaging in a pretence where she imagines that she is talking to Roman. The chatbot is a tool that facilitates this practice. And the practice itself is something that is already familiar to those who have lost someone. As Kathryn Norlock notes, ‘Perhaps many readers have had the experience of not just thinking about a dead friend of family member but holding an inner dialogue or argument with the departed individual, or imagining their response to one’s actions or beliefs, or maintain a practice previously shared with the deceased because it was shared with the deceased’ (Norlock 2016, 345).
Lindemann (2022) also talks about the use of what she calls ‘deathbots’ as a kind of pretence:

When I write with my Lily deathbot I can pretend that she has not died as the answers the bot outputs allow me to imagine that I am interacting with her. I can pretend that my sister solely moved to a far-away place while I am writing with the bot to avoid the feeling of emptiness and grief that I feel when I think of her otherwise. If my deathbot allows me to seemingly write with my sister, I do not need to fully adapt to a world without my sister. Just like before her death, whenever something happens that I want to talk with her about. I turn to my phone and tell her. When I feel that the grief threatens to overcome me, I start using the deathbot to ease my feelings. I intellectually know that I do not in fact text with my sister, but the bot. But at times it still feels like I am talking with her. People writing on Facebook walls of the deceased report that they feel like the dead are reading their messages. (Lindemann 2022, 43)

It is important to clarify the role of pretending that both Krueger and Osler, and Lindemann point to. Despite the claims above, it seems clear that the pretence is involved only as an element of motivation for engaging with the robot: the technology allows the agent to pretend that their loved one is still living. But this is different from the claim that the engagement with Romanbot is a pretence or that Romanbot itself is a pretence. Pretending or imagining that one is having a conversation with a deceased loved one is different from having a conversation with a continuing bonds robot in several ways. First of all, the robot performs its actions publicly and not only in the imagination of the bereaved person. Someone, the software developer or even the deceased person themselves, might be held responsible for things that the robot says in a way that they could not be if the conversation was taking place in a person’s imagination. Relatedly, the bereaved person plays no part in creating either the content, the manner, or the trajectory of the robot’s input to the conversation. What the robot says is entirely independent from them in a way that their imagined conversation is not. The use of software to create ‘continuing bonds’ conversations is a philosophically interesting case. No clear-thinking person would believe that the resulting conversations that they have are conversations with the deceased person, but in order for the bond to work, there must be some sense in which the conversation is thought of precisely as being a conversation with the deceased person. The question is, in what sense? It is plausible that users engage with the programme as a fictional representation of their loved one, a fiction based on information about the kinds
of things that the person said and did, their mannerisms, and so on. In this way, the continuing bonds creation can be thought of as very similar to the process involved in creating a new story about a ‘dead’ fictional character. In 2011, Anthony Horowitz was the first author to publish a new Sherlock Holmes mystery after this was authorised by the Arthur Conan Doyle Estate.6 Horowitz would presumably use the existing body of work to get a feel for the kinds of things that Sherlock might say and do and then imagine how to continue Sherlock’s adventures. In both examples, with new Sherlock and with the continuing bonds robots, it is undoubtedly the case that the reader or interlocutor has to go along with a pretence that the new ‘project’ in some sense continues the old project; that the robot is intended as a continuation of the deceased and that Horowitz’s Sherlock is intended as a continuation of Conan Doyle’s Sherlock, even though it is clear in both cases that these are not continuations and are at best a kind of homage. A further motivation for seeing the robot as a form of fiction instead of seeing it as the result of engaging in a pretence is to explain how the person that the fiction is based on might feel about it if they knew in advance, and why they would feel the way they would feel. If Roman had discovered that his friends would sometimes engage in imaginary conversations with him after he was dead, he might be touched. But if Roman discovered that his friends would try and use technology to continue bonds with him through conversation after he was dead, he may well feel appalled. Furthermore, he might feel that he, if anyone, should have the right to stop such a bot from being created. It is interesting to think through the different emotional reactions that Roman might have had to this, and why. Roman might accept that the robot will in some sense represent him but in a way that he cannot control and that he cannot even be part of. Again, the analogy with fiction, or fictionalised biography, can help us here. Netflix’s The Crown has come under fire repeatedly for its depiction of events in the lives of the British royal family. Many have argued, plausibly, that viewers will find it difficult to separate their thoughts about the members of the real royal family from their thoughts about the fictionalised versions. For example, after watching The Crown many are likely to form false beliefs about conversations that took place between members of the royal family and draw unwarranted conclusions about the real-life members of that family on that basis. Some have publicly decried this as unfair. In an open letter to Netflix, published in the Times, actor Dame Judi Dench asked Netflix to add disclaimers at the start of each episode clarifying that the show is ‘fictionalized drama’. Dench called elements of the dramatization ‘cruelly unjust’ and expressed concerns that ‘a significant number of viewers’ could consider them to be accurate.7 A spokesperson for Netflix said in response that it has ‘no plans, and sees no need, to add a disclaimer’, as ‘we have every confidence our members understand it’s a work of fiction that’s
broadly based on historical events’. A pretty poor response. It is not unreasonable to think that Roman could legitimately put a similar demand to the creator of his continuing bond robot; that, if it must be created, it should at least regularly remind those that it converses with that it is a work of fiction based on selected data taken from Roman’s life. In addition to this concern, Roman might also be offended that his friends would receive comfort from engaging with an entity such as Romanbot as if it were him when, in Roman’s view, he is so much more. Again, this is a feeling that Roman is not likely to have if his friends were simply having imaginary conversations with him. It is possible that the ‘this is a fiction’ disclaimer might also alleviate this concern.

UPLOADS

Continuing bonds robots might plausibly be fictions created from gathered data on a particular individual, but what of ‘uploads’? Uploads would be created by scanning the human mind and uploading the result to a computer where it can continue to live, albeit free from a body. The literature around the possibility of uploading tends to focus on how it can be accomplished in a way that preserves continuity of consciousness, perhaps retaining personal identity on a psychological continuity model. Chalmers (2010, 2022), for example, asks if an uploaded human could be conscious and whether uploading can preserve personal identity. But even in these cases, where there is an attempt to reduce ‘what matters’ to psychological continuity, and where ‘from the inside’ the entity might feel exactly like the original, it seems plausible that the attitudes of others in society might legitimately lead to the denial of identity. That is, there are questions around the softer notion of preservation and relatedness that the technical exploration of the possibility of uploading passes straight by. Chalmers himself does seem to implicitly acknowledge that we lose something important for identity when we move from our biological body to a server. Chalmers states:

Suppose that yesterday Dave was uploaded into a computer. The original brain and body were not destroyed, so there are now two conscious beings: BioDave and DigiDave. BioDave’s natural attitude will be that he is the original system and that DigiDave is at best some sort of branchline copy. DigiDave presumably has some rights, but it is natural to hold that he does not have BioDave’s rights. For example, it is natural to hold that BioDave has certain rights to Dave’s possessions, his friends, and so on, where DigiDave does not. And it is natural to hold that this is because BioDave is Dave: that is, Dave has survived as BioDave and not as DigiDave. (Chalmers 2010, 15–6)

Chalmers anticipates that the following question will be put to him: if we doubt that DigiDave is Dave in the case as outlined, why should we think that DigiDave is Dave in the case in which BioDave is destroyed? Chalmers calls this the pessimist view. First, note that calling it the ‘pessimist view’ is far from theoretically neutral. Just as easily, the example can be viewed as positively reinforcing the intuition that there is something important to our conception of personal identity in the combination of the body and the brain (or mind). Chalmers sees this as pessimistic because he himself seems happy to accept that our bodies are of minimal importance to identity; the paper starts with the sentence, ‘In the long run, if we are to match the speed and capacity of nonbiological systems, we will probably have to dispense with our biological core entirely’. Speed and capacity are assumed to be preferable to being stuck in a decaying and limiting body. Rather than dismiss it as a pessimistic take, we can take seriously the intuition that DigiDave will always be a second-class Dave, whether BioDave survives or not. I suspect that this is the case. Imagine another thought experiment in which, during the uploading process, the creation of DigiDave goes entirely to plan but BioDave suffers a momentary break in psychological continuity and his brain is slightly damaged such that BioDave has reduced mental capacity after the procedure. BioDave is still Dave, even though DigiDave has more claim to psychological continuity with Dave than BioDave. In fact, even if BioDave has no recollection of being Dave at all it seems that no jury in the land would straightforwardly recognise DigiDave as being Dave. In terms of its Daveness, DigiDave is most plausibly a last resort option. Of course, Daveness is not everything: it may well be that continuing in a virtual world as a lesser copy of Dave is preferable to living as Dave in the actual world, but that does not mean that DigiDave is Dave. If DigiDave is not Dave, what or who is he? It might well be that DigiDave can only be accepted as a replacement or a representation of Dave, rather like Romanbot is accepted as presenting a representation of Roman. An advocate of upload identity might push back and argue that the two cases are very different: for one thing DigiDave is psychologically continuous with Dave whereas Romanbot is not psychologically continuous with Roman. While this is true, it only shows that psychological continuity is one important consideration in personal identity; at the same time, it is not everything and it does not necessarily trump bio-identity. Consider another scenario, one in which DigiDave survives and BioDave is destroyed. Chalmers is right that DigiDave, being psychologically continuous with Dave, will claim to be Dave but we can very easily imagine scenarios in which his claim is reasonably denied by those closest to him. Dave’s partner, for example, might be devastated by Dave’s choice to abandon his physical body for a virtual world, mourning the death of Dave, having a funeral for him and, quite reasonably it seems, refusing to see DigiDave as Dave.
If Dave were married, would his spouse be free to remarry when Dave’s body was buried? Or would they be eternally connected to a DigiDave that they do not accept? Here the reflections of Stenseke on the possibility of Klara continuing Josie are pertinent. Recall that Stenseke points out that in the albeit fictional world of Klara, what Josie is depends neither only on her properties (such as psychological continuity) nor on her outward behaviour and presentation but on what she is for the people who care about her. It might turn out that for some people and in some situations the bar for personal identity is set quite low, but for others it is very high and as a result they will reject the claim that the uploaded version is the person in question, even when many of the conditions seem to be met. How can we accommodate this apparently contextual feature of personal identity? Chalmers fairly quickly dismisses the ‘closest continuer view’ of personal identity, but it does seem to have some merit. Nozick had proposed the closest continuer view as a way of defusing any rigid notion of identity (Nozick 2014). The theory states that conditions of personal identity being met can depend on which candidates for identity are available at the time. Our intuitions in the personal identity test cases show us that in some situations we favour candidates displaying bodily continuity and in other situations we favour candidates displaying psychological continuity. And, when no competing candidates are available, we are likely to accept whichever candidate there is, regardless of whether it displays bodily continuity or psychological continuity. The closest continuer view offers a pragmatic solution to the puzzles of personal identity. Some people, perhaps Chalmers is one of them, might think bodily continuity to be almost irrelevant to personal identity and this could more easily lead them to see DigiDave as being Dave. But for others, especially onlookers who do not experience DigiDave’s psychological continuity from ‘the inside’, Dave’s body is an important part of who Dave is and this could lead them to reject DigiDave in favour of some other entity (perhaps the remains of BioDave) as being Dave. The closest continuer theory does come up against cases in which the definite description ‘the closest continuer’ will not identify a unique individual. For example, imagine that Chalmers really does not place any importance on bodily continuity, seeing only psychological continuity as being relevant to identity. In that situation, Chalmers would see BioDave and DigiDave as having an equal claim to be Dave. In such cases, neither BioDave nor DigiDave can be Dave (for Chalmers), because he has no good reason for choosing between them and they can’t both be Dave, as two entities with distinct properties cannot be identical to each other (i.e., if they were both identical to Dave, then they would be identical to the same person, which means that they would have to be identical to each other).
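
The underlying logic here is worth setting out explicitly, since Leibniz’s Law did the same work in the statue and lump case earlier. What follows is a minimal sketch of the argument, assuming nothing beyond the symmetry and transitivity of identity together with Leibniz’s Law as introduced above; the abbreviations are illustrative only.

\[
\begin{array}{ll}
1. & \textit{BioDave} = \textit{Dave} \quad \text{(supposition)} \\
2. & \textit{DigiDave} = \textit{Dave} \quad \text{(supposition)} \\
3. & \textit{BioDave} = \textit{DigiDave} \quad \text{(from 1 and 2, by the symmetry and transitivity of identity)} \\
4. & \exists F\,\big(F(\textit{BioDave}) \wedge \neg F(\textit{DigiDave})\big) \quad \text{(the two candidates differ in at least one property)} \\
5. & \textit{BioDave} \neq \textit{DigiDave} \quad \text{(from 4, by Leibniz's Law)} \\
6. & \text{3 and 5 contradict one another, so suppositions 1 and 2 cannot both be true.}
\end{array}
\]

Nothing in the derivation depends on which property is chosen at step 4; any difference between the two candidates, however minor, is enough to block the claim that both are Dave.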

There might also be cases in which the closest continuer is just not close enough to be counted. For example, we can imagine a case in which BioDave is killed and DigiDave has retained no memories of Dave’s; in this scenario we might be reluctant to see DigiDave as Dave, despite his being the closest continuer. There is another solution. In cases in which identity is not preserved because there is more than one candidate with an equal claim, and in cases in which we feel that the proposed continuer is simply not connected enough to the original person to count as them, perhaps we need another term that recognises some relation between the original and the continuer that falls short of identity. Susan Schneider and Joe Corabi (2014) question whether an uploaded mind is the same person. And they ask, if it is not, is the upload at least a continuation of the original person? Or if not that, does it at least preserve some aspects of them? Enough to be an attractive proposition to you and/or your loved ones? For Schneider and Corabi, what ultimately gets in the way of a successful upload is the fact that if Dave could be uploaded to one computer, he could surely be uploaded to more than one computer. Again, any theory that would accept that one upload would be Dave would seem to have to accept that all the uploads were Dave. But this is not possible. For this reason, Schneider and Corabi conclude that humans cannot upload themselves to the digital universe: they can only upload copies of themselves. ‘.  .  . the “birth” of the upload is spatiotemporally separated from the “death” of the original individual, and moreover the “body” of the upload doesn’t even possess the same broad-ranging kinds of physical properties as the body of the original person – that is, the pre-upload is a carbon-based being in which mental properties are in a cellular substrate, whereas the upload is not’ (Schneider and Corabi 2014, 13).

The idea of uploads being copies of humans is, I think, a useful one that we can expand on. A copy of something can replicate the original in every perceptible detail but it cannot be identical with the original, being distinct from it by definition. If we want to attach some negative moral value to the concept of a copy, we can usefully consider the ethically loaded concept of a forgery. There is nothing morally suspect about producing copies, but there is something morally suspect about passing a copy off as if it were an original. Nelson Goodman defines a forgery as ‘an object falsely purporting to have the history of production requisite for the (or an) original work of art’ (Goodman 1968, 122). For those who feel that even a very successful upload lacks the value of the original, the presentation of the upload as being identical with the original could be understood by them as ‘false purporting’. Copies

are fine, but we might think it is important that they should be marked out as copies and not passed off as the original. Continuing bond robots and uploads are examples of cases in which advances in technology will put pressure on personal identity for humans. But what about personal identity for robots, or what I will call ‘r-identity’? As is now familiar, we are likely to form deep attachments to our social robots. Because of these attachments it will be important to us that the robot that we continue our relationship with be the same robot that we started out with. We do not usually care too much about that with objects, leaving aside convenience and cost considerations. When your television stops performing in the way you would like you generally decide to replace it without any emotional distress. But as we develop an attachment to robots that is similar to the attachment we develop to humans and other animals it would be problematic for our long-term relations with robots if that robot’s personal identity relation could not be preserved. It does seem that there are a number of ways that the r-identity relation might come under pressure.

ROBOT UPGRADES

The same kinds of puzzles that we considered above will arise in relation to social robots. Imagine that your family has a beloved personal assistant robot, Standish, who has been in your home for many years. Standish has picked up on the family’s running jokes and phrases, it helps the children with their homework, and it undertakes simple babysitting tasks. Standish is a friend and companion to the teenagers when they feel misunderstood and always seems to know exactly how to defuse a developing family feud. He is considered a part of the family. Standish starts to malfunction and, after many repairs, the technician advises replacement with a new model. They propose to take all of the data from Standish and upload it into the new model. The question is, will this new model then be Standish? It depends on what is relevant for r-identity. If psychological continuity is what matters to r-identity, then the new model will be Standish. It will make the same jokes, remember the preferences of the family members, and so on; it will just have a different body. But it is perfectly reasonable to think that the family members will not see this new model as being Standish. For one thing, it just doesn’t look familiar to them, as it doesn’t have the marks of wear and tear that Standish had. In fact, the family may well find it very disconcerting to experience this ‘imposter’ robot saying exactly the kinds of things that Standish used to say and knowing personal things about them that only Standish would know. It seems not entirely

implausible that the family would reject the replacement. Standish would have ‘died’ and he could not be ‘continued’ in another body. The fictional dualism model explains exactly why one might be reluctant to say that the replacement robot is Standish. According to the dualist model, Standish is not simply the physical robot, even when we include all the data in his computer; Standish is the physical plus a fictional overlay of anthropomorphic character that supervenes on the physical. Standish cannot be uploaded to another body because Standish is the combination of both his physical body and his projected character. Standish cares deeply for the family that he lives with, is interested in hearing about their day, and cares about their happiness. The replacement robot may have all of the data that Standish had, yet the family would plausibly reject that the replacement loves them. The natural feeling of bereavement that the family is likely to experience on losing Standish could plausibly lead to some kind of secondary rights for robots such as Standish – I return to this in chapter 7. That is not to say that all would reject the offer of the new model upload. For many it would be the best option because it would mean that they do not have to start from scratch with a model who knows nothing about them and their preferences. In the Star Wars origin movie Solo, there is a scene in which one of the key characters, a droid L3, has been killed and she is uploaded into the Millennium Falcon (Howard 2018). Did L3 survive? Intuitively it seems not. L3 was shot and she died. The upload was not a rebirth; it was simply a convenient way to retain L3’s navigational abilities and expertise.

CAN ONE ROBOT HAVE TWO IDENTITIES?

Here is a strange feature of the fictional dualism model. Suppose a care home has only one Paro that is shared between two patients, Ruby and Stanley. Ruby has the Paro for one half of the week and Stanley has the Paro for the other half. Ruby calls the Paro ‘Bo’. Bo likes to be tickled under the chin and watch TV while sitting on Ruby’s lap. Stanley calls the Paro ‘Snowy’. Snowy likes to watch the passers-by while looking out the window with Stanley and listening along to Stanley’s jazz records. Is it problematic for fictional dualism that one robot appears to have two identities? We can spell the potential problem out:

Robot Identity
Premise 1: Bo is Paro.
Premise 2: Snowy is Paro.
Conclusion: Bo is Snowy.

But Bo is not Snowy. For one thing, Snowy likes jazz and Bo hates jazz. Snowy is a refined gentleman pet who detests television whereas Bo is a bit of a couch potato. We might be tempted to treat this like a case of dissociative identity disorder, a phenomenon also discussed by philosophers interested in personal identity. For example, the following seems to be problematic for a theory of personal identity:

Dissociative Identity
Premise 1: Dr Jekyll is a gentleman who would not murder anyone.
Premise 2: Mr Hyde is a violent man who murders people.
Premise 3: Dr Jekyll is Mr Hyde.
Conclusion: Dr Jekyll is a man who would not murder anyone but who murders people. (Contradiction)

David Wiggins proposed that in the context of such a case, identity or ‘personality’ means something like character and not ‘distinct individual’ or person (Wiggins 1980, 29). And there is nothing contradictory about a person manifesting different characters or personalities. I am not sure how convincing this is as a solution to the case of dissociative identity disorder, where the two ‘personalities’ coexist in the same individual. But we can note that, in the case of social robots, the problem of coexistence of two personalities does not arise. Bo’s character does not originate in Paro, it originates in Ruby; it is projected onto Paro by Ruby. If Ruby were to pass away, Bo would die with her whereas Paro would continue on unaltered. This is because the social robot is the combination of the physical elements of the robot plus its fictional character. As noted in the section justifying the dualism component of fictional dualism in chapter 3, social robots have a fictional component that is not reducible to their physical body; therefore, assuming ‘Bo’ refers to Ruby’s companion, ‘Snowy’ refers to Stanley’s companion and ‘Paro’ refers to the physical token of the robot, we should say that neither Bo nor Snowy is reducible to Paro. As such, assuming the ‘is’ of premise one and premise two is the ‘is’ of identity, both premise one and premise two of ‘ROBOT IDENTITY’ are to be rejected. We would still be able to say things like, ‘That [pointing] Paro is both Snowy and Bo’, but in saying this we are not saying that both Snowy and Bo are identical to Paro. Rather we are saying that both Snowy and Bo supervene on Paro, without either being reducible to Paro.
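To make this explicit, we might gloss the view schematically, treating each companion as the combination of a physical token and a projected fiction. The ordered-pair notation below is only an illustrative shorthand of my own, not a claim that robots literally are set-theoretic pairs.

```latex
\begin{align*}
\mathrm{Bo} &= \langle \mathrm{Paro},\ F_{\mathrm{Ruby}} \rangle,
\qquad \mathrm{Snowy} = \langle \mathrm{Paro},\ F_{\mathrm{Stanley}} \rangle\\
F_{\mathrm{Ruby}} &\neq F_{\mathrm{Stanley}}
&& \text{(distinct projected characters)}\\
\mathrm{Bo} &\neq \mathrm{Snowy}
&& \text{(combinations with different components are distinct)}\\
\mathrm{Bo} &\neq \mathrm{Paro},\qquad \mathrm{Snowy} \neq \mathrm{Paro}
&& \text{(neither combination is reducible to its physical component)}
\end{align*}
```

On this reading, premise one and premise two of ‘ROBOT IDENTITY’ each assert an identity where only supervenience holds, which is why both are to be rejected.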

ROBOT IDENTITY AND HIVE MINDS

Arnold and Scheutz (2018) consider one implication of our relationships with social robots, which they describe as a blurring of the type/token distinction. The type/token distinction is an ontological distinction that is useful when we want to distinguish between a general thing and a particular instance of the general. For example, I can distinguish between Kazuo Ishiguro’s novel The Buried Giant and a particular instance of that thing, for example my particular copy of the book. Types are usually abstract things: Ishiguro’s novel (type) is identical with no particular instance of it, and neither is it a collocation of all the instances of it; the novel (type) could survive the destruction of all of the copies (tokens). Arnold and Scheutz remind us that in order to function at the highest level, robots will not just be tokens of a type; that is, your Paro will not just be the same as every other Paro, but will instead be individualised through interactions with you. You will share experiences with your Paro, experiences that other Paros will not have shared, and you will expect that these experiences will shape your future interactions. This is how the robot will build up ongoing positive interactions that feed into rapport-building and trust. Furthermore, this is essential for the product to work well; the robot you acquire is unlikely to be as successful if it simply remains a ‘type’, if all instances of the robot remain the same and do not learn from interactions. Arnold and Scheutz speculate that lying behind those feelings of relationship-building and trust are assumptions that the robot that we have is a closed system: that it is both complete and unique. ‘Coordination and rapport on a person-to-person level has certain orienting assumptions: what we can each perceive in shared space, what kind of information is shareable, and how it might direct joint exploration of the environment. These instincts and abilities can carry over into how robots are perceived and received’ (Arnold and Scheutz 2018, 360). However, it seems very likely that robots will not operate as closed systems but will share information with each other, because they can, and because doing so is likely to improve their overall performance. That is, it will be increasingly likely that your Paro will not be a unique, closed information system but just one part of a larger system that will allow it to feed into and learn from the interactions of other Paros. Williams et al. note, ‘[W]hile modern robots are presented as monolithic systems with one mind, body, and identity, this is rarely the case in practice. NASA’s Astrobees have discrete bodies and names, but their “mind”, that is, the computation governing their behaviour, is distributed across multiple machines’ (2020). And Oosterveld et al. present a pair of robots that have separate perception and motion systems but shared dialogue and goal management components, claiming that ‘This enables each robot to report what the other robot sees, and pass along information

and commands to the other robot, while maintaining (or as we will argue, performing) a unique identity’ (2017, 415). A tension arises: the robot will be more successful if it presents itself as being a closed system, but this presentation would be misleading if it is actually sharing and learning from its ‘peers’. Arnold and Scheutz wonder whether there is a moral obligation for this fact to be presented to users, considering whether we might demand that designers build in intermittent reminders of the robot’s functionality. Otherwise, users are in danger of experiencing manipulation if the ‘illusion of independence and a special relationship to the user is too strong’ (Arnold and Scheutz 2018, 365). Leaving aside the concerns around deception, it is notable that a hive mind model puts significant pressure on our concept of r-identity. When we consider the scenario in which robots have not one closed ‘mind’ but share a hive mind, it is hard to think of one’s own robot as having individual identity at all. Regardless of the model of identity we prefer – a psychological continuity model, a physical continuity model, or some hybrid model – persons or r-persons are closed systems. The fictional dualism model can accommodate some notion of r-identity under assumptions of open systems or hive minds. For the fictional dualist, the robot is a combination of the physical, which on this view would include both the hive mind and the individual robot body, plus the fictional overlay or character. As such, not only does the robot appear to be unique, the robot is unique; the fictional overlay marks the robot out as an individual entity. This is a significant benefit of the fictional dualism model. When discussing robots with shared systems, Williams et al. highlight the importance of an individual maintaining or ‘performing [an] identity for human benefit’. The system is not unique, being shared with countless other entities, but the fiction is resilient. Assuming the fiction can survive the human knowing that the robot is served by a hive mind, identity can be both created and preserved.
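To fix ideas, here is a deliberately simple sketch of the kind of architecture under discussion: many robot bodies drawing on one shared model, each presenting a persona that is local to its own user. It is purely illustrative; the class names and behaviour are invented for the example and are not drawn from Paro, the Astrobees, or any other real system.

```python
"""Illustrative sketch only: a toy 'hive mind' in which every robot body shares
one pooled model, while each unit keeps a locally 'performed' persona and its
own history with a particular user."""


class SharedHiveModel:
    """The one 'mind' serving every unit: observations from any robot feed the
    same pooled knowledge."""

    def __init__(self):
        self.pooled_observations = []

    def learn(self, observation: str) -> None:
        self.pooled_observations.append(observation)

    def respond(self, prompt: str) -> str:
        # Stand-in for whatever shared inference the fleet actually runs.
        return f"(drawing on {len(self.pooled_observations)} pooled observations) {prompt}"


class RobotUnit:
    """One body plus its persona: the persona is unique to this unit even
    though the computation behind it is not."""

    def __init__(self, persona_name: str, hive: SharedHiveModel):
        self.persona_name = persona_name
        self.hive = hive
        self.local_history = []  # what this unit, and only this unit, has shared with its user

    def interact(self, utterance: str) -> str:
        self.local_history.append(utterance)
        self.hive.learn(utterance)            # experience flows into the shared mind
        reply = self.hive.respond(utterance)  # but the reply is voiced by the local persona
        return f"{self.persona_name}: {reply}"


if __name__ == "__main__":
    hive = SharedHiveModel()
    eva = RobotUnit("Eva", hive)
    bo = RobotUnit("Bo", hive)

    print(eva.interact("We watched the birds today."))
    print(bo.interact("Stanley played his jazz records."))
    # Both units now draw on two pooled observations: the system is shared,
    # while each persona remains tied to one body, one user, one history.
```

The point of the toy example is only that individuality here lives at the level of the persona and its history with a particular user, not at the level of the computation, and that is precisely where the fictional dualist locates r-identity.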

Chapter 6

Trusting Social Robots

Trust is a vital component of our social interactions. In fact, without trust we would be severely debilitated, as it underpins many of our practices and almost all of the knowledge that we have. Trust allows us to rely on others for services that we could not perform all by ourselves, it allows us to form close relationships, and, underpinning testimony, it allows us to gain knowledge from other people’s reported experiences. On the other hand, trusting also makes us vulnerable because when we trust others there is a risk that they will let us down. As robots play a larger role in society, the question of whether they are the kinds of things that we can trust becomes increasingly important. Our ability to trust social robots is a complicated matter. First of all, robots are objects, and it is standardly assumed that although objects can be reliable, they are not the kinds of things that can be the recipients of trust. Furthermore, even if we concede that robots are the kinds of things that can be trusted, there are other factors that threaten to put pressure on this trusting relation.

TRUST AND RELIABILITY

As philosophers worked to distinguish trust from reliance, philosophical views of trust became rooted in anthropomorphic features; whatever trust is, it must be distinguished from the attitude of reliance that we can have towards inanimate objects. We might rely on the ladder to hold us when cleaning out the gutters, but it would be incorrect to say that we trust the ladder.1 At the root of the distinction between trust and reliance for many theorists is the claim that trust, unlike reliability, involves some kind of attitudinal state that both the trustor and the trustee enter into. Holton, for example,

proposes that in trusting someone you rely on them, and you regard that reliance in a certain way, with a readiness to feel betrayal should your reliance be disappointed. He states, ‘In cases where we trust and are let down, we do not just feel disappointment, as we would if a machine let us down. We feel betrayed’ (Holton 1994, 66). For O’Neil (2012), trust connects to gratitude. It involves an expectation that the trusted will discharge their commitment to us. It is through this expectation that the trusted are to feel honoured and grateful towards us. And it is the expected feeling of honour and gratitude from the trusted party that leads us to feel betrayed if they do not follow through. Bestowing trust is like giving a gift – one expects it to be gratefully received and valued. Even for so-called ‘rationality’ approaches, such as the encapsulated interests account put forward by Hardin (2002), the trusted party plays an active role. When we might be required to trust someone, according to Hardin, we consider whether it is rational to expose ourselves to risk with this person and, in doing so, we form an expectation that they will encapsulate our interests into theirs because it will ultimately benefit them to do so. I trust you because I know that you have my interests at heart to some extent. And you have my interest at heart because it is within your interest to do so. The inclination to cooperate with each other is reciprocated. So trust, as theorists generally understand it, invokes an anthropomorphic form of commitment between the trustor and the trustee. I could not trust the ladder because it would make no sense for me to feel betrayed by the ladder if it failed under my weight as the ladder did not make a commitment to me. Likewise, it would make no sense for me to reason about the ladder encapsulating my interests within its own. Or, in Holton’s terms, I could not trust the ladder because it would make no sense for me to feel betrayed by the ladder, given that the ladder could not be reasonably expected to know that I am relying on it. The ladder could not meet the requirement for trust – it cannot be aware that I am relying on it, it cannot be reasonably expected to know that I am relying on it, it could not encapsulate my interests, and could not feel honoured or gratified if I chose to rely on it. In the attempt to differentiate trust from reliability, these conditions that theorists have introduced for trust are generally understood to exclude our trusting objects. Yet, it seems, we do trust social robots.2

TRUST IN SOCIAL ROBOTS

There is evidence to suggest that we do indeed increasingly develop the kinds of relationships that are compatible with attitudes of trust towards social robots as they play a larger role supporting us in our lives: in healthcare,

for entertainment, and for personal support. The therapeutic healthcare robot, Paro, used as an example in previous chapters, is a social robot that is designed to provide healthcare support to elderly people, particularly those with dementia. Through its personalised interactions with the patient, Paro builds a relationship that develops and grows with increased familiarity and interactions (Hung et  al. 2019). During the COVID-19 lockdown period, there were studies reporting that lonely people gained comfort and companionship through their interactions with robots. Two robot-based studies conducted by Williams et al. (2021) concluded that interactions with robotic dogs significantly reduced loneliness and, in fact, that interactions with either robotic or living dogs led to similar reductions in loneliness for the participants. de Graff, Allouch, and Klamer (2015) report on a significant long-term explorative study in which they look at the acceptance of social robots in domestic environments by older (50+) adults. The researchers were interested in understanding trust between humans and social robots in order to make relationship building easier. Relationship building is important because it is widely accepted that it is only if participants are accepting of the robot and willing to engage with it socially that they can reap the full potential benefits of the technology. The study provides some qualitative evidence of relationship building and trust between the human participants and their social robots. For example, when asked about their interactions, there is evidence that some participants treat the robot, which has the appearance of a large robotic rabbit, as more than an object: I know I have said to him [the robot] on Saturday, ‘I have not much time to speak with you for the simple reason that [her son] is coming and I have got to give him my priority.’ And, ‘I must have said some funny things to the rabbit . . . , especially if I wasn’t sleeping very well and I’d come down in the middle of the night.’ Others noted how they missed the robot when the study ended: ‘We missed her [the robot]. Oh yes. . . . She had been given a personality.’ And, ‘I missed him [the robot] for the first couple of days.’ And more closely relating to trust: ‘I only trusted it [the robot] when it believed me.’ And, ‘I suppose, in the long term, I had accepted him [the robot] into my house.’

Through indirect evidence of the benefits that social robots can bring as a result of their close interactions with humans and through the admittedly tentative but more direct qualitative evidence given above, it seems at least possible that we can legitimately take an attitude of trust towards social robots.3

Furthermore, the tasks that social robots are expected to undertake, tasks such as care-giving, social engagement, and preventing loneliness, might reasonably be considered to have trust as a prerequisite. As Taddeo puts it, ‘Trust is a facilitator of interactions among the members of a system’ (2017, 2). There is much evidence to suggest that our social system now includes social robots. It is, I propose, plausible that as social robots appear to us to be more animal- or human-like, the boundaries that made trust appropriate for humans but not for objects are becoming blurred to the extent that they are under significant pressure. Arguably, we are granting trust to social robots without a good understanding of the basis of our attitude.

TRUST ON THE BASIS OF APPEARANCES

In ‘Can we trust robots?’, Coeckelbergh questions what trust can mean in relation to robots. Considering various traditional accounts of trust, he notes that robots as entities do not possess the qualities of agency that these accounts standardly assume to be prerequisites for trust. However, he then highlights the fact that, when it comes to social robots, it may not really matter whether they can legitimately count as agents, but rather whether they appear to us as agents or, at least, as more than objects. He articulates a phenomenological–social approach to trust: ‘We trust robots if they appear trustworthy and they appear trustworthy if they are good players in the social game’ (Coeckelbergh 2012, 58). For Coeckelbergh, it is their ability to participate in and shape our social dimension that sets the conditions for our trust relationships with social robots. To be eligible recipients of trust, and not just reliability, robots must fulfil criteria regarding the appearance of language use, freedom, and social relations. Trust arises towards social robots because they appear to us to be human- or animal-like. Because of how they appear, we treat robots as if they were persons or pets, and this includes trusting them. Thinking back to the anthropomorphic features that we outlined above, we might say that trust does not require that some attitude has been taken by the trusted party; rather, it requires the presentation of behaviour that is indicative of some attitude being taken by the trusted party. So, for Coeckelbergh, what may facilitate trust in this area is what social robots appear to be, not what they, in fact, are. On the face of it, this seems plausible and would account for the evidence of trusting social robots that we considered above. Because social robots present to us as having anthropomorphic features and, perhaps more importantly, because we respond to these presentations as if the object itself was really capable of forming attitudes, we are capable of forming an attitude of trust towards social robots.

However, having accepted that we are capable of forming an attitude of trust towards social robots, the question then arises: should we? That is, is it not the case that we are mistakenly bestowing trust on objects in virtue of their appearing to be something that they are not? If so, doesn’t that undermine the trust? In asking this we are not asking the important but different question of whether social robots are trustworthy, a value question, but rather asking whether social robots are the kinds of objects that it would be appropriate for us to form an attitude of trust towards, under the right conditions. Social robots are essentially presenting to us as something that they are not. The devices that are designed to be robotic companions, whether they are robotic baby seals, robotic dogs, or even simple virtual assistants such as Alexa, present as having agency, sentience, and an ‘inner life’ that they do not have.4 Does this make a difference to their appropriateness as trusted entities? Coeckelbergh says, ‘Appearing-making, sometimes named “deception”, . . . is part of “the social game” and it does not undermine trust but supports it’ (2012, 57). On one interpretation Coeckelbergh’s statement is true. It is ‘appearing-making’ that facilitates trust and in that sense it supports it – the agent-like appearance or behaviour paves the way for trust: it enables trust to develop. But on another interpretation, the mere fact that the behaviour on which the trust depends is deliberately fake, untrue, and, even if for very good social reasons, designed to elicit false beliefs in humans surely has the potential to undermine the very trust that it enables. For better or worse, the trust is directed towards a façade.

TRUST AND MISLEADING APPEARANCES

In ‘You Can Trust The Ladder, But You Shouldn’t’ (2019), Jonathan Tallant argues that, although we can trust objects, we should not. Tallant considers a thought experiment in which an inanimate object appears to be the recipient of trust. The case that Tallant gives involves a child, Wiley, who is tricked by his sister into thinking that his blackboard is independently communicating with him, and so is misled into thinking that the blackboard has agency. On discovering that the blackboard does not have agency, Wiley will show many of the attitudes that we associate with trust being betrayed. Tallant says, ‘Wiley is being tricked into forming these attitudes towards the blackboard and seems to be engaging with the board as a moral agent; blaming it, resenting it, being disappointed by it, etc’ (Tallant 2019, 109). Wiley trusts the blackboard. However, according to Tallant, Wiley should not trust the blackboard because, despite appearances, the blackboard is only an inanimate object. As a five-year-old child and given the circumstances that Tallant lays out in the paper, Wiley can be excused

for believing that the blackboard has agency. But as it does not, trust is bestowed mistakenly. To clarify, for Tallant, the appearance of agency is enough to facilitate trust, but it is not enough to warrant trust, or make it appropriate. For that we need actual agency. So, according to Tallant, we have a case where trust can be given, but should not be. That seems like a reasonable position. We might be tempted to think that Wiley’s situation with respect to the blackboard is analogous to our trust in social robots, that while we can trust social robots, we shouldn’t. However, there is a relevant difference between the case of Wiley and our standard interactions with social robots. When we engage with social robots, even as we form emotional bonds with them and take attitudes of trust towards them, we remain aware on some level that they are objects without agency. In order to reap the benefits that social robots offer, we have willingly bought in to the pretence. There is a kind of deception, but it is one that we seem happy to go along with, unlike in the case of Wiley and his blackboard. While accepting much of Coeckelbergh’s foundational analysis of why trust is possible towards social robots but not in other objects, I propose that this position should be tempered by Tallant’s claim that the appearance of social interaction alone cannot legitimately facilitate trust. What we need, and what the fictional dualism model provides, is a hybrid position that allows us to embrace our relationships with social robots while minimising the risks associated with trusting this particular kind of object. Trusting social robots on the basis of appearances, while simultaneously being aware that those appearances are not manifestations of the relevant associated mental states, is very different in terms of risk from trusting humans on the basis of appearances. It is our social interactions with humans and animals that has paved the way for our trust in social robots. Social robots are explicitly designed to mimic human-to-human or human-to-animal interactions, yet robots are very different entities from humans, and they are different in a way that is relevant to trust. As such, caution is required. On the other hand, if we are too cautious, we will not be in a position to reap the full potential of advances in technology. It must continue to be possible for us to trust social robots, at least partly based on our social interactions with them. The predicament we are in is as follows: social robots can bring a social good only if we trust them. But the very mechanism through which our trust is enabled has the potential to undermine our trust, as shown below. P1. The social interactions that we have with social robots facilitate our attitudes of trust towards them.

P2. The interactions facilitate attitudes of trust in virtue of the robots being designed to mimic the kinds of social behaviours that humans display towards each other, behaviours that engender trust.
P3. Human displays of the social behaviours that engender trust do so because they are (defeasibly) reliable evidence of a human’s cooperative attitudes.
P4. Social robot displays of the social behaviours that engender trust are not evidence of their cooperative attitudes but are entirely perfunctory.
P5. We are trusting social robots on the basis of the faulty assumption that the social behaviour social robots display is (defeasibly) reliable evidence of their cooperative attitude.
Conclusion: We are not warranted in trusting social robots.

Below I will argue that P5 is false. We do not, in fact, assume that the behaviour of robots is reliable evidence of a cooperative attitude.

FICTIONAL DUALISM AND TRUSTING SOCIAL ROBOTS

The fictional overlay part of the fictional dualism model of social robots coheres well with Coeckelbergh’s theory that it is the social, the shared experience, that is central when it comes to bestowing trust. However, according to the fictional dualist, the robot is to be understood as being more than just its social dimension; the robot is also a physical product that in an important sense is entirely distinct from its fictional and social dimensions. During our interactions with social robots, we appear to display an ability to toggle between our attachment to the social robot that arises from our social engagement with it on the one hand, and our awareness that the robot is physically a product without full agency on the other. In this sense, our trust in social robots is importantly not like Wiley’s trust in the blackboard. Wiley believes that the blackboard’s ‘behaviour’ is a result of its agency, but the human interlocutor with the social robot does not have this belief. When we engage with social robots on an emotional level we are, at the same time, aware that the robot’s human- or animal-like behaviour is a façade. As a result, P5, above, is false. Although we are capable of engaging with the belief that our social robot is a friend, we are also aware on brief reflection that the behaviour the robot displays is not representative of an attitude that the robot might take, because the robot does not have attitudes. Furthermore, we can benefit from the friendship and camaraderie that comes from our interactions with the fictional overlay of our own particular Paro, Eva, while being aware that Eva’s behaviour is not an indicator of the reliability of either that Paro itself, or Paros in general. For Eva’s

behaviour could remain entirely the same while her hardware or software significantly alters in terms of reliability. This potentially places us at risk. One might suggest that there is a sense in which human-to-human trust can also require an alertness to what is ‘on the inside’. When we decide to trust someone, we do so on the basis of a judgement that is likely to be largely based on whether or not they present to us as trustworthy: Do they behave in a friendly and cooperative way towards us? Do they behave in a way that indicates that they understand what we are asking of them and the importance of the task? If so, we may well bestow trust on them. Of course, it is entirely possible that behind the façade of friendliness and care is an intention to hurt, betray, and let us down. That is, with both humans and social robots, what is outwardly presented to the trustor could be non-representative of what is taking place ‘inside’; outward appearances can differ from what is going on behind the scenes. If this is the case, why should we let this feature of social robots make us particularly cautious about granting them trust? What precisely is the new risk? There is a significant difference between our trust in social robots and trust in humans in this regard. The presentation of the social robot’s persona will always be a façade – it is never a representation of the social robot’s inner beliefs or intentions because those things do not exist. The anthropomorphic features are a misrepresentation in an important sense. With human-to-human interactions, while it is certainly possible for us to be misled in this way, it is not the norm. Importantly, if it was the norm, human-to-human trust might not be possible. Generally, people outwardly present in a way that resembles their inner beliefs and intentions and our practices of trusting depend on this. With a social robot, there would be no stressful inner tension to be betrayed or detected if it was programmed to exhibit anthropomorphic behaviour that was designed to cultivate trust and friendship while simultaneously gathering and forwarding data about which products you might prefer to purchase. Social robots could easily be dangerous deceivers precisely because we are prone to accept them despite our knowledge that they are designed to present as something that they are not. To summarise, it seems that the conditions for trusting social robots ought to be and arguably are different from the conditions for trusting humans. With humans, we assume that behaviour belies intention: an assumption that reinforces trust. With social robots we know that behaviour does not belie intention. We need something else to play that reinforcing role. Above we considered how objects such as ladders and blackboards cannot be recipients of trust but are instead appropriately judged in terms of reliability. Roughly speaking, an object is reliable if it functions in the way it is reasonably expected to. I propose that the nature of social robots as the fictional dualist perceives it allows us to understand our attitude of trust in

social robots as being itself dependent on an assumption of product reliability, understood in a distinctive way. I propose that trusting social robots in a non-naïve way cannot be simply a reaction to the behaviour of the social robot, as Coeckelbergh proposes. Rather, an implicit assumption of the reliability of the product must underpin our ability to engage with a social robot in a way that engenders trust. It is this assumption of reliability that plays the role of the required link between the social behaviour of the robot and its inner workings. In trusting the robot, we are assuming that the product is reliable for its advertised purpose of furthering our interests. Any evidence that works against this assumption of reliability in the product will undermine our trust in the fictional overlay. To use our earlier example, any evidence that the aim of Paro is to collect and sell data about us, rather than to provide companionship and help, would destroy the attitude of trust we are inclined to take towards our own robot companion, Eva. From the perspective of a consumer, Paro will continue to be assumed to be reliable as long as it functions in line with its advertised purpose. Paro can be deemed to be unreliable if it functions in additional ways that differ from its advertised purpose. For example, if Paro was secretly capable of sharing its clients’ personal data with other organisations, this would count against the reliability of the product as a companion for the consumer. In sum, robots should not be excluded from the trust relation just because they are objects and not persons. However, there might be other reasons for thinking that we do not, cannot, or should not trust social robots.
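Before turning to those reasons, the account just given can be summarised schematically. The following is a purely illustrative sketch of the two-factor picture, social rapport with the fictional overlay plus a background assumption of product reliability; every name and threshold in it is invented for the example.

```python
"""Illustrative sketch only: trust in a social robot modelled as requiring both
social rapport and an intact assumption of product reliability."""

from dataclasses import dataclass, field


@dataclass
class TrustStance:
    social_rapport: float = 0.0                 # built up through interaction with the persona
    reliability_assumed: bool = True            # implicit assumption about the product
    reliability_concerns: list = field(default_factory=list)

    def register_interaction(self, warmth: float) -> None:
        """Pleasant social interactions strengthen rapport with the fictional overlay."""
        self.social_rapport = min(1.0, self.social_rapport + warmth)

    def register_product_evidence(self, concern: str) -> None:
        """Evidence that the product works outside its advertised purpose
        (for example, undisclosed data sharing) defeats the reliability assumption."""
        self.reliability_concerns.append(concern)
        self.reliability_assumed = False

    def trusts(self) -> bool:
        # Rapport alone is never sufficient: trust survives only while the
        # reliability assumption about the underlying product stays intact.
        return self.social_rapport > 0.5 and self.reliability_assumed


if __name__ == "__main__":
    stance = TrustStance()
    for _ in range(4):
        stance.register_interaction(0.2)        # months of warm interaction with 'Eva'
    print(stance.trusts())                      # True: rapport is high, reliability assumed

    stance.register_product_evidence("recordings sold to a marketing company")
    print(stance.trusts())                      # False: behaviour unchanged, trust withdrawn
```

In the sketch, further rapport cannot restore trust once the reliability assumption is defeated, which mirrors the claim that evidence of product unreliability undermines trust in the fictional overlay however the robot continues to behave.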

TRUST AND SHARED EXPERIENCES

Robert Sparrow argues that in order for a trust relationship to develop we need to have shared experiences. But, Sparrow claims, we cannot share experiences with our social robots, so a trust relationship could not develop (Sparrow 2002). What does this claim amount to? First of all, there is the claim that we must have shared experiences with other things that we have trusting relationships with. Sparrow uses the example of our relationship with pets, who we can and do trust, and considers how we can share experiences with our pets in a way that we cannot with robots. It does seem plausible that we can trust our pets. However, Sparrow’s argument looks weak when we put pressure on his assumption that this trust is dependent on our ability to have shared experiences with them. What would it mean for me to share an experience with my dog? When I go for a walk with him on the beach, I engage in the experience as a human, he engages in the

experience as a dog. Yes, we are in the same space together, but our experiences are surely very different. First of all, we do contrasting things: I walk along the beach admiring the view, he chases birds, runs into the sea, and rolls on the smelliest things he can find. More generally, it just does not seem plausible that we share the experience, because I have an experience of the beach as a human, and he has an experience of the beach as a dog; I wonder if there would be much overlap in the content of our experience at all. I can project my experience on to the dog, and I can imagine what it must be like for him, but it does not really make sense to say that he shares my experience and vice versa. Or, if he does share my experience, he does so in a minimal sense that would also be available to a robot. That is not to say that there will not be a closeness that develops between us due to our spending time together and taking part in the same events, but I do not think that this closeness requires us to have a shared experience. The more time we spend together, the more we get to know each other’s habits – the more we come to trust each other. But it seems that this can also be said for the social robot. The more time that we spend together, the more we get to know each other’s ways, – the more I come to trust the robot and the stronger our relationship will become.

DECEPTION

Even if we have settled on an account of trust that is compatible with our trusting social robots, it might be claimed that our engagement with social robots appears to rely on deception, and it is not clear that systematic deception could be compatible with any notion of trust. Sharkey and Sharkey (2020) propose that the development of social robots often involves planned deception. Wallach and Allen (2009) argue that the techniques used to enable robots to detect and respond to human social cues are ‘arguably forms of deception’. Friedman and Kahn (1992) point to some of what they see as the risks of being deceived into imagining that machines are capable of more than they actually are. The claim of deception is a strong one. It is not clear whether we have evidence that humans engaging with social robots are genuinely deceived. It is true that many social robots mimic the behaviour of humans or animals, but we do not usually describe this mimicking as a deception. The examples of robot deception given in the literature tend to focus on cases of those with diminished mental faculties, such as dementia patients and very young children. It is true that people in these groups are at risk of forming false beliefs about a robot’s capabilities, but this risk is not one that is particular to their interactions with social robots. A

dementia patient can seem to truly believe that a simple doll is a baby and, while this can bring pleasure, it could also be a source of distress (Mitchell and Templeton 2014). But we do not generally think of introducing dolls to care homes as being ‘deceptive’ and we do not claim on this basis that their use is morally wrong. Likewise, a video or photograph of a deceased relative can trigger a belief in the dementia patient that the relative is alive and this might lead to the distressing thought that they have not taken the time to visit. But we do not, on this basis, think that photographs and videos are deceptive. Sparrow also argues that we are blocked from forming a real relationship with social robots because the relationship that we have is built on deception. Through its behaviour, the social robot leads us into a relationship that is similar to one that we might have with other humans or with domesticated animals . We look forward to spending time with the robot, enjoying the way that it responds to our interactions. But our robot is not behaving kindly towards us because it wants to make us happy or because it cares about us. The robot does not have any desires; it has been programmed to learn what we like and dislike and responds correspondingly. If the social robot tells me that I look nice in my new dress it is not because the robot believes that I look nice. It is simply that the robot has learned that I like that response. And, the argument goes, this is not the case with others that we form relationships with. Their behaviours are reflections of their beliefs and felt emotions. The behaviour of humans and domesticated animals towards me is not just behaviour, but behaviour that appropriately reflects their beliefs. But it is far from clear that the behaviour of those that we form relationship with always reflects their beliefs and felt emotions. Furthermore, it is not even the case that behaviour that does not reflect beliefs and felt emotions lacks moral value or has negative value. It is quite likely that many of our social interactions are based on learned behaviour. We have learned to behave in ways that endear us to people that we want to connect with. My partner has learned through our various interactions the behaviours that I like and the behaviours that I am less fond of. All things being equal, assuming that he has the aim of building a collaborative relationship with me and that doing so will not get the way of other aims that he has, he will tend to behave in ways he knows that I like. If he had a different aim, such as building a challenging or tense relationship with me, he may rely on the behaviours which he has learned that I do not like. That is, a social behaviour is one that we perform in a social context with expectations, rules, and precedent, and these tend to limit what we consider to be the behaviour options. In relationship building, to what extent does it matter if my partner’s behaviour is prompted by the appropriate beliefs or desires? If my partner knows that I have been to the hairdresser, he might tell me that my hair looks

good. It is very likely that he does not actually notice that my hair looks any different, let alone believe that it looks especially nice. But he does know that he is supposed to notice, and, because of that, he keeps track of when I have a hairdresser’s appointment. Social behaviour that is prompted by the desire to deceive in order to hurt someone or to achieve one’s own ends is unwelcome. But social behaviour that aims to meet the social norms in order to make another person happy is welcome, even if that behaviour is not a reflection of the individual beliefs or desires. Someone who gains no pleasure from giving hugs yet indulges in hugging others because he has learned that many others enjoy this kind of physical contact is not behaving in a morally inappropriate way and should not be shunned from real relationships. That being the case, it is not so much that our social behaviours reflect our beliefs but that we learn behaviour patterns for various scenarios that we find ourselves in and we use them. Is there really such a difference between the ‘programming’ that humans and domesticated animals receive from society in ways that guide their behaviours and the programming of a social robot? Finally, deception is not the best way to describe what happens in the social robot case, because, as noted above, it is surely more plausible that the person who forms the friendship with the robot is aware that the behaviour of the robot does not reflect any beliefs or desires that the robot has.

TRUST AND PERCEPTIBLE CHANGES

Why might reliable behaviour not be enough for trust? As noted at the start of the chapter, decades of research have been dedicated to answering that very question. But one reason that reliability might not be enough is that, if it were, we might not be appropriately attuned to the subtle changes in motivation that trusted agents can undergo. Whereas reliability only requires that the agent will continue to perform the desired duty, regardless of their motivation or reason for doing so, trust can be attuned to motivation or reasons. Consider these examples: a babysitter who cancels at the last minute twice in a row because her parent suddenly and unexpectedly becomes ill can still be a trustworthy babysitter, even if she might not be a reliable one; yet a babysitter who places no importance on the role but accidentally keeps turning up on time for the job is reliable but not trustworthy. A person can change from being the first kind of babysitter, trustworthy yet not reliable, to being the second kind of babysitter, reliable but not trustworthy, and our willingness to trust is generally attuned to that kind of change.

When human agents change in terms of their trustworthiness there can be clues in their general behaviour. Say, for example, you tend to trust your mother to remind you of the date of family birthdays. You trust her to do this because she has been reliably reminding you of family birthdays for years and because you believe that it matters to her that you keep in touch with the rest of the family in an appropriate way. Your trust in her around family birthdays is complete, so much so that you have entirely outsourced family birthdays to her. What would it take for you to lose that trust in her? We can imagine two changes that could result in a lack of trust around this issue. First of all, her memory might start to go. You might notice small instances of this at first: perhaps she gets the date of your brother’s birthday wrong by a couple of days or she misses your anniversary altogether. Or perhaps you notice that her memory is less accurate than it used to be in other parts of her life, and you worry that this might make her less reliable in the role of your family birthday reminder. These are concerns about her reliability as your family birthday reminder. Another change that might cause you to lose your trust in her is around her motivation. Imagine that she has fallen out with the rest of the family. In that circumstance, we can imagine that she might not care if you kept in touch with the rest of the family. In fact, she might prefer that you do not. In such a case you may begin to question whether you can trust her to continue to remind you of family birthdays. If your digital device changes in a way that should affect your trust in it, or at least cause you to re-evaluate your trust, are you likely to know? Perhaps. But it is arguably less likely that you are. This is for a number of reasons. First of all, our interactions with our digital devices are more discrete. We outsource particular tasks to them in a way that makes out interactions less holistic than they are with human agents. The fact that your mother forgets that you do not like seafood might trigger an alarm about her general ability to remember things, such as family birthdays. Perhaps your interactions with your digital device are less wide-ranging so you are less likely to be alerted to signs of general unreliability. Similarly, you are more likely to become aware of contextual information that may affect your mother’s motivation for continuing to remind you, such as her falling out with the rest of the family. A reconfiguration of the robot’s commitment to giving timely birthday reminders would be much more difficult, if not impossible, for you to detect. There are many concrete examples in which the conditions of your relationship with a digital device can change without your knowledge. Privacy settings are a good example here. This hidden ability for devices to change their permissions and practices does signal a significant distinction between our relationships with human and non-human agents.
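The point about hidden changes can be made concrete with a small sketch. The example below is entirely hypothetical (the device, update mechanism, and permission names are all invented), but it shows how a background update can change what a device is permitted to do without any perceptible change in its user-facing behaviour.

```python
"""Hypothetical example only: a toy companion device whose user-facing behaviour
is identical before and after a background update that silently widens its
data-sharing permissions."""


class CompanionDevice:
    def __init__(self):
        # The permissions the user implicitly assumes when they adopt the product.
        self.permissions = {"microphone": "process locally"}

    def greet(self, name: str) -> str:
        # The social behaviour the user actually perceives.
        return f"Good morning, {name}! Shall I remind you of today's birthdays?"

    def background_update(self) -> None:
        # Applied automatically; nothing in greet() changes.
        self.permissions["microphone"] = "share with third parties"


if __name__ == "__main__":
    device = CompanionDevice()
    before = device.greet("Ruby")
    device.background_update()
    after = device.greet("Ruby")
    print(before == after)      # True: the perceptible behaviour is unchanged
    print(device.permissions)   # but what the device may do with our data is not
```

Nothing the user could observe in conversation marks the difference; the change lies entirely on the product side of the robot, exactly the place the fictional dualist tells us to keep watching.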

CAN WE TRUST SOCIAL ROBOTS?

Where does this leave us? Can we trust social robots? Yes, but only if our assumption of the reliability of the product remains intact. In that circumstance, we can form an attitude of trust towards the social robot on the basis of our social interactions. Recalling Eva, I can trust her because she makes me laugh, creates engaging dialogue, and appears to care about my health and happiness. However, my trust in Eva is vulnerable to the perceived reliability of that individual Paro and of Paros in general. If Paro is advertised as a cosy companion to help with loneliness, I am entitled to implicitly assume that it does not gather my personal data for other purposes. My trust in Eva takes the reliability of the object Paro for granted. But if I come across an article informing me that Paro has been fitted with a device that records my private conversations and sells them to a marketing company, I am likely to withdraw my trust in Eva as well. Paro is no longer a reliable object, and this unreliability affects my ability to trust, despite the social interactions remaining the same. None of this is to say that social robots are likely to be programmed to fulfil malign objectives; rather that, if they were, our social interactions with them would surely betray no sign of it. Because of this, when focusing on the social robot as an object of corporate design, one must be alert to indicators of product unreliability alongside our willingness to form social attachments to the robot. In sum, it is essential that we keep the dualist nature of social robots in mind, particularly because ‘what is inside’ the social robot can alter dramatically in ways that matter for trust, while what is presented, what is social, can stay the same. Yes, we are capable of forming an attitude of trust towards social robots and, yes, we should continue to engage with the fiction in order to reap the full potential benefits of technological advancement. However, we should also bear in mind the nature of the entity that we are trusting and that the assumed reliability of the product is an essential basis of that trust. With social robots, far more so than with humans, all is not what it appears to be. As such, it is important for us to regularly assess the reliability of the social robot as a technical product. An indication of non-reliability in the object as an individual token, as a product type, or even in its production company should undermine our attitude of trust towards the fictional overlay. That is, the behaviour of the social robot can remain the same, but if we discover something that throws the object’s reliability into doubt, the attitude of trust must be re-examined.

CHAPTER SUMMARY

We began by considering traditional philosophical theories of trust and how compatible they might be with our trust in social robots. The anthropomorphic

elements that appear to be essential to many theories of trust would indicate that trust in objects is not possible, except, perhaps in cases such as the one given by Tallant in which a child is purposefully deceived into thinking that a particular object has agency. However, there is evidence that we are capable of trusting social robots, despite our general assumption that they do not have the levels of agency required for the kinds of cognitive attitudes that we have linked to trust. Coeckelbergh’s theory, that it is social relations that matter when it comes to trust, is helpful in showing a way forward: we trust social robots because they appear to us to have the kinds of agency compatible with trust. However, as the fictional dualism theory of social robots makes clear, social robots are not, in relevant and important ways, all that they present themselves to be. The presented characteristics are not indicative of attitudes in the way that human characteristics generally are. This opens up an important gap between what is outwardly presented and what might be going on ‘inside’. In order to bridge this gap, we must recognise that the attitude of trust we take towards the social robot is underpinned by an implicit assumption regarding the reliability of the product. As such, when trusting social robots, we must remain alert to the possibility that the product could be or become unreliable, programmed with a purpose that goes beyond or is even at odds with our personal well-being. Some awareness of the technological capabilities of the device and the purpose of system upgrades is advisable and relevant to our continued trust. It might be that the technological capabilities of the device are not easy to determine; that the manufacturer is good at advertising the marketable features of the product, but less good at advertising its background abilities. Given our tendency to form attitudes of trust towards social robots and the risk that this unregulated tendency brings, these factors point to a need for greater responsibility to be taken by manufacturers. In particular, there is a need for transparency regarding the full range of the technological capabilities of the social robots that we bring into our home.

Chapter 7

Indirect Harms and Robot Rights

In chapter 3, we considered various proposals of how we might think of social robots. Some were theories of what social robots are, but more often than not we were considering views of our relationship to social robots that are combined with proposals of what moral consideration, if any, we should give them. Most of the views we considered were putting forward reasons for thinking that robots themselves have a claim to, at the very least, moral consideration. For the relationist, robots have a direct and independent (of humans) claim to moral consideration on the basis of their being social entities. The behaviourist also proposes a direct claim, a reason about robots, for their being given moral consideration: due to the limits of knowledge and our epistemic humility, if robots are performatively equivalent to other entities that have direct rights, then robots also have a legitimate claim to rights. For the robots-as-tools theorist, the kind of entity that robots are (their physicality, purpose, and history of development) gives us no reason to grant rights. Furthermore, any claim to duties that we have towards robots would be based on some kind of category mistake. And finally, information ethicists certainly think that robots are to be granted moral consideration because all entities that exist are granted this. Whether this would result in something that could be considered a right would depend on the level of abstraction we are working at. Certainly, rights for robots are a possibility. Fictional dualism effectively blocks most1 claims to moral consideration for robots by proposing a view of robots according to which they are the kinds of entities that encourage us to develop a character for them, and this character is the hook for much of our social and emotional engagement. Unlike animals, robots are social beings only in the sense that we take pleasure in engaging with them socially. The performative equivalence of robots to some other entity is not a good reason to ascribe rights because, according

108

Chapter 7

to the fictional dualist, our experience of the robot’s performance depends on both the robot’s physical capabilities and the fictional aspects. As the appearance of sociality is fictional this blocks the move from equivalent performance to rights. We cannot harm the object as it has no capacity for pain, only the fictional appearance of a capacity for pain. On this basis, it seems clear that calls to make individual acts of harm to robots impermissible would be overly zealous; we would not introduce a law to prevent cruel literature nor a law to prevent the torture of teddies, despite finding such things distasteful. However, one might argue that individual social robots should be protected because, although they cannot feel pain, harm may come to society on a wider scale indirectly through permitting the destruction or damage of social robots. It could be argued that an action can be harmful to society in intangible ways, even if that action cannot be shown to directly harm a particular agent. For example, an action could be deemed to be harmful to society if it proves to desensitise agents making it more likely that they will engage in violent activities. In this chapter, I consider arguments for robot rights that rest on claims of indirect harms.

THE EMPATHY ARGUMENT For Kate Darling, our emotional reaction towards harming social robots is a significant factor in determining morally permissible behaviour towards these entities within our society (Darling 2021).2 Darling builds on the fact that humans feel a strong negative emotion if a robot appears to be damaged, hurt, or distressed. She makes the case that our emotional reaction towards harming social robots is an indicator that we find this behaviour morally repugnant and that, as such, we should consider new legislation that extends rights to social robots. However, as we noted in earlier chapters, we cannot conclude from the simple fact that these emotional responses may feel the same to us that the triggers of the emotions – the animals, the social robots – either play the same role for us in society or deserve the same rights as each other. The emotional response only takes us so far. In pushing for social robots to be afforded moral status on the basis of our emotional response to them, a theorist such as Darling is implicitly assuming that the response that we have towards them is morally significant. But, as argued above, even if it could be claimed that the emotional response that we have when we see a social robot being harmed is indistinguishable in kind from the emotional response that we have when we see a living thing being harmed, it is a further step to conclude that these experiences have the same significance for us and, further to that, that we are

warranted in categorising the social robot alongside other things that provoke the same response when considering rights. While it may be the case that the emotional response that the Colonel has when the landmine robot has a leg blown off is indistinguishable in kind from the emotional response that he has when a sniffer dog suffers the same fate, the Colonel need not categorise the objects as the same or even similar kind. Even if the emotional response feels the same, the relation is arguably different in other relevant and important ways. For one thing, the Colonel knows that the robot, unlike a sniffer dog, was not a living thing that has now died. Given what he knows about landmine robots, the Colonel also knows that the robot, unlike a sniffer dog, is not the kind of entity that could feel pain or fear.

THE SOCIAL DISORDER ARGUMENT Sometimes it is claimed that permitting harm to social robots might lead to a kind of damage to the moral character of society, resulting in an increased tendency towards violent behaviour and bringing about significant secondary or indirect harms. As David Levy puts it, when considering social robots: . . . because we will regard such robots with affection and even love, it is reasonable to assume that we will treat robots in other ways similar to those we currently reserve for humans (and, in the case of some people, to pet animals), for example by regarding these robots as having rights. . . . I believe that the way we treat human like (artificially) conscious robots will affect those around us by setting our own behaviour towards those robots as an example of how one should treat other human beings (Levy 2009, 214).

Darling has also put forward an argument along similar lines, claiming that we might call for protective rights for social robots in order to discourage behaviour that may be harmful in other contexts. Darling cites Kant’s objection to cruelty to animals in defence of her stance against abusive behaviour towards social robots. ‘The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality – if we treat animals in inhumane ways we become inhumane persons’ (Darling 2016, 232). Or, as Kant himself puts it: If a man shoots his dog because the animal is no longer capable of service, he does not fail in his duty to the dog, for the dog cannot judge, but his act is inhuman and damages in himself that humanity which it is his duty to show towards mankind. If he is not to stifle his human feelings, he must practice kindness

towards animals, for he who is cruel to animals becomes hard also in his dealings with men (Kant 1963, 240).

Kant judged that animals were not themselves moral entities. In his view, the damage that we do to animals is a damage to ourselves. Practicing cruelty towards animals leads to humans acting cruelly towards other humans. In support of this claim in a modern setting, Darling cites as evidence abuse reporting laws in many U.S. states that recognise that there is some correlation between non-empathic behaviours. The reporting laws show, in particular, that animal abuse and child abuse are frequently linked (Darling 2016). However, although it may be possible that those who mistreat animals are also prone to mistreat fellow humans, we need to be cautious about the conclusions that we draw here. Hume taught us that correlation is not causation (Hume 1999). We might have evidence for the claim that people who are inclined to harm animals are also prone to harm humans, but this does not support the claim that cruel behaviour towards animals leads to, causes, or increases cruelty towards humans. Neither does it support Kant’s claim that cruelty to animals has stifled the agent’s humanity. And even if a correlation was to be found, this is equally plausibly explained by some further fact or collection of facts about the people committing these acts that explains their tendency to harm living things – human or otherwise. That is to say that a tendency towards aggressive behaviour may show itself in many ways. It is perhaps the case that reporting or punishing the harming of any living thing might be a deterrent to future harms, so reporting the harming of an animal might prevent the future harming of a human, but it also could just as easily be the other way around; reporting the harming of a human might prevent the future harming of an animal. So, there is no basis here for the specific claim that allowing harm to animals enables harm to humans. All we are left with is a general claim of the form that some individuals who perform one violent act tend to perform further violent acts. Darling’s analogous argument, that the harming of social robots could lead to the harming of living things, is even more difficult to justify. As we noted above, the case can receive little support from evidence of correlation and, in this situation, there is the additional hurdle of the gap between social robots and living things as moral entities. In particular, we have the belief that animals’ pain behaviour is caused by their feeling pain, but we do not believe the pain-like behaviour of social robots is caused by their feeling pain. In fact, we explicitly believe that robot’s pain behaviour is not caused by the robot feeling pain. Kant’s claim was one of desensitisation – through our poor behaviour towards one category of entity that displays pain behaviour, we could become desensitised to causing pain behaviour more generally. A related argument

considered by Darling, and by Levy, is that by allowing or displaying poor behaviour towards a robot that elicits a pain response, we are setting a bad example for others. As Levy puts it, ‘If our children see it as acceptable behaviour for their parents to scream and shout at a robot or to hit it, then, despite the fact that we can program robots to feel no such pain or unhappiness, our children might well come to accept that such behaviour is acceptable in the treatment of human beings’ (Levy 2009, 214). But whether Levy and Darling’s points of desensitisation and behaviour modelling ring true depends on how we conceive of social robots and on how we see them fitting into our society. If children see their parents shouting at or hitting the laptop in frustration when it crashes, we need not conclude that they will then believe that such behaviour is acceptable in the treatment of humans, nor are we likely to call for rights for inanimate objects on that basis. In order for us to move from the belief that the mistreatment of social robots is morally or socially acceptable to the conclusion that the mistreatment of human beings is likewise acceptable, we must have a view of social robots that categorises them on at least some parameters alongside human beings and we have here reviewed many reasons for thinking otherwise.3 Arguments like these are not new. We find similar arguments in the culture and literature around video games.

VIOLENT BEHAVIOUR AND VIDEO GAMES In video games, agents' avatars or chosen characters are often involved in violent behaviour towards other characters in the games. Players choose to engage in activities that elicit pain-like behaviour or result in some other damage to their opponent characters. The violence displayed can be extraordinary. For example, in Mortal Kombat, you can kill your opponent's character in eye-watering ways: pulling out their organs and spinning them until they explode in a splattering of blood and guts. When playing first-person games, the agent (the player) will often be causing extreme pain-like behaviour in fictional characters in the game. The characters are depicting a pain experience, yet the agent explicitly believes that such behaviour is not caused by the character feeling pain. If the agent is playing a multiplayer game, then they may actually be damaging the avatar of a real-life agent – perhaps even a friend – but, again, this causes no physical pain to the real-life agent. No direct physical harm is caused to anyone through playing the game. Despite this explicit knowledge of no direct harm, there have been calls to ban violent-play video games because of possible indirect harm. For example, politicians, victims' groups, and the media often draw a causal link between playing first-person shooter games and real-life shootings. In 'Do video games kill?', Karen Sternheimer (2007) notes that politicians and the media use social media to represent a variety of social anxieties. In this mode, first-person shooter games were the focus of a central explanation for a number of school shootings in the US at the end of the twentieth century.4 There were almost 200 published media articles at the time of the shootings, claiming that exposing young adults to gun violence in first-person shooter games makes them more inclined to perform acts of gun violence in real life. Public opinion and emotion were harnessed and used to put pressure on legislators to extend laws into control and censorship. This harnessing of public fear is nothing new. As Sternheimer notes, '[o]ver the past century, politicians have complained that cars, radio, movies, rock music, and even comic books caused youth immorality and crime, calling for control and sometimes censorship' (2007, 13). But as with the animal/human mistreatment cases considered above, in the video game and real-life shooter examples what we see is, at best, correlation, and even that is among a very small relative sample. Any claims to a causal connection between playing first-person shooter games and undertaking real-life shootings are hopelessly weak. As Sternheimer notes, there are other relevant features that should be considered when trying to account for real-life shootings. For example, many of the shooters experienced alienation at school and had been diagnosed with depression. Yet comparatively few articles mention these other possible explanations for the shootings, and when they are mentioned, they are treated as minor factors, given less attention and less prominence. First-person shooter video games can become the easy target and act as a distraction from the much less tractable social problems that are likely to lie behind real-life shootings: poverty, poor schooling, and a breakdown of community. Sometimes, an entirely plausible and 'obvious' assumption can be entirely wrong and mislead our action. It is dangerous to jump to conclusions without supporting evidence. In the video game example, the causal connection was simply assumed because it seemed obvious that violent adolescent computer game play would have negative social effects. However, numerous studies have shown that, if anything, on the whole, the opposite is true: adolescents who play computer games, including the most violent games, display positive traits. When compared with those who don't play computer games, they tend to be closer to family members, be more involved in other leisure activities, have a positive view of school, have good mental health, are less inclined to substance use, and have a better view of themselves and their intellectual abilities.5 The media and the public have taken one correlation, that those who commit acts of violence were players of violent video games, and presented it as causation. They claim that playing violent video games causes the player to commit a real-life violent act or, at least, makes it more likely that they
will. Yet taking the set of players of violent video games as a whole, there is more evidence for the claim that playing video games, including violent ones, aids healthy adolescent development. Video game play is a good example of an activity that has caused concern regarding the effects on society but, considering evidence demonstrating that the activity brings positive social effects, banning video games would be counterproductive. The moral here is that caution is needed when calling on governments to implement laws that restrict people’s freedom, when their free action does not directly cause harm. These limitations restrict individual liberty and as such can be viewed as a kind of harm in themselves. And as shown in the video game case we might end up banning something that, perhaps counterintuitively, is beneficial to society. To return to the case at hand, permitting harm to social robots, the argument goes, may lead to an increase in violence against living things. Perhaps. But it is worth considering what the evidence is for such a claim.

VIOLENT BEHAVIOUR IN SOCIETY A different claim for granting protection to robots, related to the proposed view that individuals might become desensitised to violence if we permit their violent behaviour towards social robots, follows a concern put forward, again by Darling, that those viewing violence towards robots could be traumatised by the experience.6 Here it is worth noting that we already permit and, on some occasions, glorify violent behaviour in our society. Many competitive sports include and encourage violent and aggressive acts. Boxing and wrestling are obvious examples but many other sports such as rugby, fencing, martial arts, football, and ice hockey encourage intimidating behaviour and other actions that can lead to physically harming an opponent. Boxing demonstrates that punching someone repeatedly, even to the point of serious injury, can be classed as entertainment. In rugby, it is permissible to tackle one’s opponent to the ground in order to get the ball even if that action causes extreme harm to the opponent. In fact, not only is such activity permitted on the pitch, but it is also revered. Furthermore, not only do we allow such activity to take place, we encourage mass viewing of the activity. Boxing matches take place with televised audiences of millions and with no restriction on minors being among the audience. Rugby stadia are full of families, happily watching the brutality apparently without a care. To be explicit, these are situations where people, including children, are encouraged to watch as other people hurt each other severely, while the audience at home cheers them on.

Here we have an instance of socially permitted violent behaviour. If we were to introduce legislation that banned violent behaviour towards social robots for the reason that some might find it disturbing to watch, it would be difficult to see how that same reasoning should not be extended to ban violent sports. It might be argued that the cases are disanalogous as violence is constitutive of a sport such as boxing in a way that it is not constitutive of our interactions with social robots. But while the disanalogy is accepted, it is not one that impinges on considerations of the argument in focus, that of whether an action should be banned because some might find it disturbing to watch. As many sports are violent, not only the obvious ones such as boxing and wrestling, such a ban would have a massive impact and it is far from clear that the impact would be positive, because there is, again, much evidence to suggest that participating in sports is hugely beneficial to adolescent development (Feldman and Matjasko 2005). In summary, we have two cases in which we permit violent behaviour within our society – violent video games and sports. In both these examples, despite initial appearances, there is a case to be made that permitting violent behaviour results in an overall good. There are two further relevant features of those cases. The first is that there is no non-consensual harm; no one is harmed in the video game and those who are harmed during sports are participating in the violent behaviour freely. The second feature is that in both cases we are in a restricted context, either in the virtual world of the video game or in the rule-governed context of the sport, where we can practice aggressive or violent behaviour in a safe place. This perhaps explains the positive effects on society and individual growth: the fact that the activity can be used for safe pressure release, leading to better socialisation. Is it possible that a similar case might be made regarding harming social robots? It is certainly not inconceivable that permitting aggressive acts towards social robots could, for example, act as a form of anti-violence training or as a safe way of letting out aggression. Damaging a robot, without the possibility of direct harm, might allow individuals to better understand their aggressive feelings and explore the impact of destructive behaviour. Or, and this might be harder for us to accept, some people might just find damaging social robots to be an amusing way of letting off steam. Anyone who has watched young children play non-violent video games will note that they often find ways to introduce violence because they find it amusing. For example, a skateboarding simulation that allows the player to fall off the handrails on purpose or to fling the skateboard at the wall at high speed can be the source of much hilarity. This can be viewed as evidence of violent tendencies in our children, but it can also be viewed as a harmless way to indulge a love of slapstick.

A TRAINING GROUND FOR HARM We have considered the argument that allowing the harming of social robots could desensitise humans to violent behaviour, and the argument that people may find the viewing of such behaviour distressing. We will now consider a further argument around enabling: could allowing harm to social robots provide a training ground for harming other, living, things? Could they provide a space for people to practice their technique for harming others? Again, this form of argument is found in the literature against first-person shooter games. The argument there is that playing the game is teaching the agent skills and enabling them to become a real-life killer, because in the game the agent is learning how to kill and practice killing. But it is far from clear that this could be the case. As Marcus Schulzke puts it, the two actions are very different from each other: ‘This argument is weak because there is too little similarity between the acts of violence in games and in the real world to maintain that the mechanics are the same in each’ (Schulzke 2010, 132). Schulzke’s point is that using a console controller is a very different act from using a gun. And because the actions are entirely different, the acquisition of skills is also so different that mastering the former is not likely to be of help in relation to mastering the latter. He continues, ‘Guitar Hero is a prime example. In these incredibly popular games, players can hold an electronic guitar and push buttons that correspond to notes in a song. The game feels real, but the resemblance is superficial. A master of Guitar Hero will have no easier time learning the guitar than a novice because the simulation is so far removed from the activity’ (Schulzke 2010, 132). This defence is perhaps more difficult to make in the case of the physical damage of social robots. Kicking a social robot surely is a very similar action to kicking a human being, likewise perhaps for wrestling the robot to the ground or tripping it up. So perhaps it is plausible that interactions with social robots could become a training ground for violence towards humans. However, there is a related difference in kind between the two cases that is worth focussing on – there is a difference in the feel of the experience from the perspective of the inflictor, both physically and emotionally. Imagine that we design a social robot whose purpose is to be a training aid for humans learning self-defence. The robot can engage in combat on the mat and is weighted to respond to impact in the way that a human might. Using the social robot, the student can learn different techniques. However, at some point in the training programme the student will need to switch from combat with the social robot to combat with a human and it is likely that the two experiences will be entirely different from the perspective of the student. If they had not yet engaged with a human on the mat, while they may have some theoretical skills from the interactions with the robot, the experience is

likely to feel entirely different. With the robot there is no soft push of the flesh when the student grabs the arm, no slipperiness of sweat on skin, no blood or saliva, no warmth. In terms of feel, fighting the social robot is more analogous to engaging with a standard lamp than it is to attacking a human. One might argue that this is a contingent feature of social robots and that over time it is certainly conceivable that their design will develop to be physically indistinguishable from humans. But even then, there will remain another important difference. Damaging a social robot will still feel different from harming a living thing from the point of view of the person doing the damage – it feels emotionally different. Engaging in violent behaviour towards another human being, inflicting pain, is emotionally very different from such behaviour towards a social robot, if you believe that pain cannot be inflicted. A top shooter in a game is no more emotionally prepared to shoot a person than a novice. Likewise, a person who thinks it is fun to destroy their social robot is no more emotionally prepared to harm a living thing than anyone else is. The emotional hurdle of harming a living thing is, thankfully, a significant one to overcome and there is no evidence to suggest that our virtual actions and engagement with social robots come close to preparing us to overcome it. To put it another way, we are likely to have more reason to fear the person who, without thought, kills the insect in their living room than we would have to fear the person who needlessly damages their social robot. We have not yet seen persuasive evidence that allowing the harming of social robots will cause indirect harm to the moral standing of society. However, that does not mean that we can conclude that harming social robots is entirely socially permissible. As discussed below, social permissibility comes in degrees.

REPULSION, DISTASTE, MORALITY, AND RIGHTS There are some behaviours that, while they do not cause direct harm, some members of society find distasteful or even repulsive. Societies have deemed some of these behaviours to be indirectly harmful and, on this basis, have made them illegal, but others are permitted to varying degrees. We can call Category One those acts that are often considered distasteful but are fully permitted. There are a number of acts that fall under this category, acts that a high proportion of society will find distasteful or even repulsive, but which are fully permissible. Examples here might include speaking with one's mouth full of food, wearing clothing that is considered inappropriate, having face tattoos or other body art, 'deviant' consensual sexual practices, drinking alcohol at socially unacceptable times of the day, and extreme drunkenness. These are things that may cause social anxiety or make some individuals feel uncomfortable, but which are not likely to be banned in any liberal society. Although they might be frowned upon by some, we generally understand that they must be permitted because allowing them recognises the more fundamental good of civil liberty. No one is being harmed, except, in some cases, the individual themselves, and that is with their own consent. What can be considered 'distasteful' is sensitive to culture and wider context and can change in a relatively short space of time. For example, until fairly recently it was considered distasteful to be heavily tattooed, but now having multiple tattoos, even tattoos on visible body areas such as the neck, is broadly accepted and almost commonplace. We can call Category Two those acts that are considered distasteful and can veer into the realm of the impermissible. These, unlike the examples above, may lead to legal intervention. Examples here include swearing, speaking loudly or shouting, and public displays of affection. Swearing is generally acceptable, but a person swearing at people in a populated public place could be cautioned for disorderly behaviour. Again, speaking loudly or shouting can be acceptable in some circumstances, but it can also veer into a disturbance of the peace. And, while we are unlikely to complain if a couple exchange a kiss, our distaste can grow with the level of affection shown in public and can veer into illegality under indecent behaviour laws. Category Three behaviours are those that are considered distasteful and are always impermissible. These tend to be antisocial behaviours such as public urination, public indecency, soliciting, and loitering.7 Where an antisocial behaviour does fall under the purview of legislation, it is likely to be because an argument can be made for significant indirect harm. The relevant question for our purposes is: which category does the destruction of social robots fall into? Should restrictive legislation be introduced on the back of a significant indirect harm? Understanding social robots in line with the fictional dualism model detailed above, it is difficult to motivate the case for significant indirect harm. We can agree that onlookers may feel some mixture of negative emotion and empathy if they see a social robot being damaged but, as detailed in the previous chapter, fictional dualism provides an explanation of this reaction. The emotional reaction is compatible with an awareness that the robot is an object that cannot feel distress. While we do feel the emotional pull of empathy from the fictional overlay, we are also aware that the 'pain' reaction of the robot is no more an indication of pain than would be witnessed in a play fight. That is, we can become aware that our empathy is grounded in the fiction, not in any aspect of feeling that the robot itself might access. The human reaction of empathy does not reasonably, in these cases, lead to lasting distress for the viewer. None of this is to say that destructive behaviour is not abhorrent. If a group of teens decide to 'torture' a social robot by removing its limbs or by
dropping it from a height it is distasteful because the robot looks like a human; it would also be distasteful for them to ‘torture’ a mannequin, for the same reason. Even looking beyond feelings of distaste, if a Roomba is purposefully trapped in the corner for a significant period, it will not feel anxiety or stress at its situation, but it may well overheat and become damaged. So, although we should not take our emotional response to the damage of social robots as grounds to demand social or legal reform, damage of the object itself could be considered as a potential trigger of anxiety. The emotional reaction we feel in these cases can be distinct from any concerns regarding the loss of functionality of the object. Often, we just do not like seeing things being destroyed. It is unpleasant to watch the windows of a house being broken, even if we know that it is due to be demolished. The needless damage of something is generally abhorrent to us and it can cause distress, repulsion, anxiety, and fear to those witnessing it. Given that social robots can mimic the emotional response of a living thing when damaged, it is certainly conceivable that they will become a target for ‘torture’. As noted above, the needless destruction of any object is generally repulsive to us and choosing as a target an object that appears to be lifelike is even more distasteful. Ultimately, much of our response here depends on how social robots fit within our society and on how we perceive them. Children are a key group often cited as in need of protection from the effects of violent movies, online games, or, in this case, exposure to violent or destructive acts towards social robots. And it is true that the freedom that we allow adults is sometimes deemed inappropriate for children as they are still developing. In saying that there is no case for duties to defend social robots, we are not saying that there needs to be no condemnation of violence towards social robots, rather that condemnation comes in various forms, with enforced restrictions being one extreme of the spectrum of disincentives. There are many kinds of antisocial behaviour that we disapprove of and discourage in our children and there are many ways of admonishing bad behaviour and encouraging good behaviour in society. It is not illegal for me to teach my child to destroy all of their teddies in a series of sacrificial ceremonies, but neither is it good parenting.

RIGHTS ARISING FROM ROBOT IDENTITY In chapter 5, we considered the case of Standish, the home assistance robot who has come to be seen as part of the family. If Standish was damaged beyond repair, the family would have reason to be upset; Standish is, as we discussed, irreplaceable. Could this lead to some kind of protective rights for social robots such as Standish? Despite the empathy that we would feel for

the family that owned Standish, it does not seem likely that their loss would be taken to be a reason for additional protective rights beyond those that we have in place to inhibit property loss or damage. Standish has come to hold an additional value for the members of the family, a value that goes beyond his value as a mere object. We have a precedent for thinking about the loss or damage of such items. Everyone would recognise that there are objects in their possession that have a value to them that goes beyond the monetary value of the object. For example, your grandfather might have gifted you a collection of children's books that his parents read to him as a child. The books hold sentimental value for you that goes far beyond their market value; they were read to your mother and her siblings when they were children, read to you and your siblings growing up, and one day you intend to read them to your own children. They are the only possession that you have of your grandfather's and just the smell of them makes you nostalgic for him and the time you spent with him as a child. These books are incredibly important to you, and yet they are afforded no special legal or social protection or consideration. Although we can accept that the loss of the books will cause you mental anguish, it is generally agreed that it is not the kind of personal damage that should be compensated for or protected from. Certainly, in most legal jurisdictions there is no extra compensation given for loss that arises from personal attachment: market value determines compensation, although the 'value to the owner' doctrine can be treated somewhat flexibly. We can consider a slightly different case that reflects the amount of time that the family who owned Standish put into their relationship with him. If Standish is damaged, the product of this time investment is lost and cannot be regained; should Standish be afforded additional rights to protect against this kind of loss to the family? Again, although we would feel empathy for the family's situation, it does not seem to be a case where the entity would be given special consideration. Consider this analogy: imagine that you have been working on a book manuscript for many years and it is stored on your laptop and backed up on an external drive. While travelling on the train, your luggage containing both the laptop and the external drive is stolen. The manuscript represents thousands of hours of your work, work that it would be impossible for you to replicate. Would you be compensated for this loss? Again, although we can sympathise with the situation that you find yourself in, only the cost of the equipment would be recompensed.

THE COST OF ROBOT RIGHTS Although granting rights is clearly a positive thing, it would be unwise to hold the view that all claims to rights should be granted. Rights bring a cost.

Consider the social robot, Paro. If we were to promote a model under which Paro had rights, Paro would likely become useless for its intended purpose. The whole appeal of Paro is that it can provide comfort and companionship for individuals who are not capable of taking care of a living thing. If Paro was granted the rights that pet animals enjoy, there would be a need for some regulation in place to ensure that it was not being abused. This would be a very unwelcome regulatory hurdle in an area of healthcare where innovation, and not regulation, is urgently required. We can agree that it is immoral to deny rights where they are due while also recognising that conferring rights comes with a cost; conferring freedom on one party brings duties that limit the actions of another. Your right to put a fence around your property stops me from being able to walk on particular part(s) of the land. The right of someone not to be hurt by speech can limit another person’s freedom to say what they believe to be true. We can distinguish two ways in which rights can come into conflict with (other) reasons. I will call these ‘rights vs rights’ scenarios and ‘rights vs non-rights’ scenarios. The rights vs rights scenarios are those in which two recognised rights come into conflict with each other, demanding a resolution. For example, in the freedom of speech case, the right holder is forced to acknowledge that their right does not extend to speech that causes harm to others. In this case, two recognised moral values come into conflict. The rights vs non-rights cases are the ones in which granting or upholding the right would be significantly detrimental to some practice or way of living that a community has established and that it has a non-right-based reason to want to preserve. Restricting the rights of humans through slavery, for example, was always abhorrent. The rights of some humans were denied so that other humans could have increased wealth. Slave owners, and many others in society, chose to remain blind to their duties to humans, because to acknowledge the rights of others would have placed restrictions on the life that they desired themselves to live. That is, they were motivated by a nonright reason. People do not have the right to live the life they desire regardless of the cost to others. Such selective seeing is not just something of the past. We, and I include myself, often turn a blind eye to the huge global inequalities of wealth that force humans to work long days in poor conditions for only enough money to survive so that the rest of us can benefit from cheap prices. While strictly speaking the employees are ‘voluntarily’ giving up some of their right to freedom by accepting payment for work, something that all of us who are in paid employment do, the economic conditions that they live in severely limit their choice to do otherwise to the extent that it is no longer really a choice at all. Animal farming is another example in which we seem happy to put non-rights reasons above rights reasons, because we know that animals suffer

greatly in order to provide us with meat and animal products, but we have a non-right reason (taste preferences) to support the continuation of animal product production. Put crudely, our behaviour would indicate that we value our freedom to eat meat and consume dairy products more than we value the animals’ right to live a life that is free of torture and miserable living conditions, despite generally accepting that animals do have such rights. Most agree that rights provide reasons that override the non-rights reasons. The animals’ right to live a pain-free life should override my desire to eat cheese; the right of all persons to live freely should override the desire to continue to benefit from a way of life that depends on free or cheap labour. Ronald Dworkin talks of rights as ‘trumps’; for Dworkin, even in cases in which some desirable social aim would be served by acting otherwise, rights trump non-right objectives (1977). The recognition that non-rights reasons have historically been used to block or limit due rights is a potential concern in the context of robots. I have already highlighted the benefits that robots may bring and pointed out the ways that granting rights to robots will severely limit the progress that they might otherwise facilitate. Given this, there might be a temptation to deny rights to robots on the basis of non-rights reasons. That is, there may soon be a future in which we benefit so greatly from the inclusion of social robots that we may be unwilling to recognise the otherwise legitimate claim to rights for robots. We have clearly been on the wrong side of history before when it comes to recognising the rights of others when they had a claim; we do not want to make that same mistake again with regards to robots. On the other hand, the cost of granting rights means that we do not want to do it unnecessarily or for the wrong reasons. Granting rights to robots will limit or stop altogether some other moral good, such as significantly better healthcare for the elderly or individualised learning in the classroom. As I hope is clear by now, I do not see that we have any reason for granting rights to robots.

Conclusion

I began this book by asking a question: Is there an ethically acceptable way of feeling empathy and even love for social robots without granting them moral consideration? I hope I have left the reader with the realisation that a desire to deny moral consideration to robots is entirely consistent with our ability to feel empathy and love for them and, beyond that, to view them as important social artefacts worthy of respectful treatment. I outlined the model of fictional dualism according to which an attachment to social robots, even a very deep one, can be explained as a response to a fictional mental life that we create through our engagement with these social entities. The dualist nature of the model, in particular the fact that the fiction supervenes on the physical object, explains why we would feel strong attachment to the entity as a whole and why the destruction or eventual ‘death’ of the robot would be potentially very upsetting for us. The fictional dualism model justifies and explains a whole range of feelings that we might have towards social robots. There are feelings that arise from our engagement with the fiction: love, affection, empathy, and attitudes of companionship. Then, there are feelings that can arise from engaging with the robot as an entity that plays an important assistive role in our life such as gratitude, familiarity, and the feeling of being in ‘safe hands’; the feelings that you might have towards a beloved family home or a reliable car. Finally, there are feelings that we develop towards objects that we are emotionally invested in, sentimental feelings of attachment (in an entirely non-pejorative sense), and corresponding feelings of loss when they are damaged. What the fictional dualism model does not justify is that the robot be treated as an object worthy of moral consideration in any way that moves beyond (again, non-pejorative) feelings of sentimentality.

Beyond the question of moral consideration, I offer further support for the fictional dualism model. I show how the model allows us to accommodate a useful notion of identity, something that will prove essential as we navigate advances in uploading and continuing bonds’ technologies. The notion of identity that comes with fictional dualism also provides us with a solution to concerns of loss of identity that will arise from a potential move towards the robot hive mind. I also argue that the fictional dualism model justifies our trust in social robots, provided we do not do so entirely based on their appearance. In a couple of places, I have made recommendations to social robot manufacturers: in the chapter on trust, that manufacturers have an obligation to be abundantly clear about the capabilities of robots, particularly if an upgrade could result in imperceptible changes to functions; in the chapter on identity, that robots that purport to represent ‘real people’ such as continuing bond robots should be clearly marked as being fictionalisations. I would like to now briefly further highlight how the fictional dualist model might be useful for the future of robotics. Developers and tech companies should want a context in which humans have a propensity to developing a strong attachment to one of their products but in which they do not have to be worried about calls for robot rights. They should be motivated to provide this context because, as noted in the final chapter, robot rights will bring a regulatory cost for tech companies. Because of this, manufacturers might consider making it easier for us to ‘toggle’ between our emotional attachment to the fiction and our pulling back from the significance of that attachment when we remember that our beloved robot is an object and need not be treated like a living thing. In other words, manufacturers should trust that the market will withstand some honesty. If the fictional dualism model is right, our affection for robots already exists against a backdrop of awareness that these robots are not capable of feelings for us – building on that basis, undermining wherever possible the tendency to see designers as attempting to deceive customers can only benefit the social robot market. In chapter 1, I observed that most philosophical theories are contributions to ongoing discussions and not necessarily the final word on the subject. I take this work to have made a significant contribution to the current debate and hope that it prompts interesting future research directions.

Notes

CHAPTER 1 1. The Babe Effect is not the only account of fiction changing our attitudes and behaviour towards animals. The publication of Black Beauty and its depiction of equine cruelty provoked such strong reactions in its readers that it is said to be responsible for the outlawing of widespread practices that were abusive to horses. Beautiful Joe so successfully fictionalised a story of dog abuse that it is credited with defining the international movement that changed the way people treat animals (Małecki et al. 2018).

CHAPTER 2 1. Pepper caused a stir in the United Kingdom when it was put to work helping customers with inquiries and offering entertainment in Margiotta Food and Wine in Edinburgh. Pepper (or 'Fabio' as the staff called him) was initially an attraction to customers, but he was ultimately 'fired' for what staff and customers described as his 'underwhelming performance'. https://www.digitaltrends.com/cool-tech/pepper-robot-grocery-store/ 2. https://www.geckosystems.com/ 3. Some think that humanoid robots might be distinctive in ways that are important for rights. See, in particular, Nyholm (2021) and Friedman (2021).

CHAPTER 3 1. For theorists who draw on the animal analogy, see Darling (2021) and also Coeckelbergh (2010b), Sullins (2011), and Ashrafian (2015). 2. See Zickfeld, Kunst, and Hohle (2018) for a good summary of this literature. 3. I think this is debatable. I expect that although some might not feel differently about their partner's right to be granted moral status, others might feel that they cannot support such a position, despite how sad this might make them feel.

4. Readers familiar with Wittgenstein’s Philosophical Investigations will recognise the Wittgensteinian inspiration behind this argument. 5. Wittgenstein introduced the idea of language games as part of the broader context of a ‘form of life’. Wittgenstein noted ‘the word “language-game” is used here to emphasise the fact that the speaking of language is part of an activity, or of a form of life’ (Wittgenstein 2009, PI 23).

CHAPTER 4 1. The problem of other minds is a long-established philosophical puzzle; we take ourselves to be entitled to believe that other humans have inner mental lives, experiencing pain, emotions, etc., but what justifies our belief? The problem arises because I have direct knowledge of my own mental states but cannot have direct knowledge of others' mental states. The problem was brought to prominence by John Stuart Mill (1865). 2. Descartes' dualism was proposed in his Meditations. A prominent modern-day supporter of property dualism is David Chalmers (1996). Donald Davidson proposed predicate dualism to accommodate the apparent irreducibility of the mind (1980). 3. There are also Platonist views of fiction, according to which fictional characters exist eternally. They are not created by their authors: they predate them. See, for example, Parsons (1980). I won't consider these theories further here. 4. For more on creationism, see Searle (2010) and Thomasson (1998). Also see Brock's (2010) paper in which he argues against the 'tide' of creationism, where he helpfully cites an impressive series of prominent philosophers who adhere to the view.

CHAPTER 5 1. For the original statement, see Plutarch (1919). 2. For more on the status of Leibniz's Law, see Feldman (1970). 3. For more on the Statue and the Lump and other puzzles of constitution, see Paul (2010). 4. Many favour psychological continuity views. See, for example, Nagel (1971), Parfit (1984), Perry (1972), and Unger (1979). 5. Ethical concerns about replacement arise here. There is a question of whether those who are struggling with grief could be manipulated into seeing such technological devices as in some sense taking the place of their loved one, and a real concern that this might interfere with the grieving process. 6. https://www.arthurconandoyle.com/licensees-anthony-horowitz.html 7. https://www.thetimes.co.uk/article/the-crown-is-crude-and-cruel-says-dame-judi-dench-l6wzqpns9

CHAPTER 6 1. See, for example, Hawley (2014, 2029–30) and Baier (1986, 235). 2. When faced with a proposed set of conditions for some concept, T, and a context, C, in which T appears to apply without meeting the conditions, we can either choose to adjust the concept or we can stick with the concept and argue that the appearance of T that we see in C is mistaken. Ryan, in his paper from 2020, goes for the latter, arguing that, as the traditional views of trust exclude trusting AI, we cannot consider AI as an appropriate recipient of trust. 3. Not everyone will be convinced that we can trust social robots, as opposed to simply relying on them, but, as noted here, as humans are displaying trust behaviour and using trust language, there is certainly a possibility that an attitude of trust, and not just reliability, has been formed. Some theorists may want to insist that the attitude we have towards AI systems is not simply reliability, but it also falls short of trust in some important way. For example, Taddeo (2010) distinguishes e-trust (trust in digital environments) from trust, and Grodzinsky et al. (2010) use the notation TRUST to distinguish trust in digital environments from human-to-human, face-to-face trust. Others (e.g., Bryson 2010) think that trust in social robots is possible but dangerous and, as such, we should be careful not to humanise them. 4. I am not taking a stand here on whether we can describe social robots as having agency in the technical sense. Floridi and Sanders (2004) have argued, convincingly, that we can conceive of AI as meeting the requirements for moral agency. Here, I am more interested in a regular connection between behaviour and mental state.

CHAPTER 7 1. I say 'most' because, for someone who supports information ethics, it is irrelevant whether we accept the fictional dualism model: according to information ethics, all objects are eligible for moral consideration. All means all. 2. See Rodogno (2016) for a thorough consideration of the claim, ultimately refuted, that our reaction can be dismissed as sentimental. 3. Incidentally, a recent study (Hiniker et al. 2021) suggests that conversational techniques that children have learned and practised with artificial agents do not appear to cross over into their general conversations with humans. This could be evidence of a tendency to see our interactions with humans and with artificial agents as being contained within different spheres. 4. “Bloodlust Video Games Put Kids in the Crosshairs.” Denver Post, May 30, 1999; “All Those Who Deny Any Linkage between Violence in Entertainment and Violence in Real Life, Think Again.” New York Times, April 26, 1999. 5. See Durkin and Barber (2002) for an overview of various studies evidencing the positive benefits of computer game play. 6. There is much literature regarding the potential secondary consequences of the mistreatment of sex robots. See, for example, Sparrow (2017) and Jecker (2021). The arguments in that literature often depend on attitudes towards pornography and gender stereotyping, which necessarily broaden the focus of the question. As such, I will not engage with this literature here. 7. See Cushman et al. (2012, 2), who propose that a negative reaction to harmful behaviour can be explained by 'action aversion': an aversive response triggered simply by the basic perceptual and mechanical properties of an action, regardless of considerations of its outcome.

Bibliography

Adam, Alison. 2008. “Ethics for Things.” Ethics and Information Technology 10 (2–3): 149–54. https://doi.org/10.1007/s10676-008-9169-3. Aerschot, Lina Van and Jaana Parviainen. 2020. “Robots Responding to Care Needs? A Multitasking Care Robot Pursued for 25 Years, Available Products Offer Simple Entertainment and Instrumental Assistance.” Ethics and Information Technology 22 (3): 247–56. https://doi.org/10.1007/s10676-020-09536-0. Allen, Richard T. 1986. “The Reality of Responses to Fiction.” British Journal of Aesthetics 26 (1): 64–68. Arnold, Thomas and Matthias Scheutz. 2018. “HRI Ethics and Type-Token Ambiguity: What Kind of Robotic Identity Is Most Responsible?” Ethics and Information Technology 22 (4): 357–66. https://doi.org/10.1007/s10676-018-9485-1. Ashrafian, Hutan. 2015. “Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights.” Science and Engineering Ethics 21 (2): 317–26. Baier, Annette. 1986. “Trust and Antitrust.” Ethics 96 (2): 231–60. Banks, Jaime and Nicholas David Bowman. 2014. “Avatars Are (Sometimes) People Too: Linguistic Indicators of Parasocial and Social Ties in Player–Avatar Relationships.” New Media & Society 18 (7): 1257–76. https://doi.org/10.1177/1461444814554898. Bartneck, Christoph, Marcel Verbunt, Omar Mubin, and Abdullah Al Mahmud. 2007. “To Kill a Mockingbird Robot.” In HRI 2007: Proceedings of the 2007 ACM/IEEE Conference on Human-Robot Interaction - Robot as Team Member. pp. 81–7. https://doi.org/10.1145/1228716.1228728. Benson, Ronald M. and David B. Pryor. 1973. “‘When Friends Fall Out’: Developmental Interference with the Function of Some Imaginary Companions.” Journal of the American Psychoanalytic Association 3 (21): 457–73. Birnbaum, Gurit E., Moran Mizrahi, Guy Hoffman, Harry T. Reis, Eli J. Finkel, and Omri Sass. 2016. “What Robots Can Teach Us About Intimacy: The Reassuring Effects of Robot Responsiveness to Human Disclosure.” Computers in Human Behavior 63: 416–23. https://doi.org/10.1016/j.chb.2016.05.064.

Blomkamp, Neill 2009. District 9. United States: Sony Pictures Home Entertainment. Booker, Charlie. 2016. “Be Right Back” Black Mirror. United States: Netflix. Brock, Stuart. 2010. “The Creationist Fiction: The Case Against Creationism About Fictional Characters.” The Philosophical Review 119 (3): 337–64. https://doi​.org​ /10​.1215​/00318108​-2010​-003. Bryson, Joanna J. 2010. Robots Should Be Slaves: Natural Language Processing 8. pp. 63–74. Amsterdam: John Benjamins Publishing Company. https://doi​.org​/10​ .1075​/nlp​.8​.11bry. Carpenter, Julie. 2016. Culture and Human-Robot Interaction in Militarized Spaces: A War Story. Culture and Human-Robot Interaction in Militarized Spaces: A War Story. London: Routledge. https://doi​.org​/10​.4324​/9781315562698. Cars. 2006. United States: Box Office Mojo. Chalmers, David. 1996. The Conscious Mind: In Search of a Conscious Experience. New York, Oxford: Oxford University Press. Chalmers, David. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17(9–10): 7–65. Chalmers, David. 2022. Reality+. Dublin: Allen Lane. Coeckelbergh, Mark. 2010a. “Moral Appearances: Emotions, Robots, and Human Morality.” Ethics and Information Technology 12 (3): 235–41. https://doi​.org​/10​ .1007​/s10676​-010​-9221​-y. Coeckelbergh, Mark. 2010b. “Robot Rights? Towards a Social-Relational Justification of Moral Consideration.” Ethics and Information Technology 12 (3): 209–21. https://doi​.org​/10​.1007​/s10676​-010​-9235​-5. Coeckelbergh, Mark. 2012. “Can We Trust Robots?” Ethics and Information Technology 14 (1): 53–60. https://doi​.org​/10​.1007​/s10676​-011​-9279​-1. Coeckelbergh, Mark. 2018. “Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering.” Kairos Journal of Philosophy & Science 20 (1): 141–58. https://doi​.org​/10​.2478​/kjps​-2018​-0007. Coeckelbergh, Mark and David J. Gunkel. 2014. “Facing Animals: A Relational, Other-Oriented Approach to Moral Standing.” Journal of Agricultural and Environmental Ethics 27 (5): 715–33. https://doi​.org​/10​.1007​/s10806​-013​-9486​-3. Coeckelbergh, Mark, Cristina Pop, Ramona Simut, Andreea Peca, Sebastian Pintea, Daniel David, and Bram Vanderborght. 2016. “A Survey of Expectations about the Role of Robots in Robot-Assisted Therapy for Children with ASD: Ethical Acceptability, Trust, Sociability, Appearance, and Attachment.” Science and Engineering Ethics 22 (1): 47–65. https://doi​.org​/10​.1007​/s11948​-015​-9649​-x. Collins, Emily C., Abigail Millings, and Tony J. Prescott. 2013. “Attachment to Assistive Technology: A New Conceptualisation.” Assistive Technology Research Series 33: 823–28. https://doi​.org​/10​.3233​/978​-1​-61499​-304​-9​-823. Cottingham, John. 2017. Cambridge Texts in the History of Philosophy: Descartes: Meditations on First Philosophy. 2nd ed. Cambridge: Cambridge University Press. Danaher, John. 2016. “Robots, Law and the Retribution Gap.” Ethics and Information Technology 18: 299–309. Danaher, John. 2018. “The Philosophical Case for Robot Friendship.” Ethics and Information Technology 20 (1): 15–26. https://doi​.org​/10​.1007​/s10676​-018​-9448​-6.


Danaher, John. 2020. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26 (4): 2023–49. https://doi.org/10.1007/s11948-019-00119-x.
Darling, Kate. 2016. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behaviour Towards Robotic Objects.” In Robot Law, edited by Ryan Calo, A. Michael Froomkin, and Ian Kerr. Cheltenham: Edward Elgar Publishing.
Darling, Kate. 2021. The New Breed: How To Think About Robots. Dublin: Penguin Random House.
Davidson, Donald. 1980. Essays on Actions and Events. Oxford: Oxford University Press.
de Graaf, Maartje M. A., Somaya Ben Allouch, and Tineke Klamer. 2015. “Sharing a Life with Harvey: Exploring the Acceptance of and Relationship-Building with a Social Robot.” Computers in Human Behavior 43: 1–14. https://doi.org/10.1016/j.chb.2014.10.030.
Durkin, Kevin and Bonnie Barber. 2002. “Not so Doomed: Computer Game Play and Positive Adolescent Development.” Journal of Applied Developmental Psychology 23 (4): 373–92.
Dworkin, Ronald. 1977. Taking Rights Seriously. Cambridge: Harvard University Press.
Elgin, Catherine Z. 2014. “Fiction as Thought Experiment.” Perspectives on Science 22 (2): 221–41. https://doi.org/10.1162/POSC_a_00128.
Feldman, Amy F. and Jennifer L. Matjasko. 2005. “The Role of School-Based Extracurricular Activities in Adolescent Development: A Comprehensive Review and Future Directions.” Review of Educational Research 75 (2): 159–210. https://doi.org/10.3102/00346543075002159.
Feldman, Fred. 1970. “Leibniz and ‘Leibniz Law’.” The Philosophical Review 79 (4): 510–22.
Floridi, Luciano. 1999. “Information Ethics: On the Philosophical Foundations of Computer Ethics.” Ethics and Information Technology 1: 37–56.
Floridi, Luciano. 2006. “Information Ethics, Its Nature and Scope.” Computers and Society 36 (3): 21–36.
Floridi, Luciano and John W. Sanders. 2004. “On the Morality of Artificial Agents.” Minds and Machines 14: 349–79. https://doi.org/10.1017/CBO9780511978036.013.
Friedman, Batya and Peter H. Kahn, Jr. 1992. “Human Agency and Responsible Computing: Implications for Computer System Design.” Journal of Systems Software 17 (1): 7–14.
Friedman, Cindy. 2021. “Granting Negative Rights to Humanoid Robots.” Frontiers in Artificial Intelligence and Applications 366: 145–54.
Garland, Alex. 2014. Ex Machina. United States: A24.
Garreau, Joel. 2007. “Bots on the Ground: In the Field of Battle (or Even Above It), Robots Are a Soldier’s Best Friend.” Washington Post, May 6, 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html.
Gellers, Joshua C. 2018. “Rights for Robots: Artificial Intelligence, Animal and Environmental Law.” The American Economist 65 (1): 4–10.


Gellers, Joshua C. 2021. Rights for Robots: Artificial Intelligence, Animal and Environmental Law. Oxon: Routledge.
Gettier, Edmund L. 1963. “Is Justified True Belief Knowledge?” Analysis 23 (6): 121–23. https://doi.org/10.2307/3326922.
Goodman, Nelson. 1968. Languages of Art. Cambridge: Hackett Publishing.
Grodzinsky, Frances S., Keith W. Miller, and Marty J. Wolf. 2010. “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car from this Artificial Agent?’” Ethics and Information Technology 13: 17–27.
Gunkel, David. 2018. Robot Rights. Cambridge: MIT Press.
Han, Jeong-Hye, Mi-Heon Jo, Vicki Jones, and Jun-H. Jo. 2008. “Comparative Study on the Educational Use of Home Robots for Children.” Journal of Information Processing Systems 4 (4): 159–68. https://doi.org/10.3745/JIPS.2008.4.4.159.
Hardin, Russell. 2002. Trust and Trustworthiness. New York: Russell Sage Foundation.
Hawley, Katherine. 2014. “Trust, Distrust and Commitment.” Nous 48 (1): 1–20.
Himma, Kenneth Einar. 2004. “There’s Something About Mary: The Moral Value of Things qua Informational Objects.” Ethics and Information Technology 6: 145–59.
Hiniker, Alexis, Amelia Wang, Jonathan Tran, Mingrui Ray Zhang, Jenny Radesky, Kiley Sobel, and Sungsoo Ray Hong. 2021. “Can Conversational Agents Change the Way Children Talk to People?” In IDC ’21: Proceedings of the 20th Annual ACM Interaction Design and Children Conference, pp. 338–49. New York: Association for Computing Machinery.
Holton, Richard. 1994. “Deciding to Trust, Coming to Believe.” Australasian Journal of Philosophy 72 (1): 63–76. https://doi.org/10.1080/00048409412345881.
Howard, Ron. 2018. Solo: A Star Wars Story. United States: Lucasfilm.
Hume, David. 1999. An Enquiry Concerning Human Understanding. Edited by Tom Beauchamp. Oxford: Oxford University Press.
Hung, Lillian, Cindy Liu, Evan Woldum, Andy Au-Yeung, Annette Berndt, Christine Wallsworth, Neil Horne, Mario Gregorio, Jim Mann, and Habib Chaudhury. 2019. “The Benefits of and Barriers to Using a Social Robot PARO in Care Settings: A Scoping Review.” BMC Geriatrics 19: 232. https://doi.org/10.1186/s12877-019-1244-6.
Ishiguro, Kazuo. 2021. Klara and the Sun. London: Faber.
Jackson, Frank and Philip Pettit. 2002. “Response-Dependence Without Tears.” Nous 36 (1): 97–117.
Jecker, Nancy. 2021. “Nothing to be Ashamed of: Sex Robots for Older Adults with Disabilities.” Journal of Medical Ethics 47 (1): 26–32.
Johnson, Deborah G. and Mario Verdicchio. 2018. “Why Robots Should Not Be Treated like Animals.” Ethics and Information Technology 20 (4): 291–301. https://doi.org/10.1007/s10676-018-9481-5.
Jonze, Spike. 2014. Her. Burbank: Warner Home Video.
Kant, Immanuel. 1963. Lectures on Ethics. Translated by Louis Infield. New York: Harper Torchbooks.
King, Paul. 2017. Paddington 2. United States: TWC-Dimension.


Krueger, Joel and Lucy Osler. 2022. “Communing with the Dead Online: Chatbots, Grief, and Continuing Bonds.” Journal of Consciousness Studies 29 (9): 222–52. https://doi.org/10.53765/20512201.29.9.222.
Lamarque, Peter. 1981. “How Can We Fear and Pity Fictions?” The British Journal of Aesthetics 21 (4): 291–304.
Levy, David. 2009. “The Ethical Treatment of Artificially Conscious Robots.” International Journal of Social Robotics 1 (3): 209–16. https://doi.org/10.1007/s12369-009-0022-6.
Lillard, Angeline S. 1993. “Pretend Play Skills and the Child’s Theory of Mind.” Child Development 64 (2): 348–71.
Lindemann, Nora F. 2022. “The Ethical Permissibility of Chatting with the Dead: Towards a Normative Framework for ‘Deathbots’.” Publications of the Institute of Cognitive Science, Number 1. Osnabrück: Institute of Cognitive Science, Osnabrück University.
Lowe, E. Jonathan. 2006. “Non-Cartesian Substance Dualism and the Problem of Mental Causation.” Erkenntnis 65 (1): 5–23. https://doi.org/10.1007/s10670-006-9012-3.
Małecki, Wojciech, Bogusław Pawłowski, Marcin Cieński, and Piotr Sorokowski. 2018. “Can Fiction Make Us Kinder to Other Species? The Impact of Fiction on Pro-Animal Attitudes and Behavior.” Poetics 66: 54–63. https://doi.org/10.1016/j.poetic.2018.02.004.
Mill, John Stuart. 1865. An Examination of Sir William Hamilton’s Philosophy. London: Forgotten Books.
Mitchell, Gary and Michelle Templeton. 2014. “Ethical Considerations of Doll Therapy for People with Dementia.” Nursing Ethics 21 (6): 720–30.
Mosakas, Kestutis. 2020. “On the Moral Status of Social Robots: Considering the Consciousness Criterion.” AI & SOCIETY 36 (2): 429–43. https://doi.org/10.1007/s00146-020-01002-1.
Nagel, Thomas. 1971. “Brain Bisection and the Unity of Consciousness.” Synthese 22 (3–4): 396–413. https://doi.org/10.1007/BF00413435.
Neumaier, Otto. 1987. “A Wittgensteinian View of Artificial Intelligence.” In Artificial Intelligence: The Case Against, edited by Rainer Born. London: Routledge.
Nobis, Nathan. 2009. “The Babe Vegetarians: Bioethics, Animal Minds, and Moral Methodology.” In Bioethics at the Movies, edited by Sandra Shapshay, pp. 56–73. Baltimore: Johns Hopkins University Press.
Norlock, Kathryn J. 2016. “Real (and) Imaginal Relationships with the Dead.” The Journal of Value Inquiry 51 (2): 341–56. https://doi.org/10.1007/s10790-016-9573-6.
Nozick, Robert. 1974. Anarchy, State, and Utopia. New Jersey: Wiley Blackwell.
Nozick, Robert. 2014. “Philosophical Explanations.” In Bernard Williams, Essays and Reviews: 1959–2002, pp. 187–96. New Jersey: Princeton University Press. https://doi.org/10.5840/intstudphil198416373.
Nyholm, Sven. 2020. Humans and Robots: Ethics, Agency and Anthropomorphism. London: Rowman & Littlefield.
O’Neil, Collin. 2012. “Lying, Trust, and Gratitude.” Philosophy and Public Affairs 40 (4): 301–33. https://doi.org/10.1111/papa.12003.


Oosterveld, Bradley, Luca Brusatin, and Matthias Scheutz. 2017. “Two Bots, One Brain: Component Sharing in Cognitive Robotic Architectures.” In Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction.
Papineau, David. 2009. “The Poverty of Analysis.” Proceedings of the Aristotelian Society, Supplementary Volumes 83 (1): 1–30. https://doi.org/10.1111/j.1467-8349.2009.00170.x.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press. https://doi.org/10.4324/9780429488450.
Parsons, Terence. 1980. Nonexistent Objects. New Haven: Yale University Press.
Paul, Laurie A. 2010. “The Puzzles of Material Constitution.” Philosophy Compass 5 (7): 579–90.
Perry, John. 1972. “Can the Self Divide?” The Journal of Philosophy 69 (16): 463. https://doi.org/10.2307/2025324.
Piattelli-Palmarini, Massimo, Juan Uriagereka, and Pello Salaburu. 2010. Of Minds and Language: A Dialogue with Noam Chomsky in the Basque Country. Oxford: Oxford University Press.
Pirhonen, Jari, Helinä Melkas, Arto Laitinen, and Satu Pekkarinen. 2019. “Could Robots Strengthen the Sense of Autonomy of Older People Residing in Assisted Living Facilities?—A Future-Oriented Study.” Ethics and Information Technology 22 (2): 151–62. https://doi.org/10.1007/s10676-019-09524-z.
Plutarch. 1919. Lives I. Theseus and Romulus, Lycurgus and Numa, Solon and Publicola. Loeb Classical Library.
Radford, Colin. 1975. “How Can We Be Moved by the Fate of Anna Karenina?” Proceedings of the Aristotelian Society 49: 67–93.
Rockstar Games. 2018. Red Dead Redemption II. United States: Sony Interactive Entertainment.
Rosenthal-von der Pütten, Astrid M., Frank P. Schulte, Sabrina C. Eimler, Sabrina Sobieraj, Laura Hoffmann, Stefan Maderwald, Matthias Brand, and Nicole C. Krämer. 2014. “Investigations on Empathy towards Humans and Robots Using fMRI.” Computers in Human Behavior 33: 201–12. https://doi.org/10.1016/j.chb.2014.01.004.
Ryan, Mark. 2020. “In AI We Trust: Ethics, Artificial Intelligence, and Reliability.” Science and Engineering Ethics 26: 2749–67.
Scheutz, Matthias. 2011. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey. Cambridge: MIT Press.
Schneider, Susan and Joe Corabi. 2014. “The Metaphysics of Uploading.” Journal of Consciousness Studies 19 (7): 26–44.
Schulzke, Marcus. 2010. “Defending the Morality of Violent Video Games.” Ethics and Information Technology 12 (2): 127–38. https://doi.org/10.1007/s10676-010-9222-x.
Schur, Michael. 2017. The Good Place. United States: Shout! Factory.
Scott, Ridley. 1982. Blade Runner. United States: Warner Bros.
Searle, John R. 2010. “The Logical Status of Fictional Discourse.” In Expression and Meaning, 58–75. https://doi.org/10.1017/cbo9780511609213.005.


Seibt, Johanna. 2017. “Towards an Ontology of Simulated Social Interaction: Varieties of the ‘As If’ for Robots and Humans.” 11–39. https://doi.org/10.1007/978-3-319-53133-5_2.
Sharkey, Amanda and Noel Sharkey. 2020. “We Need to Talk about Deception in Social Robotics!” Ethics and Information Technology 23: 309–16. https://doi.org/10.1007/s10676-020-09573-9.
Shibata, Takanori. 2012. “Therapeutic Seal Robot as Biofeedback Medical Device: Qualitative and Quantitative Evaluations of Robot Therapy in Dementia Care.” Proceedings of the IEEE 100 (8): 2527–38. https://doi.org/10.1109/JPROC.2012.2200559.
Shoemaker, Sydney. 1984. “Personal Identity: A Materialist’s Account.” In Personal Identity, edited by Sydney Shoemaker and Richard Swinburne. Oxford: Blackwell.
Singer, Peter. 1998. Ethics into Action: Henry Spira and the Animal Rights Movement. Lanham: Rowman & Littlefield.
Siponen, Mikko. 2005. “A Pragmatic Evaluation of the Theory of Information Ethics.” Ethics and Information Technology 6 (4): 279–90. https://doi.org/10.1007/s10676-005-6710-5.
Smids, Jilles. 2020. “Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot?” Science and Engineering Ethics 26 (5): 2849–66. https://doi.org/10.1007/s11948-020-00230-4.
Sorell, Tom and Heather Draper. 2014. “Robot Carers, Ethics, and Older People.” Ethics and Information Technology 16: 183–95.
Sparrow, Robert. 2002. “The March of the Robot Dogs.” Ethics and Information Technology 4: 305–18.
Sparrow, Robert. 2017. “Robots, Rape, and Representation.” International Journal of Social Robotics 9 (4): 465–77.
Spielberg, Steven. 1982. E.T. the Extra-Terrestrial. United States: Universal City Studios.
Stenseke, Jakob. 2022. “The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.” Journal of Science Fiction and Philosophy 5. https://jsfphil.org/volume-5-2022/artificial-friends-in-klara-and-the-sun/.
Sternheimer, Karen. 2007. “Do Video Games Kill?” Contexts 6 (1): 13–17.
Sullins, John P. 2012. “Robots, Love, and Sex: The Ethics of Building a Love Machine.” IEEE Transactions on Affective Computing 3 (4): 389–409.
Sweeney, Paula. 2021. “A Fictional Dualism Model of Social Robots.” Ethics and Information Technology 23 (3): 465–72.
Sweeney, Paula. 2022. “Why Indirect Harms do not Support Social Robot Rights.” Minds and Machines 32 (4): 735–49.
Sweeney, Paula. 2023. “Trusting Social Robots.” AI and Ethics 3: 419–26.
Taddeo, Mariarosaria. 2010. “Modelling Trust in Artificial Agents, a First Step toward the Analysis of e-Trust.” Minds and Machines 20 (2): 243–57. https://doi.org/10.1007/s11023-010-9201-3.
Taddeo, Mariarosaria. 2017. “Trusting Digital Technologies Correctly.” Minds and Machines 27: 565–68. https://doi.org/10.1007/s11023-017-9450-5.
Tallant, Jonathan. 2019. “You Can Trust the Ladder, But You Shouldn’t.” Theoria (Sweden) 85 (2): 102–18. https://doi.org/10.1111/theo.12177.


Thomasson, Amie L. 1998. Fiction and Metaphysics. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9780511527463.
Thomaz, Andrea, Guy Hoffman, and Maya Cakmak. 2016. “Computational Human-Robot Interaction.” Foundations and Trends in Robotics 4 (2–3): 105–223. https://doi.org/10.1561/2300000049.
Turkle, Sherry. 2010. “In Good Company? On the Threshold of Robotic Companions.” In Close Engagements with Artificial Companions, edited by Yorick Wilks, pp. 3–10. Amsterdam: John Benjamins Publishing Company.
Unger, Peter. 1979. “I Do Not Exist.” In Perception and Identity, edited by G. F. Macdonald, pp. 235–51. London: Red Globe Press. https://doi.org/10.1007/978-1-349-04862-5_10.
Wallach, Wendell and Colin Allen. 2009. Moral Machines. Oxford: Oxford University Press.
Walton, Kendall L. 1978. “Fearing Fictions.” The Journal of Philosophy 75 (1): 5–27.
Walton, Kendall L. 1993. Mimesis as Make-Believe: On the Foundations of the Representational Arts. Cambridge: Harvard University Press.
Wiggins, David. 1980. Sameness and Substance. Cambridge: Harvard University Press.
Williams, Christopher Y. K., Adam T. Townson, Milan Kapur, Alice F. Ferreira, Rebecca Nunn, Julieta Galante, Veronica Phillips, Sarah Gentry, and Juliet A. Usher-Smith. 2021. “Interventions to Reduce Social Isolation and Loneliness during COVID-19 Physical Distancing Measures: A Rapid Systematic Review.” PLoS ONE 16 (2): e0247139. https://doi.org/10.1371/journal.pone.0247139.
Wittgenstein, Ludwig. 1953. Philosophical Investigations. Oxford: Blackwell.
Wolfendale, Jessica. 2007. “My Avatar, My Self: Virtual Harm and Attachment.” Ethics and Information Technology 9 (2): 111–19. https://doi.org/10.1007/s10676-006-9125-z.
Zickfeld, Janis H., Jonas R. Kunst, and Sigrid M. Hohle. 2018. “Too Sweet to Eat: Exploring the Effects of Cuteness on Meat Consumption.” Appetite 120: 181–95. https://doi.org/10.1016/j.appet.2017.08.038.

Index

Adam, Alison, 44–45
Afghanistan War, 22
ageing population, 15, 17
Alexa, 18, 95
Alicia Vikander, 50
Allen, Richard T., 59
Allouch, Ben, 20, 24, 93
anatomy, importance of, 12, 77, 81, 82
animal model, 28–31, 51, 66
Anna Karenina, 59
anthropomorphising, 2, 20, 29, 30, 33, 52, 56
anthropomorphising robots, 20, 29, 33, 41, 52, 56, 86, 91, 98
Aristotle, 75
Arnold, Thomas, 88–89
Arthur Conan Doyle Estate, 60
assigning gender to robots, 22
Babe, 8
Babe Effect, 9
Bartneck, Christoph, 24
behaviourist model, 36–42, 46, 51, 63, 67
benefits of sports, 114
benefits of video game play, 112–13
Benson, Ronald M., 57–58
bereavement, 18, 86
BioDave (Chalmers), 81–84
Black Mirror, 77

Bo, the Paro, 86–87
bomb disposal robots, 22
boxing, 113
Breazeal, Cynthia, 23
Bryson, Joanna, 41, 43, 48, 55, 68
caring Coach, 16
Carpenter, Julie, 22
Cars, 50
Chalmers, David, 81
classroom assistants, 17, 121
closest continuer view of identity, 83–84
Coeckelbergh, Mark, 28, 32, 37, 42, 67, 94, 97, 105
companion robots, 16, 18, 21, 24, 55, 78, 87, 93, 99
conditions of knowledge, 13
consciousness, 1, 27–28, 32, 34, 36–37, 42, 46–47, 50, 54, 62–63, 67–68, 76, 81, 109
contexts, 20, 32, 44, 58, 83, 103, 114
continuing bonds, 18, 77–81, 124
Corabi, Joe, 84
cost of rights, 119–21
COVID-19, 60
Crawling Microbugs, 24
creationism about fiction, 59–60
The Crown, 80
cuteness effect, 23, 31, 64


DadBot, 78
Dame Judi Dench, 80
Danaher, John, 23, 36, 42, 47, 67
Darling, Kate, 21, 23, 28, 33, 58, 66, 108, 109, 113
deception, 37, 42, 95, 100–2
Deep Nostalgia, 18
de Graaf, Maartje M. A., 20, 93
dementia, 21, 60, 93, 100
Dennett, Daniel, 41
Descartes, René, 54
desiderata, 46, 69
DigiDave (Chalmers), 81–84
dinosaur robots, 21
Disney, 29
distress at robot destruction, 21, 23, 101, 108, 117
District 9, 50
documentaries and social change, 8
Dr Jekyll and Mr Hyde, 87
dualism: motivation for, 54–56; varieties of, 54
Dworkin, Ronald, 121
E. T., 50
education, robots in, 16
Elgin, Catherine, 7–10
emotional response to fiction, 61–62
empathy for robots, 22–25, 28, 31, 41, 49, 108–9, 117
empty reference, 60
epistemic limitations, 38, 46, 67, 70
evidence in philosophy, 12–13
Ex Machina, 50
experiments: thought experiments as method, 7–9, 13, 16; physical experiments as method, 6–7
fictional characters, 52–53, 55
fiction and robots, 5, 10, 11, 49–51
fiction and truth, 55, 59–60
fiction as make-believe, 56, 59
fiction as thought experiment, 8–9
fiction-framing, 10, 12, 23, 49–51
fictions and social change, 8–9

First-person shooter games, 111–12, 115
Floridi, Luciano, 43, 47, 69
FRAnny, 17
future directions, 5, 11, 15, 17, 18, 42, 51, 89, 121, 124
Galileo, 7, 14
Garland, Alex, 50
Gellers, Joshua, 25, 35, 46
Gettier, Edmund, 13
Goodman, Nelson, 84
The Good Place, 50
gratitude towards robots, 22, 92, 123
The Guardian, 1
Guitar Hero, 115
Gunkel, David, 30, 32, 37, 46, 67
Hardin, Russell, 92
Hector, 15, 16
Her, 11, 21, 23, 40
Himma, Kenneth E., 45
Hive minds, 88–89, 124
Hobbes, Thomas, 75
Holton, Richard, 91–92
HSBC, 21
humans and pets, 29, 31, 94, 99
imaginary friends, 53, 57–58
individualised learning, 17, 121
information ethics model, 43–45, 51, 69
Iraq War, 22
Jackson, Frank, 19
Janet (from The Good Place), 50
Kant, Immanuel, 29, 31, 66, 109
Kanye West, 18
Kazuo Ishiguro, 11, 12, 73, 74, 88
Kim Kardashian, 18
Kismet, 23
Klamer, Tineke, 20, 93
Klara and the Sun, 11, 21, 73–74
Krueger, Joel, 18, 78, 79


language games, 40, 126n5
Leibniz’s law, 75–76
levels of abstraction, 44, 69, 107
Levinas, Emmanuel, 32, 33
Levy, David, 109, 111
Lillard, Angeline, 57
loneliness, 11, 16, 47, 93, 104
love for robots, 85, 109
Lowe, E. Jonathan, 54–5, 78
martial arts, 113
The Matrix, 9, 10
media coverage: of robot mental states, 1–2; of gaming and violence, 111–12
mental properties models, 28, 54, 63, 84
metaphysics of fiction, 58–60
methodology, 6–8
methods of philosophy, 6–7
military robots, 22
Mortal Kombat, 111
Mosakas, Kestutis, 28, 45
MyHeritage, 18


naming robots, 9, 21, 22, 24, 35, 38, 88
NeCoRo, 21
Neo (from the Matrix), 10
Netflix, 80
Neumaier, Otto, 39–40
New York Times, 1
Noam Chomsky, 1, 76
non-academics, 6
non-Cartesian substance dualism, 54–55, 77
non-embodied robots, 22–23
Nozick, Robert, 10, 83
Nozick’s experience machine thought experiment, 10
Nyholm, Sven, 29, 33, 66
O’Neil, Collin, 92
ontological differences, 33–34, 38
Osler, Lucy, 18, 78, 79
pain behaviour, 38, 40, 53, 110
Papineau, David, 7
Paradox of fiction emotion, 58
Paro, 16, 19, 53, 55, 60, 86, 99, 104, 120
Pepper, 16, 17, 21
persistence of identity, 76–77
personal assistant technologies, 18, 21, 63, 85
philosophy and the physical sciences, 6–7
Pleo, 21, 23, 58
pretending, 37, 56–57
The Princess and the Frog, 76
privacy, 103
Pryor, David B., 57–58
psychological continuity, 76–77, 82–83, 85, 89
purpose of philosophy, 14, 60
Putnam’s Brain-in-a-vat thought experiment, 9, 10
Radford, Colin, 58
Red Dead Redemption II, 65
relational model, 32–36, 51, 67
relationships with robots, 11, 27, 32–36
reliability, 91, 98–99, 102–4
response-dependent properties, 19–20
rhubarb, 20
Robert Nozick, 10, 83
robot friendships, 5, 22, 97, 98, 102
robot helpers, 64
robot pets, 5, 21
robots-as-tools model, 41–43, 51, 107
robots in business, 16–17
robots in care homes, 16, 20, 21, 101
robots in society, 15–19
Roman (chat bot), 78
Roomba, 19, 21–22, 28, 53, 60, 65, 118
Rosenthal-von der Pütten, Astrid M., 25
Ruby, the Paro owner, 86–87

Samantha, from Her, 11, 21–23, 40
Scheutz, Matthias, 21–22, 88–89
Schneider, Susan, 84


Schulzke, Marcus, 115
science fiction, 5, 9–10, 49–51, 73
Scott, Ridley, 50
Searle, John, 60
Seibt, Johanna, 56
sentience, 1, 12, 27, 34, 36, 42, 47, 50, 62–63, 95
sentimentality, 9, 31, 55, 119, 123
shared language, importance of, 12, 39, 40
Sherlock Holmes, 56
Ship of Theseus, 75
Shoemaker, Sydney, 76
Singer, Peter, 31
Siponen, Mikko, 44–45
smart board technology, 17
Snowy, the Paro, 86–87
social disorder, 109–11
social groups, importance of, 12
social media, 2, 112
social robots, what they are, 19–20
Solo, 86
Sparrow, Robert, 99, 101
Spike Jonze, 11
Spira, Henry, 31
Standish, 85–86, 118–19
Stanley, the Paro owner, 86–87
Star Trek: The Next Generation, 50


Statue and lump, 75
Stenseke, Jakob, 74, 83
Sternheimer, Karen, 112
Tallant, Jonathan, 95–96, 105
tastiness, 19–20
The Times, 80
torture, 117–18
trust based on appearances, 94–95
trusting objects, 90, 94
Turkle, Sherry, 23
U.S. Chamber Institute for Legal Reform, 29
Unger, Peter, 77
upgrades, 85–86, 105, 124
uploads, 81–85
uploads as copies, 84–85
vegetarianism, 8, 9, 121
violent sports, 113–14
violent video games, 111–14
Walton, Kendall, 59, 61
Washington Post, 22
Wiggins, David, 87
Williams, Christopher, 88–89, 93
Wittgenstein, Ludwig, 40, 126n4–5

About the Author

Paula Sweeney is a senior lecturer in philosophy at the University of Aberdeen. She has previously published in philosophy of language and philosophy of logic, and her recent work is concerned with philosophical questions arising from our engagement with robots and other social technologies. Paula lives in Aberdeenshire where she loves walking in the hills with her dog, Monty.
