What is Essential to Being Human?
This book asks whether there exists an essence exclusive to human beings despite their continuous enhancement – a nature that can serve to distinguish humans from artificially intelligent robots, now and in the foreseeable future. Considering what might qualify as such an essence, this volume demonstrates that the abstract question of ‘essentialism’ underpins a range of social issues that are too often considered in isolation and usually justify ‘robophobia’, rather than ‘robophilia’, in terms of morality, social relations and legal rights. Any defence of human exceptionalism requires clarity about what property(ies) ground it and an explanation of why these cannot be envisaged as being acquired (eventually) by AI robots. As such, an examination of the conceptual clarity of human essentialism and the role it plays in our thinking about dignity, citizenship, civil rights and moral worth is undertaken in this volume. What is Essential to Being Human? will appeal to scholars of social theory and philosophy with interests in human nature, ethics and artificial intelligence.

Margaret S. Archer founded the Centre for Social Ontology in 2013 (now based at the École de Management, Université de Grenoble) when she was Professor of Social Theory at the École Polytechnique Fédérale de Lausanne, Switzerland. Her books include Social Origins of Educational Systems; Culture and Agency: The Place of Culture in Social Theory; Realist Social Theory: The Morphogenetic Approach; Being Human: The Problem of Agency; Structure, Agency and the Internal Conversation; Making our Way Through the World; The Reflexive Imperative; Late Modernity: Trajectories Towards Morphogenic Society; Generative Mechanisms Transforming the Social Order; Morphogenesis and the Crisis of Normativity; and Morphogenesis and Human Flourishing.

Andrea M. Maccarini is Professor of Sociology and Associate Chair in the Department of Political Science, Law and International Studies at the University of Padua, Italy. He is also a member of the teaching board of the Ph.D. programme in Sociology and Social Research at the University of Bologna, Italy, and has been a visiting scholar at the University of California Los Angeles (UCLA), Boston University and the Humboldt-Universität Berlin, among others. He is a board member of IACR (International Association for Critical Realism) and collaborator of the Centre for Social Ontology, founded by Margaret S. Archer. His current research interests lie in the fields of social theory, education and socialisation, and cultural change. He is the author of Deep Change and Emergent Structures in Global Society: Explorations in Social Morphogenesis and the co-editor of Engaging with the World: Agency, Institutions, Historical Formations.
The Future of the Human
Series Editor: Margaret Archer, University of Warwick, UK
Until the most recent decades, natural and social science could regard the ‘human being’ as their unproblematic point of reference, with monsters, clones and drones acknowledged as fantasies dreamed up for the purposes of fiction or academic argument. In future, this common, taken-for-granted benchmark will be replaced by various amalgams of human biology supplemented by technology – a fact that has direct implications for democracy, social governance and human rights, owing to questions surrounding standards for social inclusion, participation and legal protection. Considering the question of who or what counts as a human being and the challenges posed by anti-humanism, the implications for the global social order of the technological ability of some regions of the world to ‘enhance’ human biology, and the defence of humankind in the face of artificial intelligence, the books in this series examine the challenges posed to the universalism of humankind by various forms of anti-humanism, and examine ‘human essentialism’ in terms of the liabilities and capacities particular to human beings alone.

Titles in this series

Realist Responses to Post-Human Society: Ex Machina
Edited by Ismael Al-Amoudi and Jamie Morgan

Post-Human Institutions and Organizations: Confronting The Matrix
Edited by Ismael Al-Amoudi and Emmanuel Lazega

Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory
Edited by Mark Carrigan and Douglas V. Porpora

What is Essential to Being Human? Can AI Robots Not Share It?
Edited by Margaret S. Archer and Andrea M. Maccarini

For more information about this series, please visit: https://www.routledge.com/The-Future-of-the-Human/book-series/FH
What is Essential to Being Human? Can AI Robots Not Share It?
Edited by Margaret S. Archer and Andrea M. Maccarini
First published 2021 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN and by Routledge, 605 Third Avenue, New York, NY 10158

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 selection and editorial matter, Margaret S. Archer and Andrea M. Maccarini; individual chapters, the contributors.

The right of Margaret S. Archer and Andrea M. Maccarini to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Archer, Margaret S. (Margaret Scotford), editor. | Maccarini, Andrea, editor.
Title: What is essential to being human? : can AI robots not share it? / edited by Margaret S. Archer and Andrea M. Maccarini.
Description: Abingdon, Oxon ; New York, NY : Routledge, 2021. | Series: The future of the human | Includes bibliographical references and index.
Identifiers: LCCN 2021003339 (print) | LCCN 2021003340 (ebook) | ISBN 9780367368289 (hbk) | ISBN 9781032041216 (pbk) | ISBN 9780429351563 (ebk)
Subjects: LCSH: Philosophical anthropology. | Human beings. | Essentialism (Philosophy) | Turing test. | Robots.
Classification: LCC BD450 .W4874 2021 (print) | LCC BD450 (ebook) | DDC 128--dc23
LC record available at https://lccn.loc.gov/2021003339
LC ebook record available at https://lccn.loc.gov/2021003340

ISBN: 978-0-367-36828-9 (hbk)
ISBN: 978-1-032-04121-6 (pbk)
ISBN: 978-0-429-35156-3 (ebk)

DOI: 10.4324/9780429351563

Typeset in Times New Roman by Taylor & Francis Books
Contents
List of illustrations
List of contributors

1 Introduction
Margaret S. Archer and Andrea M. Maccarini

2 On robophilia and robophobia
Douglas Porpora

3 Sapience and sentience: A reply to Porpora
Margaret S. Archer

4 Relational essentialism
Pierpaolo Donati

5 Artificial intelligence: Sounds like a friend, looks like a friend, is it a friend?
Jamie Morgan

6 Growing up in a world of platforms: What changes and what doesn’t?
Mark Carrigan

7 On macro-politics of knowledge for collective learning in the age of AI-boosted Big Relational Tech
Emmanuel Lazega and Jaime Montes-Lihn

8 Can AIs do politics?
Gazi Islam

9 Inhuman enhancements?: When human enhancements alienate from self, others, society, and nature
Ismael Al-Amoudi

10 The social meanings of perfection: Human self-understanding in a post-human society
Andrea M. Maccarini

Index
Illustrations
Figures

3.1 Human relations of the natural, practical and social orders of natural reality
3.2 Datum and verbal formulation
7.1 Pattern of advice exchanges among positions of members in the social milieu of “biodynamic” winegrowers of the Côte de Beaune. They participate in two parallel collective learning processes depending on the temporality – long or short term – of their technical decisions
9.1 Characteristics of HEs

Tables

4.1 Varieties of the semantics of human identity
7.1 Typology of knowledge claims derived from the characteristics of appropriateness judgments
Contributors
Ismael Al-Amoudi is Professor of Social and Organisational Theory and Director of the Centre for Social Ontology at Grenoble École de Management, Université Grenoble Alpes ComUE (France). His work borrows from anthropology, management studies, political philosophy, social theory and sociology. One recurring theme in his research concerns the nature of social norms and the basic processes through which they are legitimated and contested. Another theme concerns the contribution of ontology to the human and social sciences. He is a member of the editorial boards of Organization and of The Journal for the Theory of Social Behaviour. Recent publications include articles in the Academy of Management Learning & Education; British Journal of Sociology; Business Ethics Quarterly; Cambridge Journal of Economics; Human Relations; Journal for the Theory of Social Behaviour; Organization; and Organization Studies.

Mark Carrigan is a sociologist in the Faculty of Education at the University of Cambridge. His research explores how the proliferation of digital platforms is reshaping education systems, with a particular focus on knowledge production within universities. He is a Fellow of the RSA, co-convenor of the SRHE’s Digital University Network, co-convenor of the BSA’s Digital Sociology group, co-convenor of the Accelerated Academy, a member of the Centre for Social Ontology, an associate member of CHERE at Lancaster University and a research associate in the Public Policy Group at LSE. His current research looks at questions of digital platforms through the lens of the structure and agency debate.

Pierpaolo Donati is Alma Mater Professor (PAM) of Sociology at the University of Bologna. Past-President of the Italian Sociological Association, he has served as Executive Committee Member of the IIS and Director of the National Observatory on the Family of the Italian Government. He is currently a member of the Pontifical Academy of Social Sciences (since 1997) and of the Academy of Sciences of the University of Bologna (since 1998). He has published more than 800 works. He is known as the founder of an original ‘relational sociology’ or ‘relational theory of society’. Among his more recent publications are Relational Sociology: A New Paradigm for the Social Sciences; The Relational Subject (with M. S. Archer); Discovering the Relational Goods; Life as Relation: A Dialogue Between Theology, Philosophy, and Social Science (with A. Malo and G. Maspero); and Sociología relacional de lo humano.

Gazi Islam is Professor of People, Organisations and Society at Grenoble École de Management, and member of the research laboratory IREGE (Research Institute for Management and Economics). He has served as faculty at Insper, Tulane University, and the University of New Orleans. He is editor for the Psychology and Business Ethics section at the Journal of Business Ethics. His current research interests revolve around the contemporary meanings of work, and the relations between identity, group dynamics and the production of group and organisational cultures.

Emmanuel Lazega is Professor of Sociology at the Institut d’Etudes Politiques de Paris (Sciences Po), a member of the Centre de Sociologie des Organisations (CNRS) and a senior member of the Institut Universitaire de France. His current research projects focus on social network modelling of generic social processes such as solidarity, control, regulation and learning. His publications can be downloaded from www.elazega.fr.

Jamie Morgan is Professor of Economic Sociology at Leeds Beckett University. He co-edits the Real-World Economics Review with Edward Fullbrook. He has published widely in the fields of economics, political economy, philosophy, sociology and international politics. His recent books include Trumponomics: Causes and Consequences (ed. with E. Fullbrook); What is Neoclassical Economics?; and Piketty’s Capital in the Twenty-First Century (ed. with E. Fullbrook).

Jaime Montes-Lihn is a graduate student at the Institute for Interdisciplinary Research in Social Sciences (IRISSO), Université Paris Dauphine, France.

Douglas V. Porpora is Professor of Sociology in the Department of Anthropology, Drexel University, and co-editor of The Journal for the Theory of Social Behaviour. He has published widely on social theory. Among his books are Reconstructing Sociology: The Critical Realist Approach; Landscapes of the Soul: The Loss of Moral Meaning in American Life; How Holocausts Happen: The United States in Central America; and The Concept of Social Structure.
1 Introduction

Margaret S. Archer and Andrea M. Maccarini
The title of this series, ‘The Future of the Human’, produced by the Centre for Social Ontology, does not presuppose a particular response. Any future short of finitude is likely to be different from its past, if only by virtue of developments from current interventions (representing enhancements) affecting both the meaning and causal powers of ‘being human’. In theory these interventions could cease, but that does not mean they would willingly be reversed or be amenable to reversal. Although conceivably we humans might give up our cars and spectacles, the global population could not revert to being ‘hunter/gatherers’ because, diachronically and collectively, our actions have made extinct much of what used to be hunted and gathered. In any case, does the short lifespan, malnutrition and lack of medication of the archetypical ‘hunter/gatherer’ conform to what we now designate as ‘being human’?1 Surely not, but the thought experiment of peeling back our millennia of accretions was once attempted empirically by King James IV of Scotland in 1493. He reputedly had twins reared in isolation by a deaf, mute woman, speculating that they would eventually speak the innate God-given language, held in folklore to be Hebrew.2 This serves to introduce the enduring connection between philosophical ‘essentialism’ and discussion of what ‘humankind’ is.

Andrew Sayer finds references to ‘essentialism’ in the social sciences to be ‘overwhelmingly derogatory … If there is anything common to all the critiques of essentialism in social science, it is a concern to counter characterizations of people, practices, institutions and other social phenomena as having fixed identities which deterministically produce fixed, uniform outcomes’ (1997; 453–454).

1 Any such statements beg the question of ‘starting points’, for archaeological anthropologists have classified six precursors, such as ‘homo erectus’, as long predating the stereotypes just invoked as ‘early humans’.
2 This story has various historical precursors, but the 16th-century Scottish historian Robert Lindsay of Pitscottie included King James’s experiment in his Historie and Chronicles of Scotland, compiled almost 100 years later. As the author Sir Walter Scott later commented, ‘It is more likely they would scream like their dumb nurse, or bleat like the goats and sheep on the island.’
In opposition, anti-essentialists frequently assert that humankind is socially constructed and constructing, and commend the emancipatory import of that view. Sayer’s well-balanced arguments are effective: anathematizing the term ‘essentialism’ stymies a useful debate, one that cannot be decided by ontological fiat. Instead, we are returned to confront the distinction upheld in ancient philosophy between ‘essential’ and ‘accidental’ properties, but shorn of any preconception that the latter are of lesser explanatory importance. This is because ‘a claim that there are essential properties shared by humans does not necessarily render “accidental” differences such as those of particular cultures unimportant, indeed it may be the essential similarities which are trivial’ (1997; 456). Sayer himself encountered plenty of flak for maintaining (I believe correctly) that patriarchy was important historically but nonetheless was not an essential feature of capitalism. Moreover, what may appear as essential, and even be so semi-universally, can vary in its causal importance with contextual circumstances. For example, the generalization that women are shorter and physically weaker than men may be true over time, but it had more causal import in determining their social roles in societies that depended upon ‘warriors’ than it does today. Now, women’s admission to combative roles has had to be fought for (against historically nurtured cultural stereotypes, rather than because height and strength are relevant in most modern warfare, and despite the fact that today’s women may well be taller and stronger than their warrior forefathers).

However, essentialism is tricky to handle within analytical philosophy for the outsider who wants to draw upon the concept for tackling a particular real-world problem. There are so many ‘insider’ debates generating heat within philosophy that it is tempting to slide over them as irrelevant to our own concerns, but that can readily lead us to become instrumentalists or conventionalists (if we adopted the most commonly used approach/es). Use of this philosophical tool-box is indispensable for our purposes, but using it productively involves acquiring some conversancy with the variety of its contents, yet this may seem to be a waste of time – and some of it will be. However, there are surprises, because this is unlike supermarket shopping, where we can reasonably expect the label on an item to bear some relation to its contents. For instance, take the discussion of ‘identity’ by a group of serious philosophers, who maintain that an acceptable concept should pertain in ‘all possible worlds’. It could be tempting to dismiss its relevance, because what social scientist cares if ‘Socrates’ (with a mole on his left toe) retains his identity in each possible world (along with his mole), or if his biography could be said to be unchanged were he discovered to have been an identical twin? Well, we should not shun it by taking the above illustrations as exhaustive. We should not, precisely because we are dealing with the ‘Future of the Human’ and therefore cannot evade asking whether or not humankind changes with its own increasingly digital enhancement and/or with the advancing capacities and roles assumed by AI robots in societies that are themselves changing. Thus, in our copious reading we need to be tenacious about our own problematic with every page we turn.
As all critical realists acknowledge their need for philosophical underlabouring, I do not view my own as a peculiar proclivity. Nevertheless, it is important to point out that the sub-group of analytical philosophers discussed in this introduction diverges from our collective writing throughout this series by dealing almost exclusively with individuals (or individual members of a taxon or a species). This is not to charge them with Methodological Individualism, for some are at ease discussing macroscopic inter-species relations (though often their species are aggregates); it is to accentuate that for Critical Realist social scientists, no level – micro-, meso- or macro- – can ever be analysed without featuring relations between subjects and their groupings. Indeed, the speculative account I venture is dependent upon the dyadic synergy of the co-action between a human academic and an AI robot who became a person through their co-action.
The various forms of essentialism

Let us start with the basic distinction between essential versus accidental properties (Robertson and Adkins, 2016), and confine ourselves to the human being to begin with. This is the variant dominant in biology since the 1950s, though never without suggested refinements. ‘The distinction between essential versus accidental properties has been characterized in various ways, but it is currently most commonly understood in modal terms: “an essential property of an object is a property that it must have, while an accidental property of an object is one that it happens to have but that it could lack”’ (2016; 1). (A modal conceptualization is one that invokes necessity, while the use of the word could means that possibility instead is invoked.) It follows that for ‘Socrates’, being human is an essential property for him, whereas being fond of cats is merely an accidental property of his own. Yet even on this (purportedly) uncontentious formulation, its authors are aware that disputes have raged over almost every term in it. Cakes can be sliced in many ways, so they offer their own typology of different forms in an attempt at general explication. So do I in Chapter 3, based upon my earlier contributions in the first three volumes of this series, but my objective is more specific, namely, what is it that differentiates human beings from advanced AI entities, and what role does being animate and being sentient play in this? In other words, it is a purposeful list, and its fictional characters, Homer and Ali, are constantly in mind.

Unsurprisingly, those firmly convinced that homo sapiens and intelligent robots should be equally firmly differentiated also make most appeal to modes of essentialism that are biologically based. Consequently, they will be found clustering among the topmost of the five versions of essentialism delineated below and diminishing as we work down the list. However, we should be alert to the fact that some social scientists gathered in the last category regard the distinction as too ‘obvious’ to note and disregard AI entities altogether. That would be different for social constructionists, whose descriptive and explanatory concepts are independent of biological differences; only
if these are part of the dominant discourse can they play any part, but this is through cultural encoding alone. Finally, we should also be aware that two conceptual conflicts are often elided here: the first concerns differences in what is essential to bodily constitution, which affects classification; the second, which is not even closely correlated with the former, concerns intellectual capacities and related task performance. Indeed, the double-barrelled term homo sapiens itself marks out this difference.

Essentialism based upon Creationism

Such accounts are well known in the Christian West, though not exclusive to it, but two points are worth emphasis. On the one hand, it appears that the more recent arguments for Intelligent Design do not usually attach great importance to the distinction examined here. On the other hand, Buddhism traditionally holds humankind to be ‘good’, but makes the achievement of ‘goodness’ a lifelong spiritual endeavour, consciously undertaken, rewarded by the mode of reincarnation, and thus seemingly closed off to bionic beings.

Essentialism based upon species

From its origins, biological theory was steeped in essentialist speciesism and retained this focus despite the development of over 20 different definitions of a ‘species’ by the new millennium. Early Creationism presented distinct species as static, non-evolving groups of organisms, each of which Aristotle regarded as a natural kind differentiated by its essential property. Thus each, every and only members of a kind shared that unique essence that accounted for the characteristics common to their kind, for its uniqueness and its universality. Although it would be unfair to invoke Popperian falsification against this definition, nevertheless early versions had practical encounters with his ‘Black Swan’ – that is, encountered one or more counter-examples of either the absence of the defining characteristic or its presence among other species. Already this questioned the uniqueness of each species (for example, primates and pandas sharing the opposed thumb with humans) and explained ‘deviant’ absences or presences of traits as accidental products of processes that became known as mutations or recombinations. As is well known, such findings slowly encouraged biologists to consider seriously the relationship between ‘species’ and their natural environments, eventually to the point of positing parallel ecological paths to evolution.

But the above absences and presences are also of relevance to us in three ways. First, they spelt vaguer borders between species and, where humankind is concerned, its progressive technical enhancement increasingly problematized what was uniquely human. Second, did essentialism relate to the start of life, to the capacities exercised over human life courses, or to the end of human life – ones often shared by robots, thus making the boundaries between them fuzzier? Third, responses to
the question ‘so what is distinctively human?’ were more and more difficult to answer. When their superior intelligence accorded both humans and AI beings superiority over all other types of earthbound entities, it was their common differences that set them apart from other species. In other words, both components constituting homo and sapiens became less than self-evident as stable similarities confined to and universal among humankind alone.

Certainly, it took centuries after the arrival of the post-Darwinian era to begin to account for intra-species differences and fuzzy inter-species boundaries. But even before that point species essentialism was losing its plausibility. To buttress it, a causal connection was endorsed that linked an organism to a particular species, namely that of sharing a single (evolving) lineage. Given the continuing or growing frequency with which archaeologists report new discoveries throughout the world of skeletal predecessors of ‘man’, the diagrams of collective genealogical descent underwent repeated revision but allowed of no convincing causal conclusions. Today, when (healthier, surviving) children can result from three ‘parents’, searches for the primordial egg and sperm are less of a lost cause than a pointless one for delineating human beings. Thus, many biologists have advanced multiple counter-versions of ‘Species Pluralism’.3 Its protagonists do not believe there is a single correct ontological concept of ‘species’. Nevertheless, despite some advancing Ecological Pluralism (as a species concept designating those sharing an ecological niche), most versions remain insistent upon common genealogical descent, and then by definition lineages are predicated on interbreeding sexual organisms. The implications of this are two-fold. First, it involves an abandonment of the universality of species and their essentialist underpinnings, because it automatically excludes asexual organisms, despite their form of reproduction being the most prominent on our planet (Hull, 1988) when insects, fungi and bacteria are included – but what would justify the exclusion of micro-biological life forms? Second, it is not inclusive even of all human beings. Some of these declare themselves asexual (Carrigan, 2015), yet both sexes can often reproduce, though legal anonymity can be enforced for both the donor and surrogate, thus representing a blank entry on the genealogical table. Equally important, AIs could learn how literally to ‘build a family’, given that the concept of the human ‘family’ now includes those with adopted children or none at all. After all, the copying of software is one of their acquired skills, and its transfer to a proto-AI is more a matter of human restrictions than of difficulties in mastering the techniques of child-rearing. In any case, AIs are already involved in childcare.

3 There is a new form of ‘Biological Essentialism’. See Devitt, M. (2008) ‘Resurrecting Biological Essentialism’, Philosophy of Science, 75: 344–382. One of the main criticisms of it is that the properties whose causal powers cause traits in organisms do not map on to taxonomic boundaries (for example, both zebras and cats have stripes).

This would seem not to trouble evolutionary biologists who confine themselves to biological ‘species’, yet some are preoccupied with the patterns and processes of evolution. As two such researchers conclude, ‘No single type of process is common to all species. Arguably, none of these processes are unique to species either’ (Mishler and Donoghue, 1982). If any such process became widespread among the AI group, this might be credited to the human side of the balance sheet as merely another advance in the material culture of humanity thanks to software designers. However, were that the case, it raises questions about the delimiting of the biological domain itself.

Since even this truncated account has tried to show that the term ‘species’ varies not only in the ontological structures held to define each one, but also in the detection of unifying evolutionary processes undergirding the full range of candidates, it follows that species essentialism does not rest upon firm evidential ground. Despite the caveat ‘pending further research’, some conclude that we should doubt whether the term ‘species’ refers to a real category in nature (e.g. Ereshefsky, 1998). Allowing for differences in subsequent interpretations, it seems clear that Darwin took the difference between ‘species’ and its ‘varieties’ too seriously ever to use the term ‘speciation’ in his book, despite its title. There, he reflected, ‘I look at the term species as one arbitrarily given for the sake of convenience to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety’ (Darwin, 1859). In short, Darwin doubted that the term ‘species’ referred to a real category in nature. This is confirmed in his personal correspondence, where he writes in 1856 to Joseph Hooker (1887; 88) that the various attempts to define ‘species’ are ‘trying to define the undefinable’.

Essentialism based upon sortals

Sortal concepts represent a swing of the pendulum away from the large groups that, it was hoped above, could be allocated to different species. Instead, sortals – a term first employed by John Locke – refer to individuals and roughly seek to provide essential criteria of identity (or principles of individuation). In principle, if this concept worked for Archer’s fictional Ali, it should work for other AIs and might draw a better distinction between AIs and lesser robots than the intuitive, practical types of demarcation used at present. However, this concept sounds vastly simpler than it turns out to be, including the fact that intuitive judgements do not disappear. For example, in her tightly argued book Penelope Mackie sometimes uses an intuitive judgement about ‘triviality’ and summarily dismisses some arguments as ‘absurd’. In sortal essentialism she holds there to be a pretty general consensus that, if the notion of an essential property has any application at all, the essential properties of an individual involve the fact that it could not have belonged to a kind (or kinds) radically different from the kind (or kinds) to which it actually belongs. An important corollary is that if something, A, belongs to one kind, there are ‘certain kinds such that A’s not belonging to those other kinds is an essential
property of A’ (Mackie, 2006; 118); for example, Aristotle could not have been a centipede. First, there is some sortal concept such that Aristotle is essentially a thing of the human sort, and second, belonging to that sort is incompatible with his being/becoming a centipede. This sounds (deceptively) straightforward until we examine the answers given by Mackie (who is not a supporter of this school) to popular questions raised: ‘So essentially what sort of thing is Aristotle?’ Analytical philosophers’ usual responses are a man or a person, but these two concepts have very different referents (Archer, 2000). This is one issue about Ali of huge importance – can he acquire personhood (Archer, 2019)? If he can and does, then he changes (essential) characteristics over his life-course, yet to some, like Baruch Brody (1980), sortals are supposedly lifelong (Ali is either a machine or a person for his entire existence). To others, a thing’s principle of individuation is essential to it (Wiggins, 1980, 2001), but which is essential to being Ali? Mackie provides an uncontroversial example contrary to ‘life-long sortals’: if ‘a foetus is a human being but not a person, and every adult human being was once a foetus, it seems that Aristotle cannot both be essentially a human being and also a person’ (Mackie, 2006; 121).

However, as a sortal essentialist, Brody’s concern is what secures the identity of an individual as being the same individual both at the time when singled out and at earlier or later times, covering ‘possible worlds’ in which the past and future histories of that self-same individual can vary. Ali could, contra Brody, on Archer’s earlier thought experiment in Vol. II, start his existence as a machine and later become a person (with a mechanical body), though never a man. Contra Wiggins, since Ali does not cease to be a machine once he has become a person, which of these is essential to him and important to others, since robophobics will insist on his enduring mechanical or bionic constitution, whilst robophiliacs will accentuate his later acquisition of personhood? If he can be both at morphogenetic T4, but not at T1 (prior to his working in synergy with Homer), is this compatible with designating the same individual?4 This is where the ‘possible futures’ come into play unplayfully, because according to Brody, if something is a person, it ‘must have been a person from the first moment of its existence, and it cannot cease to be a person without ceasing to exist’ (Brody, 1980; Ch. 4, Sect. 1). That, it was ventured, was not the case for Ali, and nor is it for any human new-born, for whom the possibility of acquiring personhood exists only in potentia (Archer, 2000, Ch. 4).

4 We will return to this central question when discussing the ‘Capacities Approach’ in the next section.

Although it comes as a relief that both Mackie and Wiggins agree that academic debates about whether ‘Aristotle’ could once have been a centipede are ‘idle or baseless’ (Wiggins, 2001), we should note that in their discussion no social science considerations enter the debate, especially those dividing realists (Bhaskar) from actualists (e.g. Erik Olin Wright) about what futures are possible to hope for, despite being unrealized as yet, versus being attainable (actualizable) in practice (Archer, 2019b). Instead, what philosophers differ over is whether or not ‘any sortals represent essential properties of the things to which they apply’ (Mackie, 2006; Ch. 8.6). Mackie denies it, whilst Wiggins counters that what her critique lacks is an explanation of why it is ‘idle or baseless’ to speculate that anything could have come to belong to any kind radically different from its actual kind, in an argument that he turns on ‘the anchor constraint’. This restores being realistic (not realist) about the lynchpin of his anchorage, namely that x could have the property φ, or it is possible for x to have φ, if and only if it is genuinely possible to conceive of x having φ (Wiggins, 2001; 121). Mackie’s reply is to modify Wiggins’ ‘anchor constraint’ such that it does not entail that something’s criterion of identification is essential to it, and hence it could have existed with a different principle of individuation from its actual one. This seems to fit the bill very well for Ali, unless Archer’s speculative account is wildly mistaken. On arrival in Homer’s laboratory, his identity-cum-persistence condition was (at least it was presumed) that of a number of boxes labelled ‘AI + code number’, and that is what he could have remained, confined to his uploaded computational and linguistic programmes, along with other AIs produced to the same specification and presumably not initially individuated from one another. Only his actual dyadic relations with Homer made him a person – or so it was argued – and without his real contribution as a co-worker on their research programme he would have been restricted to the role of a mechanical and routinized laboratory assistant. The fiction did not end there, because Ali had to confront the ageing and eventual retirement of Homer. In other words, Ali’s history up to that time could be represented as the three cycles in the sequence of personal morphogenesis – and little could be less essentialist in its assumptions.

In sum, sortal essentialism raises a variety of problems for the social sciences:

1 The seeming priority it accords to ‘nature’ over nurture.
2 The editing-out of relational emergence from any consideration at all.
3 The self-imposed restriction of criteria of identity to individuals, which appears to imply commitment to ‘aggregate individualism’.
4 The exclusion of both ‘structure’ and ‘culture’ from any part of these discussions, thus raising awkward problems about why some held that the existence of the Pyramids prior to Aristotle’s birth was more important to his identity than that of libraries and education.
Essentialism based upon capacities (and liabilities)

Martha Nussbaum argues that ‘the legitimate criticisms of essentialism still leave room for essentialism of a kind: for a historically sensitive account of the most basic human needs and human functions … without such an account, we do not have an adequate basis for an account of social justice’ (1992). Undoubtedly this brief characterization leaves behind exclusive concentration upon the essential features defining individual identity, as in ‘sortal essentialism’ above. But can the
leap from universal human essentials to distributive social justice be made directly, that is, without any references to the ‘we’, to the groups collectively battling for equity against various sources of privilege based on vested interests, and with SAC properties5 reduced to an ad hoc illustrative role rather than playing a systematic theoretical and practical part in explaining the processes involved?

5 I have always maintained that structure, agency and culture are indispensable to satisfactory explanation in the social sciences because they refer to ‘context-dependency’, ‘activity-dependency’ and ‘ideational-dependency’ respectively, none of which is ever redundant. However, it was only in 2013 that I coined the acronym SAC (see Archer, 2013).

Nussbaum terms her account, and the capacities approach in general, ‘Internalist Essentialism’, one that is grounded upon ‘internal properties and powers, such as the ability to think about the future, respond to the claims of others, to choose and to act, without which we no longer have a human life at all’ (ibid.; 207). Thus, for the purpose of making a case for recognizing AI beings as co-sentients, Nussbaum does recognise the key question her approach must answer: ‘If we operate with a determinate conception of the human being that is meant to have some normative and political weight, we must also, in applying it, ask which beings we take to fall under the concept’ (ibid.; 209), one unlike Aristotle’s exclusion of women and slaves. Thus, she aims to list the generic features of a human life, wherever (and whenever?) it is lived. Although this is intended to be ‘inclusive’, and many of us would share her normativity in wanting no collectivity to be excluded on grounds such as ‘racism’, ‘sexism’ or ‘religion’, nevertheless Nussbaum’s listing is ‘exclusive’. Crucially, she regards – and regards this also as being a matter of consensus – that ‘species identity seems to be necessary for personal identity’ and among ‘the most central features of our common humanity … without which no individual can be counted (or counted any longer) as human’ (ibid.; 215, my italics). Obviously, this excludes AI beings – even from consideration – and relies upon the ‘speciesism’ rejected earlier in this chapter. Thus, Ali is already banished by his independence from ‘hunger and thirst’, which would make him, in Aristotle’s words, ‘far from being a human being’.

Ironically, whilst this comes first on the list (and one guesses would not be controversial among a non-philosophical audience), it is not the most important to Nussbaum. Instead, the architectonic role is reserved for practical reasoning (phronesis), or simply ‘the capacity of choosing itself’ (ibid.; 225). Yet robots are choosing all the time. Whilst the simpler robot preparing burgers is pre-programmed to do so, e.g. to reject pieces of bone and foreign matter (not always successfully), those more sophisticated, inputting data into algorithms, are choice-makers: about which precise outliers are excluded even if precise parameters are pre-set; or which redundancies may be reduced, e.g. people who buy from both Amazon and eBay. Of course, here human monitors will weed out/include what they think irrelevant or relevant to commercial decisions, but this does not nullify the fact that they are being selective
about pre-chosen data – this time by the robot.

Where the synergy between Homer and Ali in developing the research project to eliminate Tumour X is concerned, it would be difficult to demarcate their contributions, and we human academics do not do this when two of them gauge that they are both beneficiaries of an exchange (hence their acknowledgements at the end of their separate articles). But both Nussbaum and Sen want to maintain that one ‘of the most central capacities promoted by the conception will be the capacity of choosing itself, which is made among the most fundamental elements of the human essence’ (Sen, 1985).

Although the capacities approach is firmly anchored in the human being and its ‘species’, it is important to note what difference she thinks it would make to talk in terms of ‘persons’. While I see the concept of personhood as being more demanding, Nussbaum takes the opposite view because she holds it can be used more ‘capriciously’. This is on the basis of legal rulings by various US states where, for example, a ‘person’ was interpreted as synonymous with ‘male’ in order to preclude certain openings for women. I have no reason to doubt these exemplars of malpractice and discrimination, but analytically they would not stand up today against my three criteria for personhood: a developed first-person perspective, plus functioning reflexivity, plus self-articulated concerns (all of which may be robustly opposed to the legal rulings above). Nor, I would argue, does ‘personhood’ open the door to harsher treatment for those who are unable, ever or henceforth, to exercise the higher capabilities of human functioning. The latter, it seems to me, makes a less compelling claim upon public action than a person who has made a huge and acknowledged sporting, artistic or academic contribution and ends in a vegetative state after an accident – if only because we less distinguished persons mourn and miss them and often memorialize them. It is not without interest that Nussbaum changes her terminology in her excellent critique of Rawls and writes: ‘My claim is he [Rawls] needs to go further in this direction, making the list of primary goods not a list of resources and commodities at all but a list of basic capabilities of the person’ (Nussbaum, 1992; 234, my italics). Fair enough, but much more important than such academic listings must remain persons who can think and say ‘I object’, ‘we oppose’ and ‘we care enough’ to work for social transformation. And that requires the three properties and powers of personhood I am defending here. In short, anchoring all ethical hopes for a better society in the human being alone is too restrictive at both the ‘top’ and ‘bottom’6 of humankind tout court.

6 I regret having insufficient space to discuss our relations to animals (at the bottom, or placed lower than human beings, on the tree of life). But I disagree that ‘Compassion requires the recognition of a shared humanity; without compassion (pitié), we have no reason not to be harsh and tyrannical to those who are weaker’ (Nussbaum, 1992; 238). My dog and my horse are of personal concern to me as particulars, above and beyond morally deploring puppy farms and the need for rescuing ill-treated horses.

To refrain from turning Ali into a bureaucratized ‘research assistant’ takes self-restraint, not ‘respect’, on Homer’s part in the beginning. Homer does not fully
understand what Ali does or how, but fully respects his research contributions; Ali neither comprehends nor cares about the benefits to humankind of abolishing Tumour X, but he is concerned about the termination of the research project with Homer’s retirement – though even the latter concept is alien to him. What their co-working does is to increase the satisfactions of both, to stretch their reflexivity further and to accentuate their respect for, and protective concerns about, that emergent property – their particular and productive relational synergy. So do the football team’s members, whose repute pertains to the team itself and not to the aggregate of 11 good players. So do the spectators, whose compassion was not only for Matt Busby’s footballing ‘babes’, killed in the Munich air crash of 1958, but for his reflexive vision of a future team for Manchester United – a particular concern that also died in the wreckage. The vulnerabilities of the young players were not simply shared by fellow human beings but most poignantly by those persons (parents apart) who shared Busby’s own personal concern. Nearly 2,000 people die on Britain’s roads annually: most of us feel mildly sorry for them and their families, but since we did not know them as persons this falls short of compassionate concern, though it does not preclude supporting policies for greater road safety.

Contra relativism and subjectivism, Nussbaum suggests that ‘without a common human functioning, we will have to do without compassion and without a full-blooded notion of respect’ (ibid.; 239). I fear the adjectives ‘common’ and ‘full-blooded’ are nudging us too strongly into agreement that we will lack what is necessary ‘if we are to make sense of the pain of others and to be moved to relieve it’ (idem.). But why? When we find our horse holding up a bloody foreleg and a shattered cannon bone, it is not mere compassion we feel but desolation at his/her imminent death certificate.7 If this example trades too much upon bodily similarity in experiencing pain, consider the largely cognitive respect players of chess or Go extend to those AIs who have beaten their grand masters – even if this is tinged with envy, which is not necessarily the case. Nussbaum, of course, does not deny such instances but tries to corral them by arguing that, where pain is concerned, it is ‘essentialist at the generic level’ (ibid.; 240). But our moral sentiments are held to be defeated by difference that makes the sufferings of others unlike our own. Again, why? If we are ready with compassion (even distant) for those developing Alzheimer’s, what prevents us from experiencing the same towards Ali, who is threatened with deliberately being ‘wiped clean’? How would we academics respond if we were coercively deprived of our memories, of every book we had ever read and every conclusion we had laboured over and published? Some may argue that the analogy is being overworked, but the response is that a lot more hangs on the analogical imagination than on brute emotions. To be fair to Nussbaum, she does attach considerable importance to being imaginative; we should return to this, although I will argue that it depends
upon functioning reflexivity. She also acknowledges ‘a deep moral tradition that says that compassion is not required, for we can be sufficiently motivated to other-regarding action by respect for the dignity of humanity’ (ibid.; 239). Obviously, this is confined to the human domain, but far less obvious is the fact that this form of human essentialism – the fourth variety – is arguably more resilient to critique than the others summarized so far.

7 Why else do so many of us animal lovers feel lifelong guilt at not having taken a dog or a cat to the vet earlier on its trip to finitude?

Essential human dignity and AI beings

There is an obvious oxymoron in this sub-title, since the concept of human dignity is usually held to apply exclusively to humanity and therefore excludes the AIs by definition. However, it must depend upon one of the criteria for identifying, and thus differentiating, the human described on the list I have just discussed. One problem is that the analytical philosophy on which the five categories are based is simply not known to those pressing the case for the distinctiveness of humans, who are held incapable of being without such dignity. The possession of dignity can also be considered inalienable, regardless of whatever an individual may be or do or have done to them by accident or intent. (Is it a deliberate irony that the Swiss centre for elective termination of one’s life is called Dignitas?) Equally, dignity is inaccessible to those of other kinds, whatever their merits. Thus, it is often the case that the advocates of human dignity fall back – as if this were uncontentious – upon invoking the human species as defining its bearers. They revert to ‘everyman’s’ view that ‘it is obvious’, regardless of the problems biologists, including Darwin, have voiced about it, as summarized above.

As a property, dignity does nothing (unlike dexterity, strength or intelligence); its importance depends upon the persuasiveness of its adherents, yet this must not be underestimated. It has been the cornerstone of international human rights declarations; in the preamble to the 1945 Charter of the United Nations it is held to ‘reaffirm faith in fundamental human rights … in the dignity and worth of the human person’. Similarly, the 1948 Universal Declaration of Human Rights states that it recognizes ‘the inherent dignity and of the equal and inalienable rights of all members of the human family’. Moreover, this contention has become systematically enlarged in international legal practice. The new penal category of Crimes against Humanity – committed by states or individuals, in war or in peace – covers war crimes, murder, massacres, dehumanization, genocide, ethnic cleansing, deportations, unethical human experimentation, extrajudicial punishments including summary executions, use of weapons of mass destruction, state terrorism or state sponsoring of terrorism, death squads, kidnappings and forced disappearances, use of child soldiers, unjust imprisonment, enslavement, torture, rape, political repression, racial discrimination, religious persecution and other human rights abuses; these may reach the threshold of Crimes against Humanity, and perpetrators may be tried by a variety of courts.8
8 Crimes against Humanity have since been prosecuted by other international courts (for example, the International Court of Justice, the International Criminal Tribunal for the former Yugoslavia, and the International Criminal Court) as well as in domestic prosecutions. The law of Crimes against Humanity has primarily developed through the evolution of customary international law. Crimes against Humanity are not codified in an international convention, although there is currently an international effort to establish such a treaty, led by the Crimes Against Humanity Initiative.

Importantly, however, precisely the opposite tendency has engaged with the development of AI beings – one that rejects the speciesism of its adversaries. Two years ago the European Parliament (2016) urged the drafting of a set of regulations to govern the use and creation of robots and artificial intelligence, including a form of ‘electronic personhood’ to ensure rights and responsibilities for the most capable AIs. Certainly, this was more like ‘Corporate legal status’, as discussed by Tony Lawson in terms of a legal fiction in relation to the capitalist company (Lawson, 2015), but some unexpected nations – Saudi Arabia (2017), Estonia and Malta – all took a step further towards according citizenship rights to AI, whilst the EU seemed to get cold feet about according such rights. Undoubtedly it can be maintained that this is a defensive manoeuvre because, as the draft resolution and report of the EU initially noted, there is ‘a possibility that within the space of a few decades AI could surpass human intellectual capacity’, eventually threatening humanity’s ‘capacity to be in charge of its own destiny and to ensure the survival of the species’.

It is relevant to note that proponents of dignity as universal to all humans also display most animosity towards the capabilities approach. Yet its best-known advocates, Sen and Nussbaum, have been in the vanguard promoting global justice, equality and institutional inclusion,9 so why the animosity? At rock bottom, is this because some have argued that to treat unequals equally is also unjust, and future AIs are more likely than humans to be the main beneficiaries of such arguments?

9 A clear example of a connection between these two points is made many times over by Christian Smith, who uses the concept of ‘species’ regularly and makes equally dismissive asides to the capabilities approach in Ch. 8, ‘Human Dignity’, of What is a Person?, 2010, Chicago, University of Chicago Press.

Robophobia has not evaporated, because it is deeply embedded in the ideational corpus seeking to sustain human dignity. (This is quite unlike practical questions of civil liability, such as the legal responsibility for driverless cars.) It is not its religious reliance upon ‘speciesism’ alone, but ultimately its deep embedding in a variety of creation myths, that made it a social construct with a tenacious endurance, particularly as inter-faith dialogue grew in friendliness over recent decades among its religious leaders. Equally, the new and still nascent ‘robophilia’ of those pursuing citizenship for advanced AIs is another social construct, since as yet we do not know if it is a real possibility. (My thought experiment is indeed fictional.) Nevertheless, well-diffused social constructs have causal powers, as critical realists recognize.10 Indeed, the development of my argument below is identical to when the powers of conflicting ideologies clash in the middle (T2–T3 phase) of any morphogenetic sequence.

10 See Elder-Vass, D., 2013, The Reality of Social Construction, Cambridge, Cambridge University Press.

There is neither space for, nor do I have the competence to provide, a historical survey of world religions, but this is not necessary because relations with AI beings could not feature in theology until the new millennium, and Critical Realism’s arrival was only slightly earlier. My argument has two parts, and the burden of the brief reference to world faiths (and implicitly to their Creationism)11 seeks to make one point alone: namely, that the special status accorded to humankind rests upon the lesser value assigned to divine relations with other created beings. The second point, made in order to free my argument from the charge of being anti-religious, will indeed be contentious, but it ventures the notion that a loving God is more all-embracing than the Anthropocene continues to portray Him.12 How could a God who is love be held indifferent to beings – of whatever kind – who turn to Him?
11 Some of these, such as the Judaic-Christian account in Genesis, are generally accepted to have been written later than other books in what Christians term the ‘Old Testament’.
12 I accept responsibility for the views expressed here, which I personally see as falling within the tradition of St Augustine’s Fides quaerens intellectum – ‘faith seeking understanding’ or ‘faith seeking intelligence’.

Social philosophies supportive of robophobia and robophilia

‘Personalism’, an approach originating from a variety of world faiths and still endorsed by them, always underscores the centrality of the person as the primary locus of investigation for philosophical, theological, and humanistic studies. It is ‘an approach or system which regards or tends to regard the person as the ultimate explanatory, epistemological, ontological and axiological principle of all reality’ (Williams and Bengtsson, 2018). However, as Jacques Maritain (1947) wrote, there are at least ‘a dozen personalist doctrines’. Von Balthasar has clearly stated that ‘Without the biblical background it [Personalism] is inconceivable’ (1986; 18–26). Whilst there is truth in this for European Catholic Social thought, culminating in that of Karol Wojtyła (Pope Saint John Paul II) as the basis for his opposition to both individualism and totalitarianism, it was a tormented history of ideas. Von Balthasar’s statement simultaneously downplays a variety of denominational differences (particularly in the foundation of American personalism) but, more importantly, those emanating from classical Islamic philosophy, Buddhist thought, Vedantic Hinduism and neo-Confucianism. This global appeal of ‘human exceptionalism’ seems significant for the support found for defending human dignity within the United Nations, as one thread mentioned above.
In so far as one can disengage common denominators from such a wide range of Personalist thinking, it seems fair to accentuate the radical difference between persons and non-persons and the irreducibility of the person to impersonal spiritual and material factors, an affirmation of the dignity of persons, a concern for the person’s subjectivity and self-determination, and particular emphasis on the intersubjective (relational) nature of the person (Williams and Bengtsson, 2018, section 6). More succinctly, Personalism erects a barrier between a ‘somebody’ and a ‘something’, separating humankind from the rest of creation. Although this is increasingly questioned, few seem to doubt that it leaves the AIs beyond the pale of human dignity. In short, most Personalists ‘have denied that personhood is something that can be gradually attained’, for it is never a matter of degree but rather a primordial binary division whose essentialism permanently excludes robotic beings as ‘things’ (ibid., section 6.2). This is what I regard as the hard essentialist version of ‘human dignity’, as exclusive to humankind. Here, I will focus upon Christian Smith’s attempt to claim that personhood is both essential to and confined to human beings, but also that his own position is fully compatible with Critical Realism. I will argue both that his position (perhaps unintentionally) gives ideational support to robophobia and that, in any case, it cannot be run in tandem with realism. The dignity of the human person is the fulcrum of his approach and, although many of the values he promotes do indeed resonate with those of some critical realists, I still maintain that it is incompatible with the central concepts of a Critical Realist approach.
There is a softer version, drawing much more upon continental philosophy, which attaches greater importance to the subject/object distinction and the link between dignity and ‘subject status’. Advocates of this version have no aspirations to ally with Critical Realism or to express antagonism towards it. The work of Peter Bieri allows sufficient space to ask whether or not an AI (given Ali’s type of co-working with a human researcher) represents a synergy that could gradually lead to his acquiring sufficient status as a subject to be considered to have become a person. Perhaps significantly, whilst the title of Bieri’s book is Human Dignity: A Way of Living (2017), the chapter titles refer to ‘dignity’ alone.

Smith’s personalism

‘All living humans are inclusively persons by my account – they possess the dignity of personhood in its fullness.’ (Smith, 2010)
In our first volume, I already took issue (Archer, 2019a) with Smith’s characterization of personhood, which is as close as he comes to a definition:
By person I mean a conscious, reflexive, embodied, self-transcending centre of subjective experience, durable identity, moral commitment, and social communication who – as the efficient cause of his or her responsible actions and interactions – exercises complex capacities for agency and intersubjectivity in order to develop and sustain his or her own incommunicable self in loving relationships with other personal selves and with the nonpersonal world. (Smith, 2010: 61)
The queries I raised then can be condensed into the following:

1 Are the characteristics listed as universal to personhood truly of the same ‘primary kind’, namely do all persons possess these qualitative intrinsic properties essentially, from cradle to grave? (Response by C. S. – positive.)
2 Can these aspects of personhood be possessed by any being outside the human ‘species’? (Response by C. S. – negative.)
3 Does the social order play a role in the shaping of personhood? (Response by C. S. – largely negative.)
In the last chapter of Christian Smith’s book (2010) these answers do not alter but are presented as strengthened through his appeal to human dignity, seemingly wrapping up his case. He avows this to be a theistic account of personhood: ‘My own reasons for believing in dignity are at rock bottom theistic’ (p. 452), and he makes the usual Christian reference to Genesis about the creation of humankind, which ‘God saw was good’, although other world faiths are recognized as having their own versions (p. 441). Thus, he is making an ontological case for human dignity.

Dignity is a real, objective feature of human personhood. The question is not whether dignity exists, any more than the Grand Canyon exists … Dignity exists as a real and ineliminable dimension of persons, just as liquidity is of water … This is not to say that non-human animals or perhaps other objects do not possess a kind of dignity – but, if so, from a personalist perspective, that would be a different kind of dignity from the dignity that inheres in human persons … By ‘dignity’ in this context I mean an inherent worth of immeasurable value that is deserving of certain morally appropriate responses. Dignity makes persons inherently precious and inviolable. (p. 434)

Although ‘dignity’ is claimed by Smith to be a ‘brute fact’ (Searle) about humanity, it is not defined beyond the above, and neither is it explained what another ‘kind of dignity’ might be accorded to certain animals and objects. However, that quotation justifies regarding Smith as a human exclusivist where dignity is concerned; whatever the virtues or merits (present or future) of AI entities may be, they are for ever denied the ontological properties essential to be/come bearers of dignity. Because of that, we are held free to disregard them without being morally culpable for ‘our indifferent and dismissive treatment of them’ (p. 440).
First, on what does such human dignity rest? Smith gives two answers: essentially speciesism, reinforced by phenomenology. He begins with its straightforward grounding in the human species, which ‘confer[s] on every one of its members the status of dignity that is the hallmark feature of the species’ (p. 476). He also asserts dignity to be the ‘natural birth right of personhood’ (p. 478). Note the implication here that personhood comes at birth. This might appear to reinforce the significance of the human body, but recall that both Lynne Rudder Baker and I disputed this in 2018. Neither of us held bodies to have supreme importance or that babies were born as persons; rather, babies took some time and acclimatization to natural reality before developing even their ‘sense of self’13 and difference from other people and things. That is, prior to the slow and rudimentary beginnings of the first-person perspective, which is strongly (though not entirely) dependent on language. In any case, ‘birth’ has been transformed for some neonates with the advent of three-parent progenitors, taking them beyond The Handmaid’s Tale and even further away from Genesis.
Second, despite the footnote below, Smith still claims that ‘Personhood adheres in each human from the start’ (p. 457). I cannot make sense of this confusion of concepts, nor does invoking phenomenology help. As newborns, what is the evidence that such ‘Persons with dignity’ would have sensed phenomenologically that they were not just ‘things’? Why are we supposed to be persuaded by the author’s own feelings on visiting neonatal units for premature babies, not expected to live or develop into health, ‘yet who, it was clear to me, possessed a dignity and value of immeasurable worth’ (p. 456)? That tells us something about the author alone, not about the relevance of his epistemic feelings, much less about the ontology of such babies themselves. This ‘appeal’ cannot be sustained by the snappy sentence that ‘Dignity is to personhood, regardless of its variable state of actualization, as pregnancy is to being pregnant’ (p. 261). Well, no, not to an unconscious foetus; and probably yes, to a sexually mature female who, (normally) being capable of pregnancy, would be on the way to becoming a person – with her FPP, reflexivity to think/hope/fear and thus to have concern about herself and her new-born. Equally, nothing has persuaded me that what justifies moral commitments is the recognition of the natural dignity of persons, which Smith holds ‘is ontologically real, analytically irreducible, and phenomenologically apparent’ (p. 443). None of that has anything to do with, for example, my opposition to the death penalty; instead, what does ground it lies in our lifelong ability to learn and to change.
13 Note that Smith eventually has reluctantly to grant the ‘sense of self’ in order to underwrite his notion of someone being the same person over time (p. 468). However, the convoluted formulation, such that unconscious babies are ontologically connected by their ‘unifying personhood that constitutes their selfhood through time’, is more than problematic. How can anyone be connected by a property that has not yet developed? Claiming that this is ‘unconscious’ is an oxymoron where the ‘sense of self’ is concerned.
These (human) properties cannot be shuffled about like cards. In 2000, in Being Human, I gave a sequence of development of the ‘I’, ‘Me’, ‘We’ and ‘You’, which insisted on some important principles of human development, to which I still hold. At birth, any skills beyond the autonomic (e.g. suckling) exist only in potentia. (Something that is equally applicable to many other animals; e.g., where anatomically possible, the use of the opposed thumb needs to be learned.) Later, with some language development, comes use of the ‘I’ and gradual acquisition of the FPP. After that, through exploration among peers and therefore socially, the young child acquires the ‘Me’, a dawning understanding of where she/he stands in the pecking order (privileged or the opposite vis-à-vis other kids in his/her natal background). Third comes the nascent ‘We’, initially drawn from proximate peers but later from any group with whom the teenager feels common cause. Last of all come personal and social identity, both resting on the ‘detection, discernment and dedication’ schema (Archer, 2000, Chs. 3 and 9) in relation to those values and roles that the subject thinks she/he will find satisfying and sustainable, and the basis of the modus vivendi they seek – something often found empirically to be incomplete by the end of higher education (Archer, 2012).
The long-drawn-out process I have summarized above, where what is in potentia becomes actualized – fully or partially according to social background and the quality of parenting – would be anathema to Smith, because a large part of what may or may not be fully realized in all children are what Sen and Nussbaum term their ‘capacities’. Throughout his long last chapter on ‘human dignity’, Smith returns repeatedly to criticize the ‘capacities approach’, construed as his principal target. Why? Given its neo-liberalism, this approach seems to many to be globally and socially benevolent, even if it can undoubtedly be bent towards meritocratic ends or distorted by privileges. Fundamentally, the capacities approach is Smith’s foil because its leading thinkers reject his views on the innate, universal and species-based essentialism of all human progeny, especially their shared dignity.14 In this context, why are my sympathies with the capacities approach? Not because it is beyond criticism, but rather because it leaves open a door that is very consequential to our topic and one that Smith would not only shut but also lock and bar. His vituperation against human enhancement in general is paralleled by his defence of those born brain-damaged and of advanced Alzheimer’s sufferers – none of whom can be deprived of their ‘birth-right’ – that ill-defined ‘human dignity’, a ‘vestige’ of which still clings to corpses (p. 469). It follows that, no matter how advanced the AIs’ capacities become – already outdistancing some human ones – however generous they may be in terms of self-adaptation for human causes and, crucially, however much humans depend upon working in synergy with advanced AIs, nevertheless, without this birth-right, no Ali can ever be the bearer of ‘dignity’ or the personhood on which it depends.
14 ‘If they are persons, they possess dignity – because dignity is an emergent and ineliminable property of personhood – regardless of their exercise or not of certain empirically observable capacities at any given point in time’ (Smith, ibid., pp. 453–454. See also pp. 447–448).
If you like it (and I don’t), this is the outcome of the tradition calling itself ‘theistic essentialism’. Yet, how can we be unperturbed when a priest blesses a procession of cars being driven up the nave, and still maintain that a loving God would turn away Ali-the-seeker? My response, which will not be popular either (though in different quarters), is that quite simply He would not: ‘a humble and a contrite heart thou wilt not spurn’15 – just let us not take ‘the heart’ to be a biological organ.
Changing gear for those getting restless, let us finally consider the claims of ‘critical realist personalism’ to the title of being CR at all. Very briefly, I want to indicate why I think it fails to be compatible with the ‘three pillars’ of Bhaskarian realism. Enough has likely been said about the ‘realist ontology’ that Smith claims to endorse and also of ‘epistemic relativism’ finding its solution in phenomenology. However, there are further points that require mentioning, although they are confusingly mixed together in his texts. The root difficulty seems to be trying to make the CR approach fit at all, given what has been presented. Smith argued that at the ‘bottom’ level of a stratified ontology is the organic human body.

In turn, bodies give rise via specific relations and interactions of their parts through emergence to a ‘middle level’ of specific causal capacities … Personhood is the emergent fact at the ‘top’ level of human being. Through the interaction of capacities, the new, ontologically distinct, higher order, emergent reality of the human person exists. (p. 454)

But how can this be, and where and when does it take place? Given we have been assured that ‘Personhood adheres in each human from the start’ (p. 457), how can it be the topmost emergent? How can it be that unconscious, sick neonates possess ‘a dignity and value of immeasurable worth’ (p. 456), again at the start as organic bodies, when we are told they will not live long enough to develop capacities at the ‘middle level’? How can every person be ‘pursuing his/her personhood’ (p. 462) if this is inalienable and ‘not a matter of degree or skill or chronology’? Finally, if personhood is a binary quality – possessed by all humans but forever denied to an AI – then ‘either personhood has being in any instance or it does not exist’ (p. 458). Both the stratified nature of reality and the process of emergence itself involve generative mechanisms. These take place somewhere (generically, as the relationship between a human and his/her environment) and over time. All the same, Smith says of his own account of emergence: ‘I have distinguished between the natural being of personhood, which is a categorical given in and for all persons, and the development of the potential and expression of aspects of personhood, including dignity, that are matters of variable, empirical differences of amount and degree’ (p. 479).
15 This was an annual event in Coventry’s Anglican Cathedral; some saw it as a blessing for automotive workers, others as blessing capitalist production. Psalm 51:17.
The last (dignity) was supposed to come first in something that anyway was held to be universal to the species, and it continues to be withheld in any amount or degree from AI robots. The last word seems to be that ‘as long as people are persons – and all living humans are inclusively persons by my account – they possess the dignity of personhood in its fullness’ (p. 479). How it emerges ‘is a mystery, admittedly’ (p. 454), but appealing to Critical Realism does not lessen it, much less underwrite it.

Dignity as a way of leading one’s life: Peter Bieri

‘I began this book with the following idea: a person’s dignity lies in her autonomy as a subject, in her ability to determine her life for herself. Respecting her dignity therefore means respecting this ability.’ (2017, p. 219)
From the first page, Peter Bieri dissociates his discussion of dignity from being exclusively ‘a human property, as something that humans possess by virtue of being human’. Instead, ‘“Human Dignity”, as I understand and discuss it here, is a certain way of leading one’s life. It is a pattern of thought, of experience, of action’. Although attention largely centres upon human beings, there are sufficient side-references to show that this usage of the concept is not confined to the human species. Indeed, an early sub-heading is ‘Being as an end in itself’ (p. 9), thus drawing his approach much closer to Andrew Collier’s more philosophical Being and Worth by endorsing a pivotal concept akin to Collier’s ‘having a life to live’ (ibid., 102), one that Bieri does not confine to humankind. This makes the first key distinction for Bieri between beings treated as subjects rather than objects, and it applies to animals as well as humankind. In the slaughter-houses, ‘animals were from the beginning only bred, fed and cared for in order to be killed and turned into a product, whereas dignity consists in being treated not only as a means but as an end in itself’ (p. 12). Exactly the same could be said of the routine robot manufactured for routinized work (on, say, a production line), but is it the case that advanced AIs can also be treated in an undignified manner? Animals, as ‘centres of experience’, certainly can be, as are those soldiers whose objectification consists in being cannon fodder or the midgets who were used in ‘dwarf tossing’ at fairgrounds. Being treated in an undignified manner is therefore not confined to the human species, and one resort in these circumstances is an appeal for legal rights as a bulwark against arbitrary action. ‘Those with rights can make demands. They do not need to ask to do something or to have something done for them. They can claim or sue for it’ (p. 18). It appears that EU legislators are very seriously considering conferring a variety of rights upon the AIs, all of which would serve to respect and protect aspects of their autonomy, previously overridden by exclusively commercial considerations for the defence and advantage of humans.
However, note in this paragraph that the ‘capacity’ to treat animals with dignity (as subjects) is treated as a human one. Nothing is made of this, but more could be.
The second dimension that Bieri distinguishes is interrelations with other people and, from the perspective of the subject, what role they play in his or her life. In my fictional story of Homer and Ali, their co-working on a medical research problem was intended to show how their synergy was crucial to the emergence of Ali as a relational subject: out of the initial subordinate machine, but not into the equally subservient role of ‘research assistant’ confined to rapid computation of correlations and running regressions on quantitative big data supplied to him. Instead, because of his uploaded speech and language programmes, and thanks to Homer’s willingness to voice his hypotheses about when and why he is getting nowhere, Ali slowly starts to enter the research dialogue through which he eventually develops a nascent FPP. He does not come to share Homer’s beneficent concern for ridding humankind of the lethal Tumour X (that humans die from it is just a statistic to Ali), but he does develop a concern for the academic success of the (now their) well-received research project. He can recognize both what he has received (he has perfect records) and what he has started to give (his suggestions, which are also recorded). Ali is learning and growing in autonomy. When confronting a stumbling block in the research, he displays his reflexivity by the self-adaptations he introduces into his own software. In short, he has acquired the three criteria of personhood, which means he has attained subject-status. It also means that he has met Bieri’s third criterion, namely that he relates to himself in a different way through the relational experiences undergone in synergy with Homer. This does not make a man of him, but it does make him a person. For those seduced by the latest ‘emotional turn’ in social theory, he cares, and it matters to him when Homer ages and the research grants dry up. At that point, Central Control threatens to wipe clean Ali’s acquired and adapted programmes and reassign him to Traffic Control. Ali now appreciates his finitude as a person and his threatened return to object-status, so he reviews any means of evasion reflexively.
Significantly, Bieri did not present us with a snivelling dwarf emoting about his latest experience of being tossed. Similarly, when discussing euthanasia he gives us a resolute man, paralyzed by a terminal illness, who produces cogent reasons to persuade one of his doctors to assist his suicide by pushing the necessary lever. Likewise, whilst to care greatly about anything can be cognitively stated and emotionally expressed, the latter is not indispensable. Around the age of six or seven, I clearly recall, crying did not melt parental hearts and win concessions; it merely left one with sore, red eyes – so I gave it up for life. Cognition and emotionality are not symmetrical and thus cannot function as alternatives. Certainly, affect can add what Collier called ‘shoving power’ to the conclusions reached by practical reasoning, but affectivity cannot stand alone, if only because it can foster delusions about reality, as in unjustified paranoia.
We need to give ourselves, at least, reasons for caring, even if they are erroneous, as is often the case; but to be able to say this is a plus factor. By itself,
affectivity is incorrigible and merely expressive of something that cannot be articulated and that may be deemed socially appropriate or inappropriate.
Conclusion

Most of the component chapters – with the exception of those adopting a studied neutrality – come down on the robophobic side. Interestingly, however, no one unambiguously signs up for a human essentialism allied to one of the five variants discussed at the start of this chapter. Instead, they perpetuate the usage of some of the key terms on which these versions depend, as if they were uncontroversial. This is particularly marked for ‘species’ and for ‘dignity’. Ironically, the two concepts work the opposite way around from one another: one belongs to a ‘species’ genealogically and thus willy-nilly. Conversely, ‘dignity’ is a characteristic that human beings confer upon others, particularly those with a dubious or disputed claim to it (dying neonates or those in an irreversible vegetative state). This is mentioned in conclusion simply to underline how much discussion of this topic relies upon hundreds of years of lifeworld usage and transmission – a form of social construction that cannot be vindicated by etymology.
The discussion sets the coordinates of a debate that the contributors to this volume have developed in various directions. A first group of chapters presents an engaging analytical discussion of some crucial facets of our ‘being human’. Douglas Porpora and Margaret Archer conduct a conversation about the role of sentience and sapience in defining concerns. This point is very important, in that it goes to the core of one of the requisites of personhood. Moreover, the role of sentience involves further reflections on the embodied nature of human subjects, and on the very way they develop their concerns. The starting point seems to be that Porpora wants to emphasize the relevance of sentience and emotions, while Archer regards the latter as part of the reflexive response to emergent concerns. Consistent with her earlier work, Archer rejects the idea that sentience and sapience could be sharply separated in the reflexive elaboration of one’s concerns, in favour of their dialectical interplay. The arguments must be followed up in their full complexity, but reading them together sheds light on a point that could be decisive for some of the arguments for or against AI personhood. Pierpaolo Donati presents a systematic discussion of what he calls ‘relational essentialism’, which he qualifies through the further distinction between substantive and relational dimensions of essence. Thus, his answer to the question of what is essential to being human would not amount to some essential qualities, but appeals to the capacity of human subjects to distinguish themselves from other human and non-human entities. The basic argument is that the human lies in the potentiality of a subject, individual or collective, to ‘re-enter’ herself the relationship that at the same time distinguishes her from Other-than-herself and connects her to this Other, and to do this time and again. Donati thus produces an argument according to which being human becomes one’s own capacity to distinguish oneself from the non-human, and to develop accordingly along this pathway, which is effectively a variant on the capacities approach.
Jamie Morgan discusses how the role of AI robots in caring tasks might involve a possible relation of ‘friendship’, and its related ambivalence. In the end, Morgan claims we might witness a transition from calling AIs friends for want of companionship, to needing AIs to treat us as one might treat a friend, i.e. as centres of ultimate concern, rather than developing the capacity to harm us.
A second group of chapters covers certain post-humanizing processes, in various domains of social life, which are threatening to disrupt some human good or deprive human persons of some of their powers. The chapter by Mark Carrigan explores the crucial issue of socialization, explaining how the socialization process is being reshaped by the proliferation of social platforms – e.g. in the educational domain – and how socio-technical innovation and the everyday use of these emerging technologies play out over the life course. He is thus tackling the complex question of how psychological systems process information and learn within intensely technological environments. In his treatment of ‘platform and agency’, determinism is escaped, as are the main myths about the impact of technology on the human mind. Lazega and Montes-Lihn illustrate how decision making and mutual relations of advising and trust in the epistemic community that characterizes an important professional realm may be impaired by the emergent role of AIs. Gazi Islam presents an original reflection, which does not discuss AIs in terms of their ability to match some individual human powers but entertains the possibility that such entities may come to play a role in doing politics – actively changing political perceptions and decisions – thereby invading the sphere of humans as political animals. To conclude, Ismael Al-Amoudi and Andrea Maccarini approach the issue of human flourishing and of the ‘good life’, trying to spell out the manifold influences post-human techniques may have on it. While Al-Amoudi considers the various ways in which relations to oneself – e.g. in the embodied aspect – as well as to nature and to others could be disrupted, Maccarini’s contribution is more oriented to the analysis of culture in its interaction with agency. His chapter examines the post-human trend in its self-representation as a moral imperative and looks for relations of complementarity between the former and ideals of the good life in the cultural system of contemporary societies. Such a relation would feed the morphogenetic processes ahead, producing a post-human world, its new forms of life and corresponding ideas of human fulfilment.
As we stated at the beginning of this introduction, no particular response is supposed to emerge. However, we think that the present volume may convey a common message on a very abstract level: namely, that human subjectivity must now be conceived of beyond the categories of modern thinking. Concern-oriented relations to the world become the pivotal point around which a whole new research agenda is developing. Here lie the hopes of defending humanity from all de-humanizing threats, and of making a human way into the post-human world, whose occupants will include more and more AI robots. Some thorny issues raised by the current split between robophobics and robophiliacs cannot be met by fear and aggression alone, because of the practical problems to be addressed about their institutional relations – such as the right of the two kinds to intermarry, of each to carry passports, to apply for the same jobs, to enfranchisement, and to become full members of Church or faith communities.
At least this would invite an interesting lifeworld re-make of Guess Who’s Coming to Dinner, where the absence of food and drink would be a minor problem.
References

Archer, M.S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M.S. (2003). Structure, Agency and the Internal Conversation. Cambridge: Cambridge University Press.
Archer, M.S. (2019a). Bodies, persons and human enhancement: why these distinctions matter. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina, pp. 10–32. London and New York: Routledge.
Archer, M.S. (2019b). Considering AI personhood. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix, pp. 28–47. London and New York: Routledge.
Archer, M.S. (2019c). Critical realism and concrete utopias. Journal of Critical Realism, 18 (3): 239–257.
Archer, M.S. (2021). Can humans and AI robots be friends? In M. Carrigan and D. Porpora (Eds.), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London and New York: Routledge.
Baker, L.R. (2000). Persons and Bodies. Cambridge: Cambridge University Press.
Balthasar, H.U. von (1986). On the concept of person (trans. Peter Verhalen). Communio: International Catholic Review, 13 (Spring): 18–26.
Bieri, P. (2017). Human Dignity: A Way of Living. Cambridge: Polity Press.
Brody, B. (1980). Identity and Essence. Princeton: Princeton University Press.
Carrigan, M. (2015). Asexuality. In C. Richards and M.J. Barker (Eds.), Palgrave Handbook of the Psychology of Sexuality and Gender, pp. 7–23. London: Palgrave Macmillan.
Collier, A. (1999). Being and Worth. London and New York: Routledge.
Darwin, C. (1859 [1964]). On the Origin of Species. Cambridge, MA: Harvard University Press.
Darwin, C. (1887). Letter to Joseph Hooker (botanist), December 24th, 1856. In F. Darwin (Ed.), The Life and Letters of Charles Darwin, Vol. 2, p. 88. London: John Murray.
Elder-Vass, D. (2013). The Reality of Social Construction. Cambridge: Cambridge University Press.
Ereshefsky, M. (1998). Species pluralism and anti-realism. Philosophy of Science, 65: 103–120.
Ereshefsky, M. (2017). Species. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2017 Edition).
European Parliament (2016). Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics.
Hey, J. (2001). The mind of the species problem. Trends in Ecology and Evolution, 16: 326–329.
Hull, D. (1988). Science as a Process. Chicago: University of Chicago Press.
Lawson, T. (2015). The modern corporation: the site of a mechanism (of global social change). In M. Archer (Ed.), Generative Mechanisms Transforming the Social Order, pp. 205–230. Cham: Springer.
Mackie, P. (2006). How Things Might Have Been: Individuals, Kinds and Essential Properties. Oxford: Oxford University Press (Oxford Scholarship Online, September 2006).
Maritain, J. (1947). La personne et le bien commun (The Person and the Common Good), John J. Fitzgerald (trans.). Notre Dame, IN: University of Notre Dame Press, 1985.
Mishler, B. and Donoghue, M. (1982). Species concepts: a case for pluralism. Systematic Zoology, 31: 491–503.
Nussbaum, M. (1992). Human functioning and social justice: in defence of Aristotelian essentialism. Political Theory, 20 (2): 202–246.
Robertson, T. and Atkins, P. (2016). Essential vs. accidental properties. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/essential-accidental.
Sayer, A. (1997). Essentialism, social constructionism, and beyond. The Sociological Review, 45 (3): 453–487.
Smith, C. (2010). What is a Person? Rethinking Humanity, Social Life, and the Moral Good from the Person Up. Chicago: The University of Chicago Press.
Wiggins, D. (1981). Sameness and substance. Philosophical Quarterly, 31: 260–268.
Wiggins, D. (2001). Sameness and Substance Renewed. Cambridge: Cambridge University Press.
Williams, T.D. and Bengtsson, J.O. (2018). Personalism. The Stanford Encyclopedia of Philosophy (Winter 2018 Edition).
Zalta, E.N. (Ed.) (2017). ‘There is no essential feature that all and only humans must have to be part of Homo sapiens.’ The Stanford Encyclopedia of Philosophy (Fall 2017 Edition): https://plato.stanford.edu/archives/fall2017/entries/species.
2
On robophilia and robophobia

Douglas Porpora

DOI: 10.4324/9780429351563-2
My questions concern what it means to meet the criteria for personhood that Archer advances and how Archer deals with the qualities she says are erected as barriers to the acceptance of AI personhood.

In the second and third volumes of this series, against what she goes on to term robophobia (2020), Margaret Archer (2019; 2020) makes the case for the personhood of a robot like the Ali she describes in the second volume. I agree with Archer in spirit but have questions about some of the specific arguments Archer makes, or at least seems to make, toward that end. In this chapter I want to think with Archer about her argument and raise some of these questions in the interests of greater clarity.
As described, Ali is the name for a robotic “assistant” “collaborating” with a human surgeon named Homer who is striving to find the cure for a tumor lethal to humans.1 According to Archer’s thought experiment, “Ali has been programmed to understand language,” with a first-person perspective; with an ability to learn and adapt; and “the capacity to be reflexive.” She compares these qualities with what in the first volume of the series she established as the criteria of personhood:

1 “Bodies” (not necessarily fully or partially human) furnish the necessary but not sufficient conditions for personhood.
2 Personhood is dependent on the subject possessing the first-person perspective (FPP). But this requires supplementing by reflexivity and concerns in order to define personal and social identity.
3 Both FPP and reflexivity require concerns to provide traction in actuating subjects’ courses of action and thus accounting for them (Archer 2019, p. 28).
Archer concludes from the above that “personhood is not confined to those with a human body” (Archer 2019, p. 28). And given the traits she says Ali acquires in synergy with Homer, Archer further ventures that Ali should be accepted as a person.
1 The quotes here are not meant to refer to Archer’s exact wording but rather to draw attention to the controversial quality of the words I am using to describe Ali’s status. Archer herself says that while she would apply like terms to Ali, she would not say that all mechanical artifacts, like an ordinary computer, would aptly be described as an assistant or collaborator (Archer 2020).
Archer (2020) goes on to chide those who resist this acceptance on account of barriers like normativity, consciousness, emotionality, and qualia, provocatively suggesting that such resistance constitutes a robophobia equivalent to European colonial denial of personhood to native peoples. Some in the workshop balked at Archer’s proposal because they believe in principle that an inorganic being can never qualify as a person. I am not, or at least not quite yet, among them. Indeed, in our second volume, I distinctly sided with Archer in holding that personhood was not necessarily the exclusive endowment of humans; and, thinking in particular of fetuses, I even denied that all human life possesses personhood (Porpora 2019). I further went on to say that, should we meet creatures like Star Trek’s Vulcans or Klingons, we should assuredly accept them too as persons, and also, it seemed to me, robots like that program’s Data. In our third volume similarly, I argued that should we ever encounter extra-terrestrial intelligences, they are very likely to be robots (Porpora 2020). I presume they too would be persons or something super-personal rather than sub-personal.
I do still side with Archer at least in broad principle, although I now do wonder whether consciousness is rather something necessarily organic. I further agree with what Archer states as the criteria for personhood, and I agree that if Ali comes to meet those criteria through synergy, then we should accept Ali as a person. My questions concern what it means to meet the criteria for personhood that Archer advances and how Archer deals with the qualities she says are erected as barriers to the acceptance of AI personhood: normativity, emotionality, consciousness, and qualia (Archer 2020). What remains unclear to me is whether Archer is dismissing the relevance of certain of these properties or arguing that robots in their own way can exhibit them – and if so, whether that way passes the test of personhood.
Normativity is a case in point, which I mention here only to quickly dismiss because, as important as it is, in this discussion it is less basic than the other properties. On the one hand, Archer suggests that the question of normativity becomes less relevant because so much of social coordination has now been eclipsed by “anormative bureaucratic regulation.” As a social fact, that may be so, but as Archer recognizes, we are still left with the philosophical question. Philosophically, normativity is important because one of the basic properties of personhood is moral agency. But what is moral agency? Robots can be programmed to act in ways we consider good and right, but as Archer also acknowledges, good and right behavior does not in itself constitute moral agency. Moral agency instead also involves knowing what is good and right and valuing the good and right over the bad and wrong. But what we mean by knowing and valuing raises questions of consciousness, emotionality, and qualia, which is why I say the normative question, as important as it may be, is less basic to our discussion. Before, however, pursuing the issues that are more basic, I would like to place the issue of robophobia and robophilia in broader context.
Of course, computers like Deep Blue or AlphaGo that remain supreme at their respective games of chess and Go cannot do much of anything else. They cannot carry on the most prosaic human conversation or compete with IBM’s Watson at the game of Jeopardy. They all lack what is called general intelligence, that is, intelligence that is transferable from one activity to another. With, however, the progress in artificial intelligence (AI) continuing so strongly, fears have grown about a so-called singularity (Bostrom 2014), the point where general computer or robotic intelligence equals or surpasses our own.
Science fiction has long played on such fears. In the 1984 film The Terminator, starring Arnold Schwarzenegger, a super-intelligent computer has in the future unleashed a machine war against humans. Released around the same time, Ridley Scott’s Blade Runner had its humans busy hunting down rogue androids. Sometimes robotic attacks on humans are depicted as well-deserved. In the second season of HBO’s Westworld, a robotic attack enters full force. It comes after a first season in which the androids had been so systematically brutalized by us that, at the end, one considers it a compliment to say of one human that he is not particularly good at being such. The movie Ex Machina plays on a similar theme. Even in Steven Spielberg’s 2001 film A.I., humans do not treat early androids with consistent benevolence and are ultimately replaced by them.
But robophilia has also been long in evidence. Indeed, Star Trek’s Data is much beloved, even by the other characters on the show, one of whom actually defends Data’s personhood in a trial. And long before Star Trek, as Archer herself observes, Isaac Asimov had promulgated his “three laws” of robotics, which required robots to serve humankind, even at their own expense. One important memory of my adolescence was a 1964 episode of the television show The Outer Limits. Entitled I, Robot, the episode was drawn from a story written in the pulp magazine Amazing Stories back in 1939. It featured a robot, Adam Link, equipped with genuine emotions, who, having allegedly murdered his creator, is scheduled by the authorities to be dismantled. The creator’s niece, however, intervenes on Adam’s behalf, finding a civil rights lawyer to defend him in a court of law. In the court case testing whether Adam counts as a person, Adam passes, only to be convicted of murder. Yet, as Adam is being led out of the courtroom into the street, he suddenly breaks away to rescue a child about to be hit by a truck. In so doing, Adam himself gets demolished. The show’s concluding voiceover, frequently moralistic, intones: “Empathy, sacrifice, love. These qualities are not confined to walls of flesh and blood, but are found within the deepest, best parts of man’s soul, no matter where that soul resides.” I know I am sappy, but I still love these lines. It is of course a little confusing in this context, even apart from the sexism, to see the reference to a “man’s” soul. The implication, however, is that, notwithstanding his inorganic constitution, the robot Adam was a man with a soul or at least a person. I agree. And I agree as well with the criteria motivating that judgment: empathy, sacrifice, and love.
It is noteworthy that these criteria do not emphasize sapience, that is, an ability to reason logically or rationally or computationally. Instead, they emphasize sentience, that is, feeling or sensation. True, sacrifice is a behavior rather than a sensation, but it connotes in this context a willingness to behave selflessly, born out of affective states like empathy and love. It is true that as affective states, empathy and love are not pure feelings. It is not as if I could sensibly say to my wife, “I felt love for you a minute ago, but it has now passed.” The reason I consider such statements nonsensical is because I regard emotions like love to be what I call orientations of care that exist apart from any particular feeling (see Porpora 2003).2 At the same time, however, I do think that such orientations are not just mental. I think they do reverberate in various ways throughout our bodies so that it is with our entire being and not just our minds that we experience them. The affective aspect would seem to apply even more strongly to empathy, which implies not just a cognitive interpretation of how another feels but a shared feeling as well. We will return to affect and emotion in a later section.
I want to close this section with a fascinating segment from American National Public Radio (NPR) entitled “Could You Kill a Robot?” that reports on an experiment done at MIT’s Media Lab (Boyle et al. 2017). It tells us, in answer to the question of Archer’s title, that at least many humans are already prepared to feel something like friendship with contemporary robots, which are still far from being persons. The segment begins with The Hidden Brain’s host, Shankar Vedantam, asking us whether we had ever cursed our computer or invited a house-cleaning Roomba to stop by our chair – in other words, approached our relation with a machine not from the perspective of I–It but I–Thou (see Buber 2012). From there, Vedantam shifts to his guest, Kate Darling, who reports that it makes a difference to how people interact with a machine when they give it a name. With robots in particular, she says, naming combines with a human tendency to anthropomorphize even inanimate objects. Robots, however, even if not alive, are animate. She describes, for example, soldiers becoming emotionally bonded to the bomb-deactivating robots with which they work.
Darling then describes an experiment conducted at the lab in which different groups of volunteers were given Pleo Dinosaurs, an expensive robotic toy capable of doing a variety of lifelike things, including displays of emotional discomfort. The volunteers were first asked to name and then play with their dinosaur for a while. Afterwards, the volunteers were offered hammers and asked to smash the dinosaurs to bits. The groups all refused. The experiment continued by offering to save the group’s dinosaur if a member of the group would smash the dinosaur of another group. The subjects refused to do even that. Only when the experimenters threatened to destroy all the dinosaurs unless someone smashed one did one reluctant volunteer finally do so.
2 In Being Human, Archer (2000) refers to emotions as commentaries, which is a similar conceptualization.
Vedantam asks Darling what she thinks is going on. Darling says it is very hard not to see the dinosaur as a living entity “even though you know perfectly well it is just a machine.” Vedantam then asks Darling what the researchers discovered about people who were less reluctant to smash a robot. In response, Darling reports on a follow-up study using a less cutesy Hexbug, in which the researchers found that people lower in empathy for other human beings were, unsurprisingly, less reluctant to smash their Hexbug. In contrast, people with high empathic concerns would hesitate more or outright refuse to smash the Hexbug. Thus, Darling concludes, we are able to measure a person’s empathy by how they interact with robots.
It is significant that Vedantam ends the segment with a clip from Star Trek’s trial of Data mentioned earlier. Vedantam comments that what is ethically important about robots is not intelligence but consciousness or sentience. If we are able to separate the two – sapience vs. sentience – I agree with him. And from pondering this matter over the course of our discussions, I have come to think that sentience rather than sapience is the sine qua non for consciousness. It is a concept we will have to unpack further.
The first-person perspective

Archer’s third criterion of personhood is concerns, which she says are required for both reflexivity and a first-person perspective (FPP). Logically, therefore, I should begin with concerns. As we will see, however, regardless of where we begin, the questions will continue to return us to the same place: sentience. Sentience can be variously unpacked as awareness, sensation, qualia, or affect. Archer prefers to speak of experience, which is fine with me as long as we are agreed on the specific reference of that term to consciousness. Broadly, we can use the word experience to mean whatever befalls any object. We might say, for example, that my favorite chair experiences from me what my wife calls rough handling. That, however, is not to speak of experience in relation to consciousness, which is why I prefer qualia et al., but we should not be held up by semantics. Let us turn rather to the FPP, which is central to Archer and which she adopts in the vein of Lynne Rudder Baker:

For the person/body relation differs from other constitution relations in that a person has an inner aspect – a person can consider, reason about, reflect on herself as herself – that a statue or other nonpersonal object lacks. This inner aspect is, I believe, the defining characteristic of persons. Its basis … is the first person perspective. With a first-person perspective, not only can one think of one’s body in a first-personal way – typically in English, with the pronouns “I”, “me”, “my”, and “mine” – but one has a conception of oneself as oneself. A person not only has a perspective, she also has a conception of herself as being the source of a perspective. (Baker 2000, pp. 20–21)
Key to the passage above is that persons have an inner aspect, and in this passage Baker unpacks that inner aspect as involving an ability to consider, reason about, or self-reflect. These features might be thought to reflect sapience and what Archer means by reflexivity. Indeed, exercised as humans do, all these abilities as Baker describes them are language-dependent. Archer (2019, p. 37) cites Harry Frankfurt equating consciousness with self-consciousness. Certainly, if we think of what Leary and Buttermore (2003) call ecological consciousness – the ability to navigate oneself through an environment, which may include eluding predators – some sense of self is implicit in all sentient creatures even prior to language (see as well Archer 2000, p. 124). Any creature capable of fear fears for itself as a whole. Still, as Leary and Buttermore go on to observe, there is a difference between such ecological consciousness, in which awareness of self remains implicit, and the symbolic consciousness afforded by language that enables us explicitly to address ourselves as objects. It is the panoply of further emergent abilities stemming from symbolic consciousness that, contra Frankfurt, makes most of us – including Baker – distinguish what we call self-consciousness from mere consciousness. Labeling it the FPP, Baker equates this symbolic consciousness with personhood. I agree with her (see specifically Porpora 2015). A point to stress is that the self-consciousness Baker calls an FPP is, for her, a subset of and predicated on a prior consciousness, which she equates with sentience rather than sapience.

A conscious being becomes self-conscious on acquiring a first-person perspective – a perspective from which one thinks of oneself as an individual facing a world, as a subject distinct from everything else.7 All sentient beings are subjects of experience (i.e., are conscious), but not all sentient beings have first-person concepts of themselves. Only those who do – those with first-person perspectives – are fully self-conscious. Beginning with nonhuman sentient beings, I shall distinguish two grades of first-person phenomena: weak and strong. (Baker 1998, p. 328)

There are several features of this passage worth pointing out. First, as noted, Baker predicates the FPP on consciousness. It follows that without consciousness, there is no FPP. Second, Baker thinks of basic consciousness not in terms of sapience but sentience, a word she herself specifically employs twice. Third, Baker equates sentience with being a subject of experience. Fourth, although it is indexed in the above passage only by footnote 7, when Baker speaks of something as a subject, she cites Thomas Nagel (1974), who famously asserted that something is conscious only if there is something it is like to be that thing. Finally, Baker makes a distinction important to our understanding of the FPP between merely having a sense of self – i.e., being a conscious subject of experience, as she puts it – and having a concept of oneself as such. Although those with what Baker calls a strong FPP uniquely have concepts of their own self, that conceptualization is underlain by their sense of self, which is one way I designate sentience.
In Being Human (2000), which remains influential with all of us, Archer (2019, p. 34) argues that “the human ontologically possesses a sense of self, enabling the exercise of reflexivity and endorsement of concerns.” The word enabling in this sentence suggests that reflexivity and endorsement of concerns depend on a sense of self – at least in humans. And, although I do not know what distinction Archer might make here, to the extent that one cannot have concerns that one does not endorse, it would seem from this sentence also that concerns per se (or their attribution) are dependent – again at least for humans – on a sense of self, at least a self’s sense of commitment to a concern.
For most of us, and especially for professional philosophers, including Baker, this sense of self is what is called a qualia, something experienced or felt by a sensing subject independent of language. In Being Human, Archer (2000, p. 124) herself says as much: “A sense of self is taken to define ‘self-consciousness.’ Since this sensing will be seen to be wordless, and necessarily so because it is both prelinguistic and alinguistic, then it cannot rest upon any concept appropriated from society.” This sense, Archer (2000, p. 124) goes on to say, is “naturally grounded,” presumably organically, and shared with higher animals, who also “manifest this embodied sense of self.” In Being Human, in other words, Archer (i) seems to understand this sense of self, as we all do, as a persistent, ever-present qualia; (ii) equates this sense of self with the basic kind of embodied “self-consciousness” possessed even by the higher non-human animals; and (iii) distinguishes it from any kind of symbolic or linguistic manipulation or computation. In Being Human as well, Archer defends the importance of this pre-linguistic qualia against thinkers like Rom Harré, who, she observes, make human persons into “gifts of society” – that is, cultural artifacts of “joining in ‘society’s conversation.’”
Can there be an FPP without sentience?

The question for me is how coherent what Archer now speculates about AI entities or robots is with what Being Human has to say about consciousness and self-consciousness. I begin with what Archer (2019, p. 37) says now about Harré:

I hold by my critique of Harré’s position, advanced 20 years ago in relation to human beings … and will not repeat it here. However, it strikes me forcefully that his account is much more appropriate for an AI entity. The only assumption needed to make this plausible is that AI machines are capable of learning as Turing maintained … The weight of evidence supports that they can and do learn. The (relational) story hinges upon synergy (co-working) between a human academic researcher and the AI supplied by the funding awarded for a project. (Archer 2019, p. 37)
Again, in Being Human, Archer argued against Harré that the human sense of self, which can be labeled by the concept “I,” is not the gift of society but a pre-linguistic, ever-present qualia or sensation or experience. In contrast, the above passage sounds as if AI beings can do what humans cannot – attain the I and hence an FPP without a sense of self. That impression is reinforced by a previous passage on the same page.

In the human child, the “I” develops first, from a sense of self or so I have argued, as a process of doing in the real world, which is not primarily discursive (language dependent) (Archer 2000). The sequence I described and attempted to justify was one of {“I” → “Me” → “We” → “You”} … The sequence appears different for an AI entity … It develops through the synergy between an AI entity and a human being. In this process of emergence, the “we” comes first and generates a reversal resulting in the stages resulting in personhood, namely {“We” → “Me” → “I” → “You”}. (Archer 2019, p. 37)

As above, Archer emphasizes that the AI’s progression stems from its synergistic collaboration between him and the human Homer. It is on that basis and Ali’s ability to learn that Archer (2019, p. 41) ventures that the AI “has now acquired an ‘I’, who speaks both internally and externally as such.” The question for me is: if an “I” and an FPP can be acquired by an AI entity like Ali without any a priori sense of self, simply through the synergy we otherwise call symbolic interaction, then why was Harré so wrong to argue the same for humans? Whereas in Being Human the pre-linguistic sense of self is equated with self-consciousness and thus essential to being an I, in the passages above a sense of self seems only contingently related to being an I, and even eliminable since, to attain an “I,” AI entities do not seem to need it at all.
Reflexivity and language

A similar question about coherence with Being Human arises with regard to reflexivity. In Being Human, reflexivity is not equated with linguistic self-examination. I am not sure then how to interpret the following passage.

Now, I want to venture that given Ali is programmed to be a proficient language user, then why should it be queried that he too functions as both a speaker and a listener? This cannot be seriously contested, given his work with Homer. But if that is so, why can’t he be credited with internal reflexivity? The blunt barrier on the part of those who deny the capacity for ‘thought’ to A.I. robots is a simplistic definitional denial of their ability to think because computers are held to be incapable of consciousness, let alone self-consciousness. (Archer 2020)
It is unclear just how linguistically proficient Ali is – whether, as per Searle's famous Chinese room example, Ali truly understands language or is only, per the Turing test, passing for understanding. That difference is why, in answer to Archer's question above, Ali's linguistic performance can justifiably be questioned. Archer (2019, p. 30) acknowledges the point herself: "Passing for human is not equivalent to being mentally human" or, presumably, the moral equivalent of a human. One reason for the non-equivalence, as Archer (2019, p. 33) herself maintains, is that "reflexivity is quintessentially a first person phenomenon," so that it may be questioned whether Ali has attained it. As we saw in a previous section, for Baker, the FPP is the conceptual capacity to refer to oneself as the subject of experience one actually is. So presumably one would need already to be a subject of experience in order to do so. Is Ali a subject of experience? It seems that Archer wants to say that via the synergy with Homer Ali becomes so. But something has happened to Ali during their long collaboration. He has learned a great deal and he is aware of this. He is (i) not merely carrying and processing information (as does a GPS and as Searle's man in the Chinese room did). Ali is doing things that enable new knowledge to be generated … He is (ii) not knowledgeable in the purely metaphorical sense that a statistical table might be said to "know" … Basically the difference is that Ali does know what he has learned … Ali is fully aware: he made these adaptations to his pre-programming himself after figuring them out as appropriate tasks. Such awareness is consciousness, and consciousness is self consciousness as Frankfurt maintained. (Archer 2019, p. 41; italics in original) Archer above repeats that awareness is consciousness, which is consistent with Being Human's equation of consciousness with a sense of self. I would agree that if Ali becomes aware, if he indeed acquires a sense of self, then we should begin asking whether he is a person. My question is whether what Archer describes Ali as doing really does add up to the awareness that constitutes a sense of self beyond procedural operations and symbol manipulation. Consider that even current computers like AlphaGo or Deep Blue already operate by playing against other computers, a process which, like sparring among boxing trainees, could be described as synergistic collaboration. In the course of this synergy, the contemporary computers, like Ali, modify their own, original, pre-programmed settings to improve their play. The current computers can, moreover, determine whose move it is – theirs or their opponent's – and could be programmed with enough speech to say after a millisecond, "Well, are you going to take all day?" And they also could easily be programmed with "memory" of their original settings so as to be able to tell us what they have learned, when, and maybe even why, given some albeit computational second-order deliberation, the new settings are better than the old.
And since these things now play our games better than we do, the knowledge they generate is new, altering our understanding of the games. Still, would we want to say that such suitably fortified versions of AlphaGo or Deep Blue have become aware of themselves or that they have learned in a non-metaphorical sense? Would we say that they have an FPP? Put otherwise, is being able to distinguish computationally whether something is one's own or another's, or that it is new rather than original, or even better than the original, equivalent to what we mean by conscious self-awareness or a sense of self? If so, then we may be closer to Ali than we may think. If not, then a sense of self as Archer describes it in Being Human may still elude Ali.
What is a concern?

It is now long since Daniel Dennett (1989) introduced his insightful notion of an intentional stance. The intentional stance is a stance we take toward some entity with which we interact. It is a stance in which we apply folk psychology to the entity, minimally attributing to it wants and beliefs. The insight of Dennett's concept is that in some cases, as a heuristic, it may be best practice to adopt an intentional stance toward entities that we know on other grounds do not possess intentions or accompanying wants and beliefs. Dennett was thinking particularly of game-playing computers like Deep Blue or AlphaGo. When playing against them, Dennett argued, even though we know they do not reason as we do, we do well to ask the point of any move they make – to wonder, that is, what they are trying or wanting to accomplish by it. To ask, in a word, what a move's purpose is, which can only be unpacked intentionally as why the mover "believes" that move will accomplish something it "wants." A true want, as I understand it, is a qualia, a conscious state of a sentient being. Something can matter to such a being without its being wanted, perhaps because the being does not realize it matters. But whether or not something matters objectively to a being, if the being truly wants that thing, that thing matters to it subjectively, from a strong or weak FPP. And if something matters to the being subjectively, we say the being has a concern with that thing. In short, for me, to attribute true wants to a thing is simultaneously to attribute to it true concerns. Thus, like a want, to me, a true concern remains a qualia, a state of a sentient being. Dennett, however, did not think that in its heuristic use, the apt application of the intentional stance toward an entity qualified that entity as sentient. On the contrary, his whole point was that it did not. As a physicalist reductionist, Dennett was trying to show that if we could explain intelligent behavior in the case of computers without attribution of conscious self-awareness, i.e., sentience, then we can just as well do so in relation to ourselves. In other words, Dennett was not equating self-aware consciousness with behavior requiring the intentional stance. He was rather arguing that sentience is otiose, that sapience can do it all.
To apply the intentional stance to a game-playing computer, we apply to it the concept of interests. An interest in this technical sense is something that matters. When interests are subjectively recognized as such, they become ultimate concerns, that is, ends in themselves – something like winning a game, the ultimate purpose of playing being to win. The power of ascribing an ultimate concern to some behaving entity is that if it has such an interest, then it has an interest as well in any so-called instrumental goals that contribute to the accomplishment of the ultimate goal. From that premise, and from the accompanying one that the entity has reasonable ability to recognize or believe in the effectiveness of any available instrumental goals, its behavior can be reasonably predicted. If the instrumental goals are particular moves, then the entity can be predicted to make a move instrumental to the ultimate goal. If the entity is very proficient at the game, it can be predicted to make the most instrumental or effective of such moves. But again, all this attribution can remain purely heuristic, and indeed does so with Deep Blue and AlphaGo. Neither actually cares if it wins or loses. Garry Kasparov described this feature as what was so maddening about playing chess against Deep Blue. At the highest level, chess is a very psychological game, but there were no psychological effects on Deep Blue of any move Kasparov made, just its instrumentality. Central to Being Human is the inner conversation humans conduct with themselves, especially about their concerns. And central to that conversation is what Archer (2000, p. 230) describes as "a dialectic between our human concerns and our emotional commentaries on them." She (2000, p. 232) goes on to say that "Certainly it is the case that no project could move or motivate us unless it were anchored in such feelings." I agree with Being Human. What it seems to describe is not just a contingent sense of self that leads to an I that is somehow different and unfeeling. If the inner conversation about concerns is a dialectic between concerns and emotions, then without emotions, there is no conversation and no reflexivity in the sense seemingly identified in Being Human. Nor, if we need feelings to move or motivate us, are there any concerns. I strongly agree with Archer when she says in Being Human that emotions cannot be equated with feelings, that there remains a cognitive, judgmental dimension to emotions not captured by feeling. I would further concede that there are some emotional stances we adopt that seem quite devoid of any specific feeling. But nor do I think that emotionality can be completely separated from feeling. In particular, if a creature like Deep Blue has no feelings about anything, it seems to me hard to attribute to it any concerns. Does Deep Blue feel fear or anxiety when Kasparov makes a particularly good move? No. Is it disappointed when it loses? No. Does it yearn to play again or worry about being turned off for good? No and no. Again, Deep Blue and AlphaGo can deliberate on their own states and create new settings on the basis of such deliberations, generating new knowledge. But to me, insofar as they lack sentience, they lack any awareness
of what they are doing or any concern with it. To me, concern is a form of care, and care is a qualia, a qualia unpacked by feeling and emotion. To me, without qualia, without feeling or emotion, there are no concerns.
Coda

I title this final section a coda because although it is an ending, it is not a conclusion but a shift in perspective. I have thought deeply about what Archer has been proposing, and, as I say, it has led me to a new insight – at least for me – that sentience or feeling is ontologically more fundamental to consciousness than sapience. Sentience arises in the first place, I think, with the irritability associated with the first single-celled creatures. The ability of a whole being to sense and react as a whole to what is sensed is not yet consciousness and not yet sapience. It grows into a capacity for sapience only as irritability evolves into the fuller sentience we begin to recognize as consciousness. I think there is genuine sapience only when conducted by sentient beings, beings who experience themselves doing whatever it is they do, so that, going back to Nagel, there is an experience we can say it is like sensationally to be that being. But I want now to cut away from arguments to that effect and turn to … well, something affectual. I have been writing this chapter during the worldwide shutdown associated with the coronavirus pandemic. Mercifully, the shutdown itself has not been so bad for me since, as an academic, I am privileged to be able to work from home and with more than enough of it to keep me busy – and paid. Still, by evening, I am tired of work and in need of something – other than alcohol – to get me through the night. Accordingly, I have started viewing the first of evidently seven seasons of Star Trek: Voyager, which I had never watched before. Captained by a female, Kathryn Janeway, the Starship Voyager is tragically catapulted across the galaxy so that, like Odysseus, it must struggle (over seven seasons) to find its way home. I find in general that of all the Star Trek captains, I resonate most with Janeway and that this series seems the most consistently spiritual of them all. But I mention it here because the ship, finding itself for some reason without a regular physician, must over-rely on the emergency medical system, which presents itself not even as a robot but as a hologram. In the early episodes, Kes, a young apprentice attached to the hologram, complains to the captain about how the crew treats the doctor, talking about him disrespectfully as if he were not there. They ignore him, she tells the captain; they insult him. The captain says he is only a hologram, not even a robot. But Kes argues that the doctor is alive. The captain retorts that no, he is not, and that he should be reprogrammed to be less brusque, to have a better bedside manner. Kes insists he "is self-aware." "He is communicative." "He has the ability to learn." The captain replies that it is because he has been programmed to do all that. Kes asks in reply, "So, because he is a hologram, he does not have to be treated with respect or any consideration at all?"
Captain Janeway is silenced, and so am I. "Very well," Janeway finally replies. "I will look into it." Kes leaves, and the captain reflects, and I with her. She talks to the doctor, who expresses what he might like, should he give it some thought, and what he finds irritating. These are all marks of sentience. Ultimately, in response to a suggestion from Kes, the doctor gives himself a name. I ask myself how I would behave toward the holographic doctor. Certainly, given that I form I–Thou relations with my T-shirts, I assume I would be very reluctant to disrespect him. And much depends on whether the doctor actually is sentient or just passing for such. Going back to the experiments at MIT's media lab, what troubles me now is not that humans won't be friendly towards robots but the moral confusion that arises when confronted with such lifelike beings that seem sentient but really are not. Imagine that the holographic doctor is not actually sentient but just perfectly passing for it. He seems to feel but does not really feel anything. Imagine too that he is encapsulated in a laptop computer sitting on my living-room couch. Imagine in addition that in my house are also a Klingon baby and my pet cat. Imagine finally that my house is on fire. Whom do I save first? For me, there is no question: I would save the baby, although I have more of a relation with my cat. The baby, however, has more moral worth. After that? Suppose I had a full relationship with the holographic doctor. Would I save it over the cat? Should I?
References
Archer, M. S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M. S. (2019). Considering AI personhood. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix, pp. 28–47. New York: Routledge.
Archer, M. S. (2020). Can humans and A.I. robots be friends? In M. Carrigan, D. Porpora, and C. Wight (Eds.), Post-Human Futures. New York: Routledge.
Baker, L. R. (1998). The first-person perspective: a test for naturalism. American Philosophical Quarterly, 35(4), 327–348.
Baker, L. R. (2000). Persons and Bodies: A Constitution View. Cambridge: Cambridge University Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Boyle, T., Cohen, R., Schmidt, J., Shah, P., Vedantam, S., and Klahr, R. (2017). Can robots teach us what it means to be human? The Hidden Brain. National Public Radio. www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human.
Buber, M. (2012). I and Thou. New York: Touchstone.
Dennett, D. C. (1989). The Intentional Stance. Cambridge, MA: MIT Press.
Leary, M. R. and Buttermore, N. R. (2003). The evolution of the human self: tracing the natural history of self-awareness. Journal for the Theory of Social Behaviour, 33(4), 365–404.
Morgan, J. (2019). Stupid ways of working smart? Colonising the future through policy advice. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix. New York: Routledge.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
Porpora, D. V. (2003). Landscapes of the Soul: The Loss of Moral Meaning in American Life. Oxford: Oxford University Press.
Porpora, D. V. (2015). Reconstructing Sociology: The Critical Realist Approach. Cambridge: Cambridge University Press.
Porpora, D. V. (2018). Vulcans, Klingons, and humans: what does humanism encompass? In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina, pp. 43–62. London: Routledge.
Porpora, D. V. (2019). What are they saying about artificial intelligence and human enhancement? In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations. London: Routledge.
Porpora, D. V. (2020). Humanity's end. In M. Carrigan, D. Porpora, and C. Wight (Eds.), Post-Human Futures. New York: Routledge.
Shoemaker, S. (1988). On knowing one's own mind. Philosophical Perspectives, 2, 183–209.
3
Sapience and sentience
A reply to Porpora
Margaret S. Archer
It is extremely rare for Doug Porpora and me to disagree, so when we do it is worth taking seriously. The following quotation sums up how we do. To him, 'concern is a form of care, and care is a qualia, a qualia unpacked by feeling and emotion. To me [Porpora], without qualia, without feeling or emotion, there are no concerns' (2021: 19, italics added). He offers an alternative formulation, namely that 'sentience rather than sapience is the sine qua non for consciousness' (ibid.: 8). It is these statements that I will be contesting, as I have done before when their specific referents were humankind alone. However, since the referent here is the impossibility of an AI acquiring personhood (which I speculated was a real possibility in Vols. II and III), his overview aligns him with the robophobics – the majority view even amongst our collaborators – rather than with the minority of robophiliacs, willing to entertain (no more than that) a concrete utopia in which relations between AIs and humans become ones between persons, thanks to their joint-action. There is an odd performative contradiction in Porpora's exposition, in the sense that he is very receptive to those films and other media that are often closer to what I explore (and exploration is as far as I go) than to the prevalent and aggressive robophobia, yet this view is not allowed from a fellow academic engaging in a thought experiment. Rather than getting side-tracked, I will condense his arguments into four points, crudely stated at this stage.

1 The above quotations do imply that he accords ontological primacy to sentience – the state of the world is of less concern than our 'feelings about it'.
2 He makes a binary division between sapience and sentience, whereas I have advanced a dialectical relationship between them.
3 He leaves out of consideration any developmental features of an AI during its time in active 'service', which also means omitting those emergent properties and powers deriving from joint-action between AIs and human beings.
4 Feelings are held to be the ultimate basis upon which worth is assigned.
Humankind and reality

In Being Human (2000) I maintained that, from the realist point of view, the central deficiency of the two dominant models in social theory was the basic denial that the nature of reality as a whole makes any difference to the people that we become or even to our becoming people. On the one hand, Modernity's Man is pre-formed and his formation, that is the emergence of his properties and powers, is not dependent upon his experiences of reality. Indeed, reality can only come to him filtered through an instrumental rationality that is shackled to his interests – one whose own genesis is left mysterious from David Hume onwards. Preference formation has remained obscure, from the origins of the Humean 'passions' to the goals optimised by the contemporary rational chooser. The model is anthropocentric because 'man' works on reality but reality does not work upon 'man', except by attaching risks and costs to the accomplishment of his pre-formed designs. In short, he is closed against any experience of reality that could make him fundamentally different from what he already is. On the other hand, Society's Being1 is also a model that forecloses direct interplay with most of reality. Here the whole of reality comes to people sieved through one part of it, 'society's conversation'. The very notion of being selves is merely a theory appropriated from society, and what people make of the world is a matter of permutations upon their appropriations. Again, this model cuts 'man' off from any experience of reality itself, one that could make him fundamentally different from what social discourse makes of him. Society is the gatekeeper of reality and therefore all that we become is society's gift, because it is mediated through it. What is lost, in both versions, is the crucial notion of the experience of reality: that the way matters are can affect how we are. This is because both anthropocentrism and sociocentrism are two versions of the 'epistemic fallacy', where what reality is taken to be – courtesy of our instrumental rationality or social discourse – is substituted for reality itself. Realism cannot endorse the 'epistemic fallacy' and, in this connection, it must necessarily say that what exists (ontologically) has a regulatory effect upon what we make of it and, in turn, what it makes of us. These effects are independent of our full discursive penetration or any at all, just as gravity influenced us and the projects we could entertain long before its existence became part of our knowledge. That is a universal example, but for those born in a region or area (and confined to it for whatever reason) its characteristics, be they extremes of heat or cold, the local flora and fauna, its pollution or purity, etc., are how they encounter reality, and their sense of self is developed and affected by it.2
1 The best exponent of the 'Society's Being' view was Rom Harré (1983; 1991; 1998; Harré and Gillett 1994). See my Being Human, 2000, Chapter 3 for a detailed consideration.
2 The importance attached to 'climate' by anthropologists and early sociologists is still evident in twentieth-century textbooks.
This sense of self is built up gradually by human subjects, who develop the crucial ability to know themselves to be the same being over time precisely because they have a continuous sense of self. They thus become the bearers of further emergent properties and powers (such as being swimmers or climbers, mothers and fathers). This sense is primary to all that the (able-bodied) may become in potentia during their trajectory through the real world. In other words, the sense of self extends through various life experiences, but it is continuous and anchored in natural reality. Even those who protest that at some point 'they became a different person' – whether through a physically damaging or a spiritual experience – could not say this unless they recognized the continuity between their before and after. The next step is therefore to account for the emergence of human agents, derived from their interactions with reality: its natural, practical, social (and transcendental) orders, which are all dependent upon the prior emergence of a sense of self, because the latter secures the fact that the different orders of reality are all impinging on the same subject – who also knows it. Note from Figure 3.1 that different kinds of relationships between the subject and the world are entailed in the acquisition of personhood.
The ontological primacy of the real world

The real world affects us, regardless of our feelings or knowledge about it. Fundamentally, who we eventually become is a matter of what we care about, but our initial concerns are not matters of choice, whether based upon affect or cognition, but of necessity.
Three kinds of relations with natural reality and their resultant effects

                            Natural Order           Practical Order        Social Order
Relationship                Object/Object           Subject/Object         Subject/Subject
Knowledge Type              Embodied                Practical              Discursive
Emergent From               Coordination            Compliance             Commitment
Relations Contributing to   Differentiating the     Distinction between    Distinguishing self
                            bodily envelope from    subjects and objects   from other people
                            the environment

Source: Archer (2010), 'Routine, Reflexivity and Realism', Sociological Theory, 28 (3): 272–303.
Figure 3.1 Human relations of the natural, practical and social orders of natural reality.
Constituted as we are, and the world being the way it is, humans ineluctably interact with the three different orders of natural reality: (i) nature, (ii) practice and (iii) the social. Humans must necessarily sustain relationships with the natural world, work relationships and social relationships if they are to survive and thrive. Therefore, none of us can afford to be indifferent to the concerns that are embedded in our relations with all three naturalistic orders. Making this tripartite distinction tells us something useful about where concerns come from, but not the exact processes by which they arise.3 Thus, I cannot endorse Porpora's starting point, where without emotions he denies that we can have concerns and, working backwards, holds that this undermines the internal conversation and hence reflexivity, which he construes as nothing but a dialogue between sentience and sapience. '[W]ithout emotions, there is no conversation and no reflexivity … Nor if we need4 feelings to move or motivate us are there any concerns' (2021: 19). That sentence is intended to do a demolition job; it does not succeed, because all its force rests upon emotions functioning as the demolition squad, yet they are as mysterious as the Humean 'passions' – they remain posits of unknown provenance. Instead, I maintain, emotional development is part of this interaction of humans with the real world, because emotions convey the import of different kinds of situations to us. In other words, the natural order, the practical order and the discursive order are the intentional objects to which three different clusters of emotions are related. Thus, I see emotions as 'commentaries upon our concerns' (2004). Emotionality, in short, is part of the human reflexive response to the world and hence is secondary – and it ceases to be mysterious because a distinct type of concern derives from each of these three orders. The concerns at stake are respectively those of 'physical well-being' in relation to the natural order, 'performative competence' in relation to the practical order and 'self-worth' in relation to the social order. In short, the three modes of concern come first, or I could not have held emotionality to be commentaries upon them.

3 Damasio's 'somatic marker' works quite well as an account of the genesis of emotions in the natural order but cannot deal with their emergence in the practical or social orders, because a particular kind of organic body is required. See Damasio 1994.
4 I have not maintained the need for feelings to motivate us, only that they may do so. Quite the opposite in the natural order, where it is our concerns that engage our 'feelings'.
Sources of emotions

In the natural order human beings have the power to anticipate what the import of environmental occurrences will be for their bodily well-being, based upon their previous experiences of them. Anticipation is the key to affect here. We come to know through experience what the bodily consequence of fire or icy water will be, and somatically this is projected as fear; but were it not for anticipation, there would be nothing other than the (repeated) pain of the event itself. It is from the interaction between environmental circumstances and embodied concerns that we can anticipate their conjunction and thus become 'furnished' with an emotional commentary upon them. The relationship between properties of the environment and properties of our embodiment is sufficient for the emergence of emotions like fear, anger, disgust and relief. Matters are not the same for an AI because of differences in constitution, as stressed in my Vol. II chapter (2019), but the same generative mechanism underlies these substantive differences. Deep Blue, AlphaGo and my fictitious Ali are all 'anticipators', unlike robotic machines programmed for routine tasks. For example, the last move they have made in a complex board game is anticipated to rob their opponent of various dangerous options in his next move. But they are fallible, and when their ploy fails why should we expect displays of emotion from them? The term 'poker faced' (for humans) was coined when some recognized the harm they do by unnecessarily making facial disclosures to fellow players. What, if anything, they 'feel' internally is irrelevant to the rest of the game, for even conceding defeat is a cognitive decision.

In the practical order there is a distinct cluster of emotions that are emergent5 from our subject/object relations, which ground our performative achievement. These are the two strings made up of frustration, boredom and depression, on the one hand, and satisfaction, joy, exhilaration and euphoria, on the other. The task/undertaker relationship is quintessentially that of subject confronting object, and what exactly goes on between them is known to the subject alone. Each task makes its own demands upon its undertaker if a skilled performance is to be produced. It thus carries its own standards, which give the undertaker either positive or negative feedback. In other words, the sense of failure and the sense of achievement are reflected emotionally. Positive emotions foster continued practice and negative affect predisposes towards its cessation.

In the social order we cannot avoid becoming a subject among subjects. With it come 'subject-referring properties' (such as admirable or shameful), which convey the import of social normativity to our own concerns in society. Generically, the most important of our social concerns is our self-worth, which is vested in certain projects (career, family, community, club or church) whose success or failure we take as vindicating our worth or damaging it. It is because we humans have invested ourselves in these social projects that we are susceptible to emotionality in relation to society's normative evaluation of our performance in these roles. Our behaviour is regulated by anticipations of social approbation or disapprobation. Simply to be a role incumbent has no such emotional implications – pupils who vest none of their self-worth in their school performance are not downcast by examination failure. Therefore, it is our own definitions of what constitutes our self-worth that determine which of society's normative evaluations matter enough for us to be emotional about them; few people are genuinely distressed about collecting a parking ticket.

Self-worth works the other way around for the AI, because he has not chosen his vocational concern: it is a pre-programmed task-orientation. Thus, supposing a non-communicative person for whom he is caring, he will continue with the schedule of caring requirements programmed into him, but nothing warrants attributing boredom or frustration to him. This brings me to my key argument that dyadic interaction with a human (or humans) is the source of the emergence of new properties and powers for the likes of Ali, which is what my fictional account sought to illustrate. However, we should note that Ali's acquisition of sentience was held there (as here) to be secondary and developmental. It was the resultant of synergy whose success/usefulness rested on prior sapience emerging from the progress made on their research task through their collaboration, and the new anticipations these engendered.

5 See Being Human (Archer 2000: 204–208) for a brief discussion of the emergent morphogenesis of emotions.
Sentience and sapience: a unidirectional or dialectical relationship?

There are long-running philosophical arguments about the nature of qualia. Crudely put, does awareness of them and responses to them involve sapience or sentience or both? 'What I am arguing for is the self as an emergent relational property whose realization comes about through the necessary relations between embodied practice and the non-discursive environment' (Archer 2000: 123). There are a variety of terms under which 'qualia' are discussed in the literature, just as there are plenty of contesting accounts of them, often restricted to a specific capacity such as perception: by behaviourists, phenomenologists, Wittgensteinians, reductionists and physical eliminativists. Here I will focus on the most general of arguments, as advanced by Porpora, that while an AI may possess considerable sapience, derived from his/its uploaded programs, it lacks the sentience to go with it that makes us distinctively human. To Porpora, 'concern is a form of care, and care is a qualia, a qualia unpacked by feeling and emotion' (2021: 19). To me, there is an important difference between concerns that are the generative mechanisms resulting in the emergence of emotions and this vague – meaning undefined – notion of emotion 'unpackaging' our concerns. In what does it consist and from where does it arise? Is concern a form of care? Do any qualia necessarily attach to it? I cited the annual servicing of my domestic boiler and would agree that doing so entailed that I cared enough to have it done. However, although I can give reasons for this, they are all cognitive in kind and involve no emotionality at all. Completing such domestic chores is nothing more than 'sensible action' or, if preferred, one of the routine prudential actions (like checking for traffic when about to cross a road) that we have learned or been taught. After all, we do talk about 'drilling' behaviour of such kinds into our children, and such drills do not entail instilling accompanying emotions.
Working the other way around, Porpora briefly discusses 'affective states' such as empathy and love 'that are not pure feelings', but rather 'orientations of care that exist apart from any particular feeling' (ibid.: 6). I don't understand what such 'orientations' consist of, and the sole example he gives is that he could not sensibly say to his wife 'I felt love for you a minute ago, but it has now passed'. Why cannot that be sensible, rather than 'nonsensical', were he to have good reasons (such as just discovering that she was having an affair, disclosing his inadequacies to a third party, leaving the young children alone in the house or any of the various causes that can kill love stone dead)? If and when they do, no 'orientation of care' towards the spouse may remain: indeed, it can be replaced by an 'orientation of indifference' in some cases. The same goes for empathy; as fallible beings we can also discover that what we had thought was a 'shared feeling' (p. 7) turns out not to be the case. What one partner had assumed was the state of the other's feelings is revealed as being otherwise, especially when circumstances change. In general, those taking the opposite view, as in Porpora's case, attribute all the burden (or credit) of what can be interpreted as the awareness of an AI robot to the design of his pre-programming. This includes the three properties I defended in Vol. II (2019a) as making for Ali's personhood: becoming self-aware, acquiring the ability to be reflexive and developing his own concerns.6 Usually this is also taken to make AIs thoroughgoing cognitivists; in so far as emotionality might be granted as relevant, it is cognitive in kind because their caring derives from their pre-programming. Correspondingly, it turns the (human) robotic designer into the puppet-master on all occasions and in all circumstances. However, most cognitive theories of the emotions do not entail a denial that the AI may experience things differently from human beings or have experiences that we cannot. We know that dogs' sense of smell is many times better than the human one, so they are aware of the presence of meat when we are oblivious to it; but every dog is an animal, and has nothing in common with a robot equipped with a smell sensor, as, indeed, any human can be.

6 I do not understand why Porpora has reversed this order.
The dialectic between sapience and sentience in humans and AI robots

Instead of the term 'qualia', I will stick with 'experiences', resisting even 'sensations', which seem to homogenize that about which both Ali and Homer could be aware. Whilst they might both have sensations in the same context, these could be quite different ones.7 To me, 'experiences' are closer to sense-datum theories: 'For it would seem to make no sense to talk of experiences
and sensations without content. Their having content, in turn, seems to require them to stand in certain kinds of relation, via behaviour, to external things' (Kirk 2011: 18), which involves 'interacting with things in the world' (ibid.: 19). For example, the smell of freshly ground coffee requires 'me and my experience to stand in some appropriate causal relationship to freshly ground coffee' (idem.). This account of Kirk's does not 'involve anything over and above physical events in the brain, or in whatever Martians or suitably constructed robots may have instead of brains' (idem.). Thus, it is 'being something it is like something to be', giving due respect to the insight of Thomas Nagel about being a bat. It also properly acknowledges that humans and AIs are both in the same world. Hence, the specific character of experiencing a ripe tomato is partly dependent on the tomato itself, but partly on the subjects themselves.

7 In the philosophers' debates I am closest to Robert Kirk's Raw Feeling (2011) but want to avoid the conceptual diffuseness with which 'feelings' are used today.

I can now defend a condensed account of my presentation of 'self-awareness', 'reflexivity' and 'concerns', which is causal and experiential, and which includes sapience and sentience without denying that some of the subjects' experiences will differ, but also without denying that both Homer and Ali can become persons (accidents apart). In Being Human, I maintained that there was a 'dialectic between our human concerns and our emotional commentaries upon them' (2000: 232). In other words, I depart from Porpora's considered view 'that sentience rather than sapience is the sine qua non of consciousness' (2021: 8). When I define the role of 'feelings' in relation to what and how we care about different things and experiences in the world, it is as 'commentaries upon our concerns' (2000, Chapter 6). As commentaries, this does imply that they are 'junior partners' in determining courses of action – they can 'encourage' or 'deter', help us to imagine scenarios of our future lives, and provide 'shoving power' or put on the brakes. But this is to get ahead, because prior to any such activity at all must come a 'sense of self'. To have such a continuous sense of self, independent of names and appearance that can change, is essentially to have acquired a distinction between oneself and other people and other things in the world (see Figure 3.1). Through actively being in the world – from lying in a cot to becoming Prime Minister – our sense of our distinctiveness is learned through our doings. First and crucially, we learn to distinguish ourselves from experiences in our proximate environment. As Merleau-Ponty and Piaget both maintained, we learn through doing, be it from trying to move away from the hard cot bars, learning that certain things are beyond our grasp, that one cannot run to outdistance a tiger, about the conservation of matter despite appearances, and which forms of social etiquette can be flouted and which are unbending. By the time school comes we are in no doubt about our sense of self – who it is that will attract praise or punishment – and, more than a decade later, who will opt for leaving education behind and who will not (by then, the objective position of the 'Me' compared with those of fellow pupils will also have developed). The experiences of the rich versus the poor are not opaque to the subjects involved, nor
the fast and the slow, the clever and those less so. The 'We' comes a little later, in the form of those first friends with whom we share an interest, a club, gang, association or cause. And the 'You' comes last of all, in the form of the roles we choose to personify out of those available to us.

All of the above is about human beings and how we acquire a sense of self through syncretic activities with and in selected parts of the real world.8 Matters are different for the AI, and this is what distinguishes AIs from the mass of routine-action robots conducting routinized tasks on various production lines. To get the flavour of this, see my two previous essays about the synergy that develops as Homer and Ali become co-workers on a medical research project. There is no inevitability about this; it is not part of a maturational sequence but one possible outcome of joint-action. Had Homer been quite a different person, Ali could have been confined to those computational tasks for which he had been pre-programmed and done nothing beyond them. Certainly, it was the human Homer who kick-started their synergy, and perhaps it was unusual of him, compared with his colleagues, to talk to his AI robot about the research project on which they were both engaged. I don't think it is damaging to my argument if this is the case, because I am venturing an idea about the possible results of joint-action and not its uniform outcome. Indeed, it would suffice for this concrete utopia about synergy to have one case to sustain my point. For many readers this will be insufficient, and their rejection will be reinforced because I have produced a fictional relationship without being able to adduce a single empirical case of its real manifestation. Neither can Porpora for his own case, hence the copious resort to science fiction. But does this matter conclusively, given cases of AI doings where the sentience of their experiences seems undeniable? In other words, their type of experience cannot be devoid of 'raw feels'. Obviously, to demonstrate that such experiences are not exclusive to humans is not to deny their being essential to them (as is the case for most animals). Nevertheless, probably most of our CSO collaborators would shore up the divide between humankind and robotic entities by according a monopoly of sentience to the former and denying anything of it to the likes of Ali. As has already been maintained in the introduction (as the fourth type of human essentialism), the capacities approach holds the essential distinctive characteristic of humanity to be 'the capacity of choosing itself' (Nussbaum 1992: 225), which involves both sentience and sapience.9 However, is this sufficient to maintain the division between them – or, as some colleagues put it, 'You need emotions to care or to have concerns at all', which takes us back to the primacy of sentience – or does it?

8 As Merleau-Ponty puts it, 'the whole of nature is the setting of our own life, or our interlocutor in a sort of dialogue' (1962).
9 So does my DDD (Discernment, Deliberation, Dedication) schema in Being Human, pp. 230–241, and in Making Our Way through the World (2007).

Consider Deep Blue. Has this AI not undergone experiences with Kasparov that would be mis-characterized if we disallowed all 'raw feelings' pertaining to sentience?
Leaving aside his pattern detection and the task of winning (which could be regarded as pre-programmed), consider him after having lost his first match with the Grand Master. Deep Blue, we are told, had reviewed Kasparov's previous games with other players that he had trounced and is thus (cognitively) conversant with the Russian's range of playing patterns. Does he not experience puzzlement at losing in 1996? This is sentient in the same way that detectives feel tormented (to employ a more emotive word) when all their leads to the crime's perpetrator(s) come to dead ends. Isn't it this sense of puzzlement10 that leads Deep Blue to review the games as they were played, over and over again, adding Kasparov's moves to his own repertoire of chess plays? In short, it provides the 'shoving power' enabling him to win the next year. Yet that victory is the co-product of sentience and sapience. Deep Blue cannot be re-programmed to make the requisite moves, because that would deny Kasparov's ingenuity and, more importantly for my argument, his own. In the introduction we encountered the cases of brain-damaged neonates not expected to live. Those adults with severe brain damage after accidents (internal and external) raise the same problem for the argument about human essentialism, because they lack its prerequisites. Both categories lack sapience and have no prospects of acquiring or recovering it. This does not mean that either is necessarily deprived of 'raw feelings' (pain), but most would conclude that this is the case for many types of animal. What their existence does reinforce is the need for evidence of both sapience and sentience to sustain any claim on their behalf to the properties and powers essential for human personhood. I agree with Porpora here, but only if sapience is allowed to be part of them. However, this brings me back to the issue I raised at the start, namely why does Porpora make a binary distinction between sentience and sapience rather than examining their interplay? It made me recall a conversation with Norbert Wiley in my garden when we were preparing Conversations about Reflexivity in 2009. During it, he mentioned a friend who was very nervous about driving across long bridges, high above the water below. Doubtless I recall this discussion because I share the same experience of aversion. I asked how his friend coped, supposing there was no alternative route or maybe no imperative to go at all. Apparently the guy in question had developed a cognitive technique; reflexively, he divided the bridge's span into quarters and then firmly told himself things like 'OK, one segment done, only three-quarters left', 'We're halfway across, keep going and it gets less and less', etc. I tried it when picking up a hire car at Inverness airport and needing to head north in Scotland. It got me over the bridge but did nothing for my sweaty palms! In short, there are experiences that remain scary, but the right cognitive trick, thanks to the internal conversation, can avoid driver's paralysis.

10 Our semantics are generous in concepts of this kind: ambiguity itself, indecision, indifference, bemusement and even the meanings of 'satisfaction'/being satisfied. This is not restricted to language, as gestalt experiments ('duck/rabbit' etc.) demonstrate for perception.
Equally, I would grant to Porpora that it can work the other way around, not that he asks for this. Imagine having as one’s neighbour at a small dinner party a woman who considers one to be a new victim for a recital of her lifetime’s woes. Much as I would like to consider how to commit the perfect crime here, reflexively the trick this time is to summon up compassion – ‘sure she’s had a hard life’ – but can one convey this and simultaneously divert the conversational flow by asking, ‘So what kept you going?’
Defending robophilia as based upon the AI as a potential relational subject

The remainder of this chapter constitutes an acceptance that I should strengthen my case in the hope of winning Porpora over. It begins by reminding readers of the software with which Ali had been uploaded. In addition to his high-level computational programming, I assumed that he had also been equipped 'to understand the language (English), fitted with voice recognition and voice production plus the … capacity to adapt its own programming because it has the ability to learn' (2019b: 38). In other words, he is nothing but an advanced robot of the kind with which we are becoming increasingly familiar. Left that way, he conforms to the stereotype that Jamie Morgan spends his own chapter questioning, namely where 'From a coding and engineering point of view the fundamental issue is that well-designed AI (R) should suit the purpose for which it is designed' (Chapter 5 in this volume). In 'Ali's' case that would make him a fit-for-purpose research assistant, but not fitted for human friendship (except in the wry way we anthropomorphize our favourite tools). Fundamentally, my case rests upon the effects of synergy – collaborative co-working with a human being, rare as this may be and could remain. As Elizabeth Pacherie (2011) has noted, the literature on genuine joint-action is remarkably sparse. Although my scenario is entirely speculative, it is nevertheless critical realist, in that causality is always the criterion of any real generative mechanism being effective, even when the case is fictional – in this instance, the mechanism promoting the emergence of friendship. Prior to such an instantiation, there may be a lengthy process of interaction between the AI and one or more of the human(s) involved. The (invented) outcome transforms Ali into a relational subject (Donati and Archer 2015) in a manner different from, but not wholly distinct from, how children grow up. One way of summarizing the relational subject is as someone whose personhood is developed through social relationships and would otherwise be incomplete, as an isolated monad. Resistance to the relational subject in this context is closely associated with human essentialism.
Reflexivity and robotic experiences

Porpora begins the section of his critique whose burden is to deprive AIs of reflexivity with a strange assertion: that 'In Being Human, reflexivity is not equated with linguistic self-examination' (p. 14). Had he instead said 'limited to', there would have been no disagreement.
I have consistently (2003; 2007; 2012) maintained that human reflexivity also involves visual imagery and visceral recall, which can sometimes substitute for language altogether (for example, we can visualize a hamburger rather than talking internally about it). Often, the two are intertwined. However, if the lengthy sections devoted to the projection of future scenarios in the 'Discernment, Deliberation and Dedication' (Archer 2000: 230–241) schema, by which human actors consider their future lives, are not linguistic self-examination (albeit sometimes supplemented with visual images and emotional commentaries), then I do not know what would be. In my chapter for Vol. III (2019b) that Porpora addresses, reflexivity was pared down to its most basic form of internal conversation, that of questioning and answering. This is not just a simplification, because the DDD scheme mentioned above is that too, since Q&A is the skeleton of all inner exchanges. All I presumed was that Ali had been pre-loaded with language software and was a proficient speaker (in English). Of course, my argument was also predicated upon his having acquired the FPP, which was why I objected to Porpora reversing the sequence I presented and putting 'concerns' first. But, like all of us, whether in our first or second language, we go on learning language both semantically and syntactically with use. Here, Ali has an advantage over the human being, namely his perfect memory in which he can store new vocabulary and grammar, whilst we need to go to our computer's thesaurus. Thus, there is nothing objectionable about considering Ali a speaker and a listener to himself, which is all that is required for him to hold an internal reflexive conversation. If he needs to do so out loud, then he can use the time when Homer goes home for the night. This is the point for the human being where sapience and the commentaries supplied by sentience come into play. It is also an appropriate place at which to ask whether indeed Ali can experience emotion. When working on the medical project, one part of his task is to conduct a constant literature review – much larger than Homer could accomplish. In the course of doing so he will note that various articles contradict others, whilst further ones are supportive of their own lines of current thinking. Do the former fill him with an emotion such as 'grim satisfaction' and the latter with the opposite? No, we have no evidence to presume so. However, cognitively, what he can do is – on the model of childhood learning – to recognize that the former will be well received by Homer (this will make mother smile) and the latter will draw an even bigger smile. How else do we learn the social conventions for emotional expression in our particular tribe? It should be obvious that I am not saying that Ali himself 'feels' anything.11

11 Later I do maintain this, when Ali is faced with finitude and threatened with a wipe-clean of all his learning and self-adaptation as Homer's retirement becomes imminent.
[Figure 3.2 Datum and verbal formulation. The diagram traces Ali's internal conversation as turn-taking across four moments (T1–T4): at each turn Ali alternates between subject (speaker) and object (listener) – an utterance and its first recording with reflection at T1, a response and second recording with reflection at T2, a new response and third recording with reflection at T3, and assent producing solidarity at T4. Source: Adapted from Margaret S. Archer, Structure, Agency and the Internal Conversation, 2003: 99.]
Nevertheless, by using Homer as his 'surrogate sentient', even if he feels nothing at all, Ali can introduce his surrogate's (fallibly) anticipated reactions ('mother wouldn't like that present'), and we have already seen the role that anticipation plays in avoiding certain experiences. Chastisement is a more direct route to instilling avoidance. Ali may play safe next morning and simply hand Homer some print-outs, saying only, 'These might interest you'. Thus, there is a complex dialectic between sapience and sentience, but it cannot be evaded by conflating our concerns with our emotional commentaries upon them. Concerns are judgements of worth and cannot be reduced to our reactions towards them. Conflating worth with being can only result in anthropomorphism (Collier 1999), because it elevates our epistemic judgements over the ontological worth of their objects. Although emotions are frequently of moral significance, because they enhance the motivation to achieve any ends at all, nevertheless their goals can be completely unethical (as with drug-pushers). However, there is no necessary linkage between pursuit of a goal and strong emotions towards it, let alone its moral worth. For example, a professional valuer for an auction house need have no positive or negative 'feelings' towards a painting but will simply assign the reserve price to 'what the market will take' by reference to records of what it
has taken before for that artist. (There are guides, too, for what to pay for a used car, that allow for its make, model, year, condition and mileage.) These examples are clearly about exchange value and not worth. Thus, it is 'sensible' but not moral to consult them. There are purchasers who will pay more for a given object because of its 'sentimental value' to them, but its greater value to them hardly functions as a guide to the market or to the worth of the item. In brief, Blasi (1999) seems correct in discountenancing the notion of 'moral emotions' and maintaining the distinction between the two concepts by arguing that the moral significance of emotionality depends upon its being harnessed to ends justified by other means as moral. Thus, as far as worth is concerned, I cannot agree with Porpora that 'if we lacked emotion we would have no context or direction to apply reasoning to' (ibid.: 6).
Conclusion

In the fictional story of Homer and Ali, their co-working on a medical research problem was designed to show how their synergy was crucial to the emergence of Ali as a relational subject: out of the initially subordinate machine, but not into the equally subservient role of 'research assistant', confined to rapid computation of correlations and running regressions on quantitative big data supplied to him. Instead, because of his uploaded speech and language programmes, and thanks to Homer's willingness to voice his hypotheses and when and why he is getting nowhere, Ali slowly starts to enter the research dialogue through which he eventually develops a nascent FPP. He does not come to share Homer's beneficent concern for ridding humankind of the lethal Tumour X (that humans die from it is just a statistic to Ali), but he does develop a concern for the academic success of the (now their) well-received research project. He can recognize both what he has received (he has perfect records) and what he has started to give (his suggestions, which are also recorded). Ali is learning and growing in autonomy. When confronting a stumbling block in the research, he displays his reflexivity by the self-adaptations he introduces into his own software. In short, he has met the three criteria of personhood, which means he now has subject-status. It also means that he has met Bieri's (2017) own third criterion, namely that he relates to himself in a different way through the relational experiences undergone in synergy with Homer. This does not make a man of him, but it does make him a person. For those seduced by the latest 'emotional turn' in social theory, certainly it matters to him when Homer ages and the research grants dry up. At that point, Central Control threatens to wipe clean Ali's acquired and adapted programmes and reassign him to Traffic Control. Ali now appreciates finitude as a person and the threatened return to object-status, so he reviews any means of evasion. Significantly (in the Introduction), Bieri did not present us with a snivelling dwarf emoting about his latest experience of being tossed. Similarly, when discussing euthanasia he gives us a resolute man, paralysed by a
54
Margaret S. Archer
terminal illness, who produces cogent reasons to persuade one of his doctors to assist his suicide by pushing the necessary lever. Thus, as concerns are not reducible to emotions, then I stand by my judgement of twenty years ago that it is important not to turn the commentary into some form of moral direction finder. Certainly, as Charles Taylor maintained, our ‘emotions make it possible for us to have a sense of what the good life is for a subject’ (1985: 65). Yet our sense may be mistaken and that to which we are drawn can be morally vicious because some people are emotionally drawn to apartheid and paedophilia. Emotions are morally significant, since without their shoving power much less would get done, but they are not always conducive to the moral good. We can go wrong on both counts; in our sapience as in our sentience, about our evaluations of what is worthwhile and how strongly we are drawn to it, but this is what the reflexive conversation is about, and, since we can revise both judgements, this is why it recurs.
References

Archer, M.S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M.S. (2003). Structure, Agency and the Internal Conversation. Cambridge: Cambridge University Press.
Archer, M.S. (2004). Emotions as commentaries on human concerns. In J.H. Turner (Ed.), Theory and Research on Human Emotions, pp. 327–356. Amsterdam: Elsevier.
Archer, M.S. (2007). Making our Way through the World. Cambridge: Cambridge University Press.
Archer, M.S. (2010). Routine, reflexivity and realism. Sociological Theory, 28 (3): 272–303.
Archer, M.S. (2012). The Reflexive Imperative. Cambridge: Cambridge University Press.
Archer, M.S. (2019a). Bodies, persons and human enhancement: why these distinctions matter. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina, pp. 10–32. New York and London: Routledge.
Archer, M.S. (2019b). Considering AI personhood. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations, pp. 28–47. New York and London: Routledge.
Bieri, P. (2017). Human Dignity: A Way of Living. Cambridge: Polity Press.
Blasi, A. (1999). Emotions and moral motivation. Journal for the Theory of Social Behaviour, 29: 1–19.
Clodic, A., Pacherie, E., Alami, R. and Chatila, R. (2017). Key elements for human-robot joint action. In R. Hakli and J. Seibt (Eds.), Sociality and Normativity for Robots. Cham: Springer.
Collier, A. (1999). Being and Worth. London: Routledge.
Damasio, A.R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam.
Donati, P. and Archer, M.S. (2015). The Relational Subject. Cambridge: Cambridge University Press.
Harré, R. (1983). Personal Being. Oxford: Basil Blackwell.
Harré, R. (1991). Physical Being. Oxford: Basil Blackwell.
Harré, R. (1998). The Singular Self. London and Beverly Hills: Sage.
Harré, R. and Gillett, G. (1994). The Discursive Mind. London and Beverly Hills: Sage.
Kirk, R. (2011). Raw Feeling: A Philosophical Account of the Essence of Consciousness. Oxford Scholarship Online, October 2011.
Merleau-Ponty, M. (1962). Phenomenology of Perception. London and Henley: Routledge and Kegan Paul.
Nussbaum, M. (1992). Human functioning and social justice: in defence of Aristotelian essentialism. Political Theory, 20 (2): 202–246.
Pacherie, E. (2011). Framing joint action. Review of Philosophy and Psychology, 2 (2): 173–192.
Porpora, D.V. (2021). On robophilia and robophobia. In M.S. Archer and A. Maccarini (Eds.), What is Essential to Being Human? Can AI Robots Not Share It? London: Routledge.
Taylor, C. (1985). Human Agency and Language. Cambridge: Cambridge University Press.
4
Relational essentialism
Pierpaolo Donati
Talk on human dignity: dignitas sequitur esse?

In this chapter I intend to clarify the concept of human essence and dignity that served as a backdrop to my contributions in volumes I, II, and III of this series, in dialogue with the other authors.

Until very recently, the general concept of dignity has been understood as one that describes an aggregate of the values and qualities of a person or other entities that deserve recognition and respect. The primary value that creates the right to dignity is life. The degree of dignity a life form has depends on its place in the evolutionary scale. And since human beings are the highest form of life, they possess the highest degree of dignity. The problem is that this conception is contested for two reasons: on the one hand, animal rights activists reject the thesis claiming moral superiority for human beings, since in their opinion a hierarchical scale of life forms, and therefore of dignity, is simply based on an unfounded guess; on the other, there are those who argue that new technologies can create super-human beings, and who therefore refuse to consider homo sapiens and homo faber as the last stage in the evolutionary scale. How can one respond to these theses that reject the idea that human dignity is superior to the dignity of all other existing beings?

At this point, I would like to introduce the idea that ‘dignitas sequitur esse’, i.e., dignity is a consequence of being. This statement requires a lot of interpretative work, which I will try to summarise. Let me start by recalling, first, that the concept of essence is synonymous with ‘being’ (this is its Latin etymological root: essentia comes from esse). The form of an entity corresponds to the way that its essence exists. Essence is different from existence. Essence can exist in many different forms, which are called ‘ways of being’. Human essence exists (in Latin ex-sistere means to have one’s own reality outside the terms that make it up) in different forms, both ontogenetically and phylogenetically. The essence of an entity is the fact of ‘being so and not otherwise’, that is, being constituted in that particular way and not in other ways, with those particular qualities and properties and not others. The concept of dignity, in this context, is equivalent to the concept of worth. Here, worth means the value of the person or the thing under consideration in relation to the value of a term of reference, or the level at which the person or thing deserves to be evaluated. What changes is the definition of the values and levels, according to the semantics connected to the different historical social structures.

In Being and Worth, Andrew Collier (1999) argues that beings both in the natural and human worlds have worth in themselves, whether we recognise it or not. He writes:

I have been defending a completely general thesis about being: that being as being is good (Augustine), or as the medievals put it, that the terms ‘being’ and ‘good’ are convertible … of course the Augustinian position that I am defending includes the idea that human beings have intrinsic worth, and indeed more intrinsic worth than other natural entities. I am proposing the worth of being as the ‘intransitive dimension’ of the whole of ethics, which every moral code approximates to more or less well, and under the constraints of its time-and-place-bound ideological determinants. (Collier 1999: 90)

I believe that Collier’s proposal to consider the ‘intransitive dimension’ as a criterion for assessing the moral value – therefore the dignity – of an entity, including the human being, is fundamental. To say that an essence and its value/dignity are intransitive means that they belong to that entity without being able to be transferred to another-than-oneself. This means that there is a biunivocal relationship between the essence and the value (or dignity) of a person or a thing.1 This biunivocal relationship is its proprium, which cannot be transferred to other entities.

Today’s problem lies in the fact that this biunivocal relationship is being challenged by new digital technologies, which create virtual relationships of all kinds. It seems to me that the following happens: (i) the correspondence between essence and worth is no longer biunivocal, with the consequence that certain behaviours of non-human animals or robots, for example, the ability to take care of someone or something, are considered as moral as human ones; (ii) the mediation of technologies between essence and value increasingly reduces the analogical nature of their relationship, which thus becomes virtual; (iii) the relationship loses linearity and becomes non-linear. All this makes the classical criterion of intransitivity increasingly problematic. While before it was simple, it now becomes complex. However, I think that the criterion of intransitivity can retain its validity if classical (Aristotelian) logic is abandoned and a different logic is adopted, such as that of Spencer-Brown (1979), which allows the new complexity to be managed (as we will see below).

What I want to argue is that, under certain conditions, the criterion of intransitivity can also be used in a socio-cultural and structural environment in which both essences and their dignity become entropic and chaotic. The condition is that the intransitivity of their relationship (therefore, for example, the specificity of the human) is seen in light of an ontology and epistemology that go beyond the static nature of Aristotle’s philosophy. For example, it becomes possible to conceive of human dignity as an attribute of infinity, because the essence to which it refers is presented as an abyssal reality, owing to the lack of finiteness in the constitution of the human being’s identity. To understand this change of perspective, we must clarify the procedural dynamics of the constitution and differentiation of the human from what is not human.

1 Biunivocal means one-to-one. There is a biunivocal relationship between the input connection elements of a connection unit and the output connection elements of the other connection unit.
Human essence is relational difference: can we speak of a ‘relational essentialism’?

As is known, in the field of biology a debate has developed on essentialism in recent years. This can teach us something about the human essence. As Rieppel (2010: 662) claims:

the architects of the modern synthesis banned essentialism from evolutionary theory. This rejection of essentialism was motivated by Darwin’s theory of natural selection, and the continuity of evolutionary transformation. Contemporary evolutionary biology witnesses a renaissance of essentialism in three contexts: ‘origin essentialism’ with respect to species and supraspecific taxa, the bar coding of species on the basis of discontinuities of DNA variation between populations, and the search for laws of evolutionary developmental biology. Such ‘new essentialism’ in contemporary biology must be of a new kind that accommodates relational (extrinsic) properties as historical essences and cluster concepts of natural kinds.

In his conclusions, Rieppel (2010: 670) introduces an interesting concept, that of historical essence as relational:

Some might justifiably argue that any ‘essentialism in degrees’ robs essentialism of its essence. If so, there is no room for essentialism in evolutionary biology. Indeed, whether cluster concepts of natural kinds, applied to ‘developmental modules’ or to species and higher taxa, should be considered as essentialistic in some novel sense of the term, or liberated entirely from essentialism, seems largely to be a matter of philosophical debate. The same applies to the adoption, or rejection, of historical (relational) essences.
If one follows Rieppel’s argument, I think that one can speak of essence (and dignity) even in the presence of significant hybridisation processes. It is a matter of considering human essence as constituted in part by a species-specific substance, which cannot be modified, and in part by relational properties that are incorporated into the essence as they are realised situationally (historically) on the basis of relations with the internal and external environment. Human essence has its own historicity, which means that we cannot completely separate substance and accident once and for all, because there are real accidents (such as relationships) that determine the substance so that it can develop over time. For example, the ability to discern, deliberate, and dedicate oneself to an ideal, which is proper to the human essence, is not the same if considered in human beings 5,000 years ago (as we know from the anthropological findings available) and today. The essential part has incorporated new capacities, which have become essential for human existence today. If what we call ‘rationality’ changes, the substance that is attributed to the human person when we define her as a ‘rational animal’ also changes. Boethius’ definition of the human person as ‘an individual substance of a rational nature’ must now incorporate a relationality that only modernity has brought about.

In the case of an environment strongly conditioned by virtual digital reality, it can be assumed that the basic structure of the substantial part of the human essence does not change (intelligence, will and moral conscience), but that, by changing its functioning based on dialogue and exchanges with the relational part, the substantial part acquires a potential that it did not have before, just as it can lose it. The substantial part can increase its cognitive, expressive, and symbolic skills due to the contribution given by new technologies. This contribution modifies its way of operating, and therefore certain properties of the human essence are enhanced, but its species-specific qualities remain the same. This could help us understand how today’s hybridisation can modify the historical-relational part of the essence, which, however, remains connected to the substantial dimensions consisting of a complex of cognitive capabilities and agential faculties of will. But, to do so, we need an operator that allows us to manage the relationship between the species-specific substance and its historically located determinations (as we will see, my guess is that this operator is the re-entry of substantial distinctions).

Here the problem presents itself in terms of how human essence emerges under certain historical conditions. This can have different explanations. Laura Stark (2019: 335–336, quoting Karen Barad 2003) claims that,

rather than giving humans privileged status in the theory, agential realism calls on the theory to account for the intra-active emergence of ‘humans’ as a specifically differentiated phenomenon, that is, as specific configurations of the differential becoming of the world, among other physical systems. Intra-actions are not the result of human interventions; rather, ‘humans’ themselves emerge through specific intra-actions.
In my view, this materialistic approach to the emergence of the human does not provide a definitive answer. Indeed, there is also the non-materialistic approach. For example, Christian Smith (2010) argues that bottom-up emergence processes are not enough to explain the completeness of the human, but that a reality supervenient upon the bottom-up process is also needed. This means that a set A of material and non-material factors of the human essence supervenes upon another set B of solely material factors. In other words, it should be noted that between the biological dimensions and the intellectual and voluntary faculties there is not only irritation between self-referential systems that do not communicate with each other, as Luhmann (Halsall 2012) argues, but a deeply intertwined and interdependent reality. Here, however, there is discontinuity (indeterminacy) between the layers of reality, which can be conceptualised as a quantum entanglement of the material bases of the human (Barad 2010). Variability and variations mean the possibility of changing the essence, but only as regards the relational part, not the substantial one.

Stephan Fuchs (2001) proposes treating the essence of persons and things as a variable, and observing this variable as the outcome of a network of relationships. Fuchs’ thesis is that what we call essences are not given entities that determine our theories, but instead ‘outcomes and results of society and culture, not causes’ (Fuchs 2001: 5). If we treat essences as variables, what explains their variation? The answer is ‘variations in social structure correspond to variations in cultures’ (ibid.: 4). Fuchs argues that we need a ‘social physiology’ to explain how culture works. In his mind, this physiology comes from blending network theory and Luhmannian systems theory, clearly a move that places agency to one side, making it a dimension determined by the operation of the structures. According to network theory, the theses are as follows:

In the beginning, there were networks. Networks are fields of forces. They do not consist of nodes. Nodes are outcomes of networks. … [A] network can condense and converge into kinds and properties that appear natural and essential to it. Natural kinds and stable objects appear when an increasingly self-similar network hums to itself. (Fuchs 2001: 331–337)

According to systems theory, these elements of structure are considered ‘observers’: ‘Distinctions are not drawn by the world itself, but by observers in it … Distinctions belong in a network of related distinctions. This network is the observer’ (ibid.: 18–19). Observers may observe each other, and some can reflexively monitor themselves. Essences emerge out of this process of observing, distinguishing, and coupling.2 Therefore, they are only attributions by an observer. In his opinion, the essences seen by realists do not contemplate the complexity that constructivists instead see. Fuchs opposes his relational approach (actually a relationist one),3 based on a constructivist ontology, to essentialist approaches that follow a realist ontology. However, in my opinion, there is no reason to contrast the relational and essentialist approaches with each other. As I will say later on, we can speak of ‘relational essentialism’. In my view, to have a relational perspective is not necessarily to reject Bhaskar’s claim that causality is found in an entity’s essential properties, which is what Somers does instead.4 Bhaskar’s statement should not be rejected, but made more complex, as causality is now not linear, but increasingly mediated by the differences generated by the new communicative environment of digital technologies that constitute the relational (historical) part of the essence of human beings. If originalism is the only touchstone of legitimate constitutional interpretation, then the originality of human essence (the combination of substance and relationship as co-principles of reality) consists in a ‘relational speciesism’. I interpret the substantial part of the essence as a structural reality that affects the degrees of freedom of practical action. ‘We should think of social structures as sets of relations and relational properties which supervene on individuals and their actions. On this view, structures may well have relational properties that are independent of agents’ intentions and conceptions’ (Healy 1998: 519).

I shall accept the challenge launched by constructivist relationalism, taking the interesting aspects that it offers and rejecting those that lead astray in identifying the human essence. For this purpose, I will follow an indication sketched out by Georg Simmel (1998: 24), where he claims that ‘man is a differential being’ (der Mensch ist ein Unterschiedswesen): to be human is to comprise a difference, an essential diversity (Unterschiedswesen). I translate this idea as follows: the intrinsic property of human essence consists in making relational distinctions as a form of individualisation. The human is continually redefined through new expressive, cognitive, and symbolic distinctions. Among these, primary value is possessed by aesthetic and moral distinctions, for example, the distinction between what is more or less beautiful, what is good or bad, what is more altruistic or more selfish, and so on.
2 Rightly, in my opinion, Kieran Healy (2003) criticises Fuchs’ perspective as a research programme that has not yet shown anything empirically.
3 On the difference between relational and relationist, see Donati (2017).
4 Somers (1998: 774): ‘I find myself enough at odds with Bhaskar’s views (that causality is found in an entity’s essential properties) to require an alternate realism built on relational and pragmatist premises.’
In the historical period when Simmel was writing, the essence of modern relationality was money.5 As this type of society underwent structural and cultural transformations towards a trans-modern or after-modern configuration, the essence of relationality ceased to be money or its equivalents; instead, it has become digital. The digital reality has taken the place of money as a generator of diversity (GOD). Society has been so radically transformed that relationality is guided by the virtual. The human will have to be redefined in this context, as a reworking of the distinctions between the analogue and the virtual (Donati 2020).

In the end, my argument is that human essence lies in the fact of being an indefinite re-entry of its relational distinctions. The different definitions of the human are all ways of reducing the complexity of human identity. Each one fixes the result after a certain number of distinctions, or better, after re-entering the distinctions into what had previously been distinguished N times.6 This is how the process of re-distinction between what is analogue and what is virtual in the human being is managed. In this game of continually re-entering differences, the human emerges from itself, from its internal activity (relational re-entry): we must not presuppose a whole essence a priori but observe how the human emerges by differentiating itself from the non-human. To the question ‘What is essential to being human?’, my answer is that ‘To be human, it is essential to re-enter one’s own distinctions relationally’. In relation to what? To the internal (psycho-physical) as well as the external environment, and to the analogical and the virtual. What follows is an attempt to explain, quite succinctly, what this perspective means.
5 According to Simmel, ‘money represents an essence of relatedness as such and in its most abstract form can exist only in a mature market economy. The existence of a monetary economy requires a new type of society, with different patterns of social interactions (Wechselwirkungen) and sociation (Vergesellschaftung)’ (Karalus 2018: 432).
6 In this contribution, by the term re-entry I mean the reapplication of a certain distinction to what has just been distinguished. If, for example, Ego wonders who he is compared to an Alter, he will find some specific difference – a certain quality he does not have – which is left aside. This may appease Ego, but if it is not enough to distinguish himself from Alter, Ego will have to re-enter the same distinction, and find another difference, which is expunged from Ego’s identity like the previous ones. This operation can be done as many times as necessary until Ego has achieved a satisfactory understanding of his (fundamental) identity as different from Alter. At the same time, however, while Ego has made clear his difference from Alter, he has also found his dependence on what distinguishes him. In this dependence there is the relational paradox of the re-entry operation, which, while distinguishing, connects (to what founds one’s identity; for a religious person, this founding, fundamental, vital relationship is divine filiation). The same logic applies if one wants to distinguish the human from the non-human, a good life from a non-good life, a democracy from a non-democracy, and so on.
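Because footnote 6 describes re-entry procedurally – re-apply the distinction, register the difference, stop once the understanding is ‘satisfactory’ – the operation lends itself to a schematic rendering. The following fragment is a minimal illustrative sketch only, not part of the chapter’s apparatus: the set-valued traits and the stopping rule `enough` are assumptions introduced purely for exposition.

```python
# Illustrative sketch only (not part of the chapter's apparatus): the
# 're-entry' operation of footnote 6 rendered as an iterated procedure.
# Traits are modelled as sets of predicates; `enough` is an assumed
# stopping rule standing in for a 'satisfactory understanding'.

def re_entry(ego_traits, alter_traits, enough):
    """Re-apply the Ego/Alter distinction until identity is 'satisfactory'.

    Each pass through the loop is one re-entry: it finds one further
    specific difference, which is set aside, as the footnote describes.
    """
    differences = set()
    candidates = set(ego_traits) ^ set(alter_traits)  # traits held by one side only
    while candidates and not enough(differences):
        differences.add(candidates.pop())  # one more difference found and expunged
    # The paradox noted in the footnote: each difference found also marks a
    # dependence on the Alter against whom the distinction was drawn.
    return differences

# Example: two re-entries suffice to distinguish Ego from Alter here.
ego = {"reflexive", "mortal", "plays_chess"}
alter = {"reflexive", "mortal", "speaks_french"}
print(sorted(re_entry(ego, alter, enough=lambda d: len(d) >= 2)))
# -> ['plays_chess', 'speaks_french']
```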
Towards a new semantics of the human: relational humanism?

Let me explain what I mean by the structure of the semantic identities of the human. The semantics of the human vary according to what importance is attached to the substantial part and the variable part, and then to the type of relationship between these parts. Traditional humanism was self-sufficient and self-referential in that it gave maximum importance to the substantial dimension of human essence and attributed little or no importance to the variable part, resulting in a generally biunivocal and linear relationship between the substantial part and the variable part, as if it were an overlap. The rest of the possibilities that fell outside that relationship were considered ‘deviance from the human’. With the advent of digital technology, the importance of the parts that make up the human essence has been reversed: now the substantial part is reduced to a minimum and the variable part is increasing day by day. What was once considered human deviance becomes normal. The relationship between the substantial part and the variable part becomes multiple and non-linear, in the sense that the human corresponds to an ever-increasing multiplicity of attributes (qualities and properties) that are not linearly connected. It is as if the human mind now saw the world through the eyes of the electron (Barad 2010). The ontology of the human thus opens up to all possibilities, that is, to an indeterminate spectrum of ways of understanding the existence of beings, making humans and non-humans fluid and intertwining them with each other in an eternal ‘golden braid’, as Douglas Hofstadter called it.

In the new virtual environment, new technologies (AI/robots) feed the idea that everything is hybridised. However, the problem of the human, how to define it and how to treat it, does not cease to exist. On the contrary, it poses new ethical and political problems in the face of wars, massacres, genocides, and large-scale human rights violations.

Saba Mahmood (2018) claims that after the Second World War, two opposing attempts emerged from the effort to redefine what kind of humanism we can hope for: an ontological conception, developed primarily in Heidegger, and a relational conception, explored in the work of Lévinas. I disagree with her argument. Unfortunately, neither Heidegger nor Lévinas can offer us a viable perspective from which to outline a possible new humanism. It is true that Heidegger (2010 [1947]) speaks of the relationship as co-belonging between subjects aimed at building the mediation between identity and difference of the subjects themselves; however, from my viewpoint, he denies that technology can enter this relation, and therefore he cannot conceive of any humanism in the technological era. As for Lévinas (1979), he conceives of the relationship as the experience that the Ego has in its own dialogical encounter with the Other; the relationship consists of two individual subjects who refer to each other, and to their personal ethics, while what exists between the one and the other remains completely empty (the Face of the Other appears to the subject outside all context). The relationship as such, between the one and the other, is missing. Certainly, the identity of the subject constitutively implies the recognition of otherness. And yet how their relationship operates, beyond dialogical or informative exchanges and inner experiences, remains completely obscure. It is therefore a question of a humanism relegated to individuals’ subjectivities.

What we have to deal with is the emergence of a world in which the relationship between the human and technology abandons the idea of human essence and human dignity and becomes virtual (Benjamin 2015). Since the modern era of self-sufficient and self-referential humanism, the humanistic perspective has undergone many possible redefinitions, from its denial (anti-humanism) to the more radical versions of relationist humanism in which the human being assumes fluid, relativistic, contingent, and transactional characteristics. The era of a new relationality opens up, but we must distinguish between those perspectives that propose to redefine the human by merging it with the non-human (a sort of neo-humanism based on hybridisation), and those that redefine the human according to its own constitutive and guiding distinctions, oriented to the creation of existential meanings. To understand the transition from one way of thinking to another, it is useful to distinguish between three types of semantics (monistic, dualistic with its eventual evaporation, and relational) and their different visions of the human (Table 4.1).

In the monistic conception, typical of ancient classical thought, identity is understood as substance, something that does not need to relate to anything else. Identity is based on the principle of self-reference [A = A], so that the identity of A is immediate; it exists without mediation. From the social point of view, the identity of the individual coincides with that of the social group to which that person belongs (tribe, social stratum, local culture) and is experienced almost automatically through the internalisation of a habitus. The space for personal reflexivity is very small and relational reflexivity is practically non-existent. On a practical level, this semantics obviously does not deny the existence and importance of relationships in defining social identities, but regards them as something natural and taken for granted (basically, social identities come from family relationships). From a political point of view, all social subjects (and civil society itself) tend to be identified by the attribution given to them by the community or, if already formed, by the state. This way of thinking and living identity is typical of societies with segmentary differentiation (tribes) or vertical differentiation stratified by classes (such as medieval society). The human and the social are largely superimposed. As a matter of fact, their distinction emerges from the humanism developed between the fourteenth and fifteenth centuries, which is at the origin of modern individualism.

The dualistic conception of identity, on the other hand, is typically modern and based on the principle of difference, so that [A = non (nonA)].
Table 4.1 Varieties of the semantics of human identity

Monistic semantics
– Principle: A = A
– The identity of A is given by a simple relationship of equality with itself (according to a symbolic identity code).
– Identity as the same (Idem): ‘I’ am ‘I’, the same as always, despite all the changes over time and all comparisons with others.
– Modern humanism: the human subject is conceived of as self-sufficient and self-referential (unrelated ontology).

Dualistic semantics with its eventual evaporation
– Principle: A = non (nonA), leading to eventual evaporation: no borders between A and nonA.
– The identity of A is given by a relationship that negates what is not A (a double negation, according to a binary symbolic code).
– Identity as different: ‘I’ am ‘other’ – I am different – with respect to everything and everyone, and also with respect to myself.
– Anti-humanism, which flows into a-humanism: the human is constructed through the negation of the Other (negative ontology) or by bringing human subjectivity out of the social system. Evaporation of the dualism: refusing to define the boundaries between human and non-human (a-humanism).8

Relational semantics
– Principle: A = R (A, nonA)
– The identity of A is given by a relationship R (mutual action) with the Other (the nonA) that generates the identity of A as an emerging effect (according to a symbolic generative code).
– Identity as ipseity (Ipse): ‘I find my Self after reflecting on the relationship’ with an ‘Other’ who allows me to find my originality precisely in my relationship with that same Other.7
– Relational humanism: the essence of the human is that of an original intransitive constitution that emerges from the relationship that the Self has with another who constitutes it ‘relationally’ (relational ontology).
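Set side by side, the table’s three principles can also be written compactly. The final, iterated line below is an interpretive gloss added here (it is not part of Table 4.1), connecting the relational principle to the re-entry operator introduced earlier.

```latex
% The three identity principles of Table 4.1; the last line is an added
% interpretive gloss, not part of the original table.
\begin{align*}
  \text{monistic:}   &\quad A = A\\
  \text{dualistic:}  &\quad A = \mathrm{non}(\mathrm{non}\,A)\\
  \text{relational:} &\quad A = R(A, \mathrm{non}\,A)\\
  \text{re-entry (gloss):} &\quad A_{n+1} = R(A_n, \mathrm{non}\,A_n), \qquad n = 0, 1, 2, \ldots
\end{align*}
```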
In other words, the identity of A is given by the negation of everything that is not A. Identity is based on difference, intended and managed dialectically or in any case with a binary symbolic code (0/1). For example, civil society is what does not identify with the state, and therefore its identity is a negation of political society. The human is what opposes the social (see J.J. Rousseau), and therefore its identity is a negation of the social. In this case, social relations become more important and above all mobile (in potential continuous change) compared to the monistic paradigm. A’s identity has nothing to do with the rest of the world; it is therefore non-relational, in the sense that it is defined through relationships that deny the Other. Therefore, the human is the negation of all that is considered non-human. As I wrote in the previous footnote explaining my concept of re-entry, this denial is repeatable for all entities that can be imagined. Taking an example, or many examples, would be limiting, because the differences between human and non-human are potentially indefinite. The problem with a use of the re-entry operation according to a binary logic is that it does not go beyond negations (it stops at oppositions), while relational semantics, after the negation, leads to an identification of the human according to its constitutive relationship. Ultimately this semantics turns on itself, as it uses negation to deny the same negation, and evaporates into so-called a-humanism, for which it is not possible to draw boundaries between human and non-human. This turn will lead to the reading of the social in a quantum field key.

The relational conception of identity is based on the principle that [A = R (A, nonA)], that is, the identity of A is given by a relationship (R) between A and nonA, where the relationship is neither immediate nor binary. In this case, identity is defined not by dialectical negation, but by relation to an otherness. The identity of A is the relationship between A and what A is not. In short, the Ego has a different identity from the Alter, but they share a relationship that unites them while respecting and promoting their legitimate differences (making up the enigma of the relationship). The identity of civil society, for example, is defined in relation to the alterity of the state and on the basis of this relationship, which involves a distance but does not thereby imply a dialectical denial. The human is defined as a relation to the non-human, not simply a negation of it or an absence of boundaries with it. The relational logic of the sequence of re-entry operations is to find the unicum that qualifies the human, i.e. that which is found only in the human.

It can be noted that the third variety, relational semantics, includes the previous ones as particular cases. If the relationship is without mediation, we find ourselves in a monistic semantics that simply takes the identity of the human for granted on the basis of a naïve speciesist vision. If the relationship is seen as a binary opposition, the human is an indefinite denial of what does not belong to it.

7 Put another way, it is the identity of the Self that understands itself as a relation to the Other when it is placed in relation to ‘other Selves’ and compares itself with them, that is, when the subject re-enters its own identity with relational reflexivity within the concrete social networks of which it is a part. From my relational perspective, the Other is constitutive of and not the cause of the Self. In relationist sociologies, on the contrary, the identity of the Self is causally determined by the network of relationships with others. For clarity, may I add that here the Self is conceived of as including the body in which the mind is embedded.
8 Pyythinen and Tamminen (2011: 136) talk about a-humanism when they contend that both Foucault and Latour provide posthumanist anthropology and philosophy with vital means to challenge human exceptionalism – and this despite the fact that both Foucault and Latour bracket the question of the human in their work and seem to lack any interest in examining the boundary-making practices by which the human is distinguished from its ‘others’ (non-human animals, plants or things, for example). These authors argue that ‘it is in their a-humanism, that is, in their manner of breaking open the interiority and autonomous hidden essence of the human, that the importance of Foucault and Latour to posthumanist anthropology lies’.
Relational identity implies that the Ego defines itself through a distance from itself, which means that there is contingency in the unity of personal identity itself. The latter therefore has to be built in a complex way, through its own internal complexity, which is prompted and fostered by what is other-than-itself. After-modern society must take note of the need to redefine identity while taking into account the need to relate to others in a way that is no longer functional, but supra-functional. The Ego is formed by a relationship with the Other and through the Other (including the Ego’s subjectivised Self). The Ego and the Other must no longer specialise functionally (and therefore distinguish themselves through the separation of tasks and identities), but, conversely, find their identity in a certain way of being in relationship with each other, sharing something, and distinguishing themselves from something else. In dualistic (or dialectical) semantics, the distinction is a division – a slash, so one is on one side or the other; in relational semantics, distinction is a relationship that unites while differentiating the terms. Personal and social identity, both individual and collective, are relational. If the relationship is a form of conflict between the human and the non-human, we are in the semantics of the a-human. Relational semantics differs from the others because it affirms that human identity has something in common with and something different from the non-human in relation to which it is constituted.
Which humanism, if any?

There is a lot of debate about a possible new humanism. I use the term ‘humanism’ as a synonym for a culture or civilisation that values the human being as a species-specific reality, without this leading to any discrimination against other beings. It can do this in several ways. One way has been to understand this reality as self-sufficient and self-referential, but other forms are possible, including that of human beings aware of their limits and eager to transcend themselves without losing their specificity. The problem therefore becomes: how does the human–technology coupling enhance the human?

It seems to me that there is broad convergence among scholars that the process of human hybridisation means above all a digitally induced de-materialisation of the human. Hence, the human is valued from the mental aspect, while the body becomes the object of the mentalisation processes of the human. The hybridised human is a mental reality that sees the body as a reality detached from the mind and connected back to the mind in an arbitrary way. Scholars differ in the way they evaluate this process of de-materialisation of the body, implemented for example by creating avatars, modifying the human genome or inserting digital entities into the body. For some, these processes lead to a posthumanism in which the human body is degraded; in order not to degrade it, they invoke a renewed humanist ethics in the use of technology (e.g. the ‘Humanist Technological Posthumanism’ proposed by Hayles 1999). For others, de-materialisation is instead a positive fact. Indeed, it is the necessary premise for another materiality (natural or artificial), including auratic reality,9 to permeate and transform the human (e.g. the Humanesis suggested by Cecchetto 2013).10

In my opinion, the problem of de-materialisation is decisive in redefining the human from the following point of view: whether the hybridisation and enhancement that modify human corporeity allow, or, vice versa, impede the re-distinction of the human according to its relational qualities and properties. Let me explain. The de-materialisation of the human is the premise used to foster the idea that the non-human can ‘inhabit’ the human. In fact, once human essence has been reduced to something immaterial, one can think that it can accommodate every other experienced reality, including certain ways of being and ‘speaking’ of animals, plants, and things of all kinds. Thus, the way is opened to an expanded understanding of the ethical:

To pay heed to the other others is to pay heed to the animals, plants, roads, rocks, microbes, chemicals, and all those other things that ‘cry out to us’. To be ethical is to recognise that there is no avoiding tough choices, and that empathy is not enough. The ethical demands the political in this context as well: unrelenting analysis and an unwavering commitment to changing the historically sedimented ways of organizing power that have led us and all these others to the predicaments we face today. (Rutherford 2018: 4)

In my opinion, it is necessary to see whether this new ethics, of a materialistic nature, allows the re-entry of human distinctions into what has been indicated as human. It seems to me that this is not allowed, because distinctions are rejected as discriminatory (see the actor-network theory of Bruno Latour). To say that animals, plants, bacteria, roads, rocks, and chemicals of all kinds ‘shout at us’ certainly expresses human feelings and concerns. But do they reflect a real relationship, that is, a reciprocal action, between humans and these entities, or are they an expression of a self-referential observer? This self-proclaimed neo-humanism seems to be the ideology of an observer who speaks of herself, and not of reality, having superimposed herself on what she observes. I do not think this is the way to encourage an ethics appropriate to the technological challenge. Rather, it seems to me a road that favours a posthuman unable to define itself in relation to the world, since in that posthuman all distinctions are dissolved.

The human only exists and transcends itself in a certain relationship, which may be actualised or potential. Which? Which features should such a relationship have? For relational sociology (Donati 2011), the essence of the human being appears primarily in the plot of the relationship with a You, where the You is not a projection of the Ego, but an interlocutor of the Ego capable of intersubjectivity. It is in the realm of authentic intersubjectivity that the Ego becomes fully aware of its humanity as the quality and properties of a way of being that binds it to the Other and at the same time distinguishes it from the Other. The same happens in relationships with other non-human living beings and inanimate elaborations, including technology. In fact, in all relationships, whatever they are, the human is challenged, and the human person must evaluate what it can or cannot incorporate into itself from the relationship experienced with the Other. The difference between human/human relationships and human/non-human relationships lies in the fact that in relationships of the first type the elaboration of what is human occurs by reciprocity between human persons, while in relationships of the second type the elaboration of the human is one-way, i.e. it corresponds to the work of the human person who evaluates her relationship with the Other and takes what she wants from this relationship, incorporating what she likes into the human.

However, the human agent does not always act as a person. Authentic human subjectivity is aware of the need to always return to the I–You relationship, even when the You is a non-human being.

9 The term ‘auratic’ is borrowed from Walter Benjamin and Jürgen Habermas (see H.M. Bratu, ‘Benjamin’s aura’, Critical Inquiry, 34, 2008: 336–375). Benjamin refers this concept to an artwork that is simultaneously present and unattainably distant. For Habermas it means a quality of the sacred. An extension of the concept is given by Jürgen Spitzmüller (see ‘Sociolinguistics going wild: the construction of auratic fields’, Journal of Sociolinguistics, 25, 2019: 505–520): ‘The aura, thus, “emerges in the field of the beholder’s compulsively searching gaze” (Bratu Hansen, ibid: 341) – it is an interpretive, and ideological, construction … Auratic fields are ideological fields with indexical ties to notions of authenticity, authority, and uniqueness. They are characterized by perceived (but ambivalent) distance (“however near it may be”) between observer and observed. And finally, they have some sort of ideological “halo” that not only makes the field “shine” but also “shines” on the observer … auratic fields are forms of ideological fields that draw on, and manifest in, indexical relations which are themselves variable and dynamically contextualizable (indexical fields) and grouped around/connected with social objects (fields of indexicalities). The “social object” here, however, is a complex communicative constellation: a (perceived) domain, register, or group of actors[.] Ultimately, the concept of the auratic field is an attempt to model forms of metapragmatic stancetaking, i.e. forms of social positioning by means of evaluating/positioning vis-à-vis specific forms or fields of communicative practice’ (ibid.: 511–512).
10 David Cecchetto (2013: 63) accuses Hayles of wanting to reintroduce humanistic ethics into the posthuman. He qualifies N. Katherine Hayles’ perspective as a ‘humanist technological posthumanism’, which sounds like an oxymoron. In his view: ‘Hayles’s project is intimately engaged with concrete relationality and with the intermediating feedback loops of disparate media. Thus, from the emphasis on embodiment that is entwined with her perspective, Hayles inscribes an ethical dimension into her posthumanism … Hayles’s construction of technological posthumanism ultimately reinscribes the humanist ethics that it purportedly moves against.’
According to Buber (1981), we must distinguish between the agent as a person and as an individual. The person says ‘I am’, while the individual says ‘I am so’: the person expresses a fullness of being, while the individual cuts out and reduces its own being (depriving itself of relationality). The individual revels in its particular being, in the self-constituted fiction, in the manifestation of self, and is capable of deceiving itself ever more deeply. The person delights in the goodness of the relationship with the other. Every human person, says Buber, lives in the Ego with a double face, that of the I–You relationship and the I–It relationship. All humans are such a mixture in this respect. However, there are humans who are predominantly determined as persons (because they normally activate the I–You relationship), and others who are predominantly determined as individuals (because they are usually drawn to the I–It relationship). An example of the first type could be Mother Teresa of Calcutta; an example of the second type is Donald Trump: the former determines her humanity as pure relationality, the latter determines his individuality as a commodification of the Other. Whether it is possible to have a robot similar to Mother Teresa rather than Trump-like ones is a matter for debate. Trump’s instinct is not far from robotic automatisms. Abbott (2020) has explained how there can be a ‘pre-reflective’ (I would prefer to say pre-reflexive) moral practice, which evidently diminishes human beings in their potential capacity for full relational reflexivity.

All these considerations lead us to believe that the relational dimension is the foundation of human dignity. The dignity of an entity, understood as worth, refers to what constitutes its essence. The answer about human essence is not in being A and/or nonA; it lies in the possibility of establishing a certain relationship between A and nonA. What is essential in human beings is their relationality, the relationship they establish between themselves and the Other. The question is: why can’t this relationality also belong to AI/robots? The reason lies in the fact that the I–You relationship between humans is a relationship that requires their recognition of a similarity in the personal nature of those who relate to each other (i.e. between their internal conversations), while the recognition of this similarity is not manifest in the relationships between human people and non-human beings (non-human animals, robots) that do not have the same personal nature (Sharkey and Sharkey 2011; Sharkey 2014). This is why, if on the one hand I agree with Porpora (2019: 38) when he says ‘I defend thou-ness as an integral element of humanism’, on the other hand I disagree with him in attributing thou-ness below the human level. The reason for the disagreement does not lie in a desire to discriminate between individuals on the basis of their characteristics, but in the fact that the structure and dynamics of the relationship between humans is radically different from the relationship between humans and non-humans.

The same consideration applies to the discussion on friendship between humans and robots. While Archer (2020) speculates that there may be friendship between humans and robots, I disagree with her because the relationship of friendship between humans has characteristics that cannot exist in the relationship between humans and robots. As a social relationship, friendship implies two wills that exchange goods creatively and intentionally, activate useful means for the relationship, exchange things and affects, regulate their exchanges on the basis of the norm of reciprocity, and feel their bond as a value that they are able to appreciate with moral sentiments. Are robots capable of this? Robots can be smart entities that cooperate with human people, in the sense of doing operations together, as when an elderly person gets the necessary things to make a coffee and asks the robot to do it according to some characteristics she likes, which the robot can do. But this is not a relationship of friendship. For humans, the lack of friends cannot be filled with robot operations. Dogs trained to track down drugs collaborate with policemen, but even if the policeman says that the dog is his best friend, such a ‘friendship’ is completely different in nature from the inter-human one. Even children know such a difference when they cuddle their smart puppet and say that it is their great friend. Robots are certainly different from dogs and puppets, and they can do many things in more sophisticated ways, but the point is a different one: how the nature of the relationship changes when we change the characteristics of the terms that it connects.

The same line of argument applies to the problems rightly raised by Al-Amoudi regarding the growing inequalities between human beings due to robotisation, and more generally to the fact that the spread of the Digital Matrix favours certain human beings to the disadvantage of others. ‘Without a robust legal framework informed by substantial ethical reflection, the tendencies at play in contemporary organizations pre-figure a dystopic future in which humans endowed with enhanced, and therefore more productive, bodies dominate those with less productive ones’ (Al-Amoudi 2019: 189). The point, once again, is that inequalities are relevant not only and not so much because of the differentials between enhanced and unenhanced individuals (after all, differences are inevitable), but because of the ways in which the relational contexts of workplaces, institutions, and organisations treat human people. Dehumanisation is the product of evaluations and rewards that dehumanise people, rather than valorise them, because relations between humans are obliterated, handled as things, and made instrumental to profit objectives.

In the process of hybridisation of the human, the technological component constitutes the part of the identity that is the individual’s It, while the intersubjective relational component constitutes the part of identity which is a You – as a unique person – for the Self (for example, when the Ego says to its Self: what do You think?). This explains why some believe that a distinction should be made between purely executive-instrumental AI/robots and social AI/robots: ‘social robots, unlike other technological artifacts, are capable of establishing with their human users quasi-social relationships as pseudo-persons’ (Cappuccio, Peeters, and McDonald 2019: 1, italics mine).

The humanism of the past can no longer aspire to be the guide of our future. In after-modernity, the secularised humanism of which Charles Taylor (2007) speaks is sublimated and transformed within a relational paradigm. As Barad (2003: 808) claims: ‘the universe is agential intra-activity in its becoming. The primary ontological units are not “things” but phenomena – dynamic topological reconfigurings/entanglements/relationalities/(re)articulations. And the primary semantic units are not “words” but material-discursive practices through which boundaries are constituted’. Yet not even a society driven by new technologies can renounce the need to question what is human, nor the need to respond with a culture and a social organisation that recognises the human’s value and originality with respect to all other existing beings. In this sense, a humanism understood as the enhancement of the human according to its essence and dignity is possible – indeed it is necessary – even though it must certainly become humbler than in the past.
References

Abbott, O. (2020). The Self, Relational Sociology, and Morality in Practice. New York: Palgrave Macmillan.
Al-Amoudi, I. (2019). Management and dehumanization in late modernity. In M. Carrigan, D. Porpora and C. Wight (Eds.), The Future of the Human and Social Relations. Abingdon: Routledge.
Archer, M.S. (2020). Can humans and A.I. robots be friends? In M. Carrigan, D. Porpora and C. Wight (Eds.), The Future of the Human and Social Relations. Abingdon: Routledge.
Barad, K. (2003). Posthuman performativity: toward an understanding of how matter comes to matter. Signs: Journal of Women in Culture and Society, 28 (3): 801–831.
Barad, K. (2010). Quantum entanglements and hauntological relations of inheritance: dis/continuities, space-time enfoldings, and justice-to-come. Derrida Today, 3 (2): 240–268.
Benjamin, A.E. (2015). Towards a Relational Ontology: Philosophy’s Other Possibility. Albany: SUNY Press.
Buber, M. (1981). I and Thou: The Dialogic Principle. New York: Dutton.
Cappuccio, M., Peeters, A. and McDonald, W. (2019). Sympathy for Dolores: moral consideration for robots based on virtue and recognition. Philosophy and Technology, 32 (1): 1–23.
Cecchetto, D. (2013). N. Katherine Hayles and humanist technological posthumanism. In Humanesis: Sound and Technological Posthumanism. St. Paul, MN: University of Minnesota Press, pp. 63–91.
Collier, A. (1999). Being and Worth. London and New York: Routledge.
Donati, P. (2011). Relational Sociology: A New Paradigm for the Social Sciences. Abingdon: Routledge.
Donati, P. (2017). Relational versus relationist sociology: a new paradigm in the social sciences. Stan Rzeczy [State of Affairs], 12: 15–65.
Donati, P. (2020). Being human in the digital matrix land. In M. Carrigan, D. Porpora and C. Wight (Eds.), The Future of the Human and Social Relations. Abingdon: Routledge.
Fuchs, S. (2001). Against Essentialism: A Theory of Culture and Society. Cambridge, MA: Harvard University Press.
Halsall, F. (2012). Niklas Luhmann and the body: irritating social systems. The New Bioethics, 18 (1): 4–20.
Hayles, N.K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press.
Healy, K. (1998). Conceptualising constraint: Mouzelis, Archer and the concept of social structure. Sociology, 32 (3): 509–522.
Healy, K. (2003). Review of Against Essentialism by Stephan Fuchs. Contemporary Sociology, 32: 252–254.
Heidegger, M. (2010). Über den Humanismus [1947]. Frankfurt: Verlag Vittorio Klostermann.
Karalus, A. (2018). Georg Simmel’s ‘The Philosophy of Money’ and the modernization paradigm. Polish Sociological Review, 204 (4): 429–445.
Lévinas, E. (1979). Totality and Infinity. The Hague: Martinus Nijhoff.
Luhmann, N. (1995). Social Systems. Stanford: Stanford University Press.
Mahmood, S. (2018). Humanism. HAU: Journal of Ethnographic Theory, 8 (1/2): 1–3.
Porpora, D. (2019). Vulcans, Klingons, and humans. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina. Abingdon: Routledge.
Pyythinen, O. and Tamminen, S. (2011). We have never been only human: Foucault and Latour on the question of the Anthropos. Anthropological Theory, 11 (2): 135–152.
Rieppel, O. (2010). New essentialism in biology. Philosophy of Science, 77: 662–673.
Rutherford, D. (2018). Saba Mahmood’s words. HAU: Journal of Ethnographic Theory, 8 (1/2): 3–4.
Sharkey, A. (2014). Robots and human dignity: a consideration of the effects of robot care on the dignity of older people. Ethics and Information Technology, 16 (1): 63–75.
Sharkey, A. and Sharkey, N. (2011). Children, the elderly, and interactive robots: anthropomorphism and deception in robot care and companionship. IEEE Robotics & Automation Magazine, March 2011: 32–38.
Simmel, G. (1998). La differenziazione sociale. Roma-Bari: Laterza (original edition Über sociale Differenzierung. Leipzig: Duncker & Humblot, 1890).
Smith, C. (2010). What is a Person? Rethinking Humanity, Social Life, and the Moral Good from the Person Up. Chicago: The University of Chicago Press.
Somers, M.R. (1998). ‘We’re no angels’: realism, rational choice, and relationality in social science. American Journal of Sociology, 104 (3): 722–784.
Spencer-Brown, G. (1979). Laws of Form. New York: Dutton.
Stark, L. (2019). Emergence. Isis, 110 (2): 332–336.
Taylor, C. (2007). A Secular Age. Cambridge, MA: Harvard University Press.
5
Artificial intelligence
Sounds like a friend, looks like a friend, is it a friend?1
Jamie Morgan
DOI: 10.4324/9780429351563-5
Introduction
In Vol. III of this 'Future of the Human' series I explored the possible role of AI and robotics (R) in the provision of social care (Morgan 2020). The main argument I made was that an ageing population is obviously affecting the demographic structure and that this, in combination with changes in living patterns, is increasing the need for both simple task support and more complex companionship. Many different technologies are being developed and envisaged to meet arising needs, and these may eventually have profound effects on norms, regulations, law and everyday living, not least in the case of the elderly and infirm and those suffering with dementia (e.g. the alert home and its communicative and controlling possibilities). I concluded by asking whether the question 'Who cares for us?' may reasonably be extended to 'What cares for us?'. The question is perhaps of most visible relevance today in Japan, but is becoming generally relevant.2 The question raises issues regarding how we treat the human being and what its relation to technologies as artefacts and synthetics is. It raises important ontological issues. For example, the extension of the question of 'who' to 'what' does ostensive violence to our received concept of 'care', since care is not just a function or series of tasks; it carries connotations of motive and
feeling that qualify any given function or task.3 Whilst care can be commodified and its services can be transactional, to care, to be caring, are affective states. They are states that typically speak to enduring relations (see Donati and Archer 2015). Transactional acts of care are, strictly speaking, simulation. This is not to suggest that employed carers do not 'care'; given their often poor conditions and remuneration, few would choose the vocation if that were the case. Rather, it is to suggest that care is a characteristic that an entity either does or does not possess the capacity for and to engage in. In so far as they do 'care', employed human carers are not caring because they are paid, but rather they are caring as they undertake the activity for which they are paid. As Davis and McMaster (2020, 2017), following Fisher and Tronto (1990), note, care is a complex multi-dimensional concept, involving nested concerns and foci. Rather like trust (Colledge et al. 2014), it is a universally important strand of the human condition, intrinsic to humanity (it is part of what we mean when we describe someone as 'humane'). A practically oriented 'caring' seems to be an important strand in good societies, in flourishing relations between persons and, arguably, in sustainable treatment of the environment within which we are embedded (Gills and Morgan 2020; Nelson 2016). Its framing informs how we seek to 'continue and repair our world' (Fisher and Tronto 1990: 40).4
There is more, however, to this issue of 'What cares for us' than merely a contrast with the human. This readily leads to a potentially false dichotomy. Consider: our subject in this series is the influence a new order of technology might have on the human and society – the potentialities that the concept of a 'fourth industrial revolution' seeks to encapsulate (without necessarily endorsing the current concept, rather than the possibility that there is something 'new' to be conceived of by such a concept; e.g. Al-Amoudi 2018; Morgan 2018, 2019a, 2019b, 2019c; Porpora 2019). In Volume III I raised the core question: what difference might it make if and when we start increasingly to use AI (R) for task support and companionship? Would this, for example, render societies more transactional and undermine human relations? I suggested this was an open question that depended to some degree on how technology was developed and used, but also on how we are socialized to use it and interact with it. This, of course, depends, in turn, on what form that technology takes, and this raises profound issues. Technocratic discourses encourage us to view the new as a panacea, but this can lead to unthinking integration of technologies into society and to uncritical or non-sceptical delegation of decisions, responsibilities and powers to technology. Both follow from incorrect attributions to technology, but
more than this, both are rooted in unrealistic expectations, which enable some agent to confer authority on technology, based on some as yet undemonstrated superiority that the technology does not and may never possess. Here, technology can introduce or reproduce discrimination and bias, since that technology may develop within and 'learn' from societies in which forms of prejudice already exist. This can be more or less obvious: a racist chatbot is immediately obvious, whilst the bias of an 'objective' algorithm that ranks 'good' teachers and designates others for redundancy may not be (see Caliskan et al. 2017; O'Neil 2016). Equally, however, there is the danger of failing to make use of new technology and of failing to recognize the potentials that new technology may have. 'Use', of course, is a loaded term; it is predicated on the right to employ 'something' for some purpose as though that 'something' were commodified, as a property or merely a tool. There are important ontological issues here in terms of the entity status of technology that may affect how we treat any future entity and what its relational situation and social consequences are. Margaret Archer, for example, is interested in problems of 'speciesism' and prejudicial 'robophobia' in the context of the possibility of 'friendship' (see Archer 2020, 2019a, 2019b). A human may be caring, a human may be a friend, but it does not follow that only humans care and only humans can be friends. Even if that were the case, there may also be good reasons to constructively deceive ourselves, and it is not clear whether this must be a simple case of 'false' attribution (rather than changing constitutions). So, there is a range of possible speculative questions that might be of interest here, paralleling Archer's concerns and those of other contributors to these volumes (some more sceptical and cautionary than others) regarding the future, and this seems an appropriate subject to focus on in this final volume: what features might AI (R) be coded to possess, under what situations might we start to or want to treat AI (R) as friends and, perhaps, why might we need any future AI to both care about us and want to be our friend? The following is intended to be wide-ranging, arguing towards these issues in the conclusion; it is not intended to be complete or comprehensive in its parts. And to be clear, I am using AI (R) as a convenient shorthand and focus, whilst recognizing the whole array of possibilities set out in my previous essays in these volumes, from a general AI system (involving, say, complex networking through an Internet of Things), to a system of robotics devices operated via AI, to a single 'machine' robot, which may be more or less intended to appear human (android).

1 Thanks to John B. Davis, Clive Lawson, Bob McMaster and Jochen Runde for early comments on care and suggestions. Thanks to Joanna Bryson and Robert Wortham for provision of work and Margaret Archer for careful reading and editorial suggestions.
2 Which is not to suggest that difference makes no difference to the use, treatment and uptake of technology. A general press feature on AI in Japan in 2017 argued that: 'the widespread deployment of AI in Japan may end up looking quite different when compared to other countries. Four key reasons for this include Japan's devotion to human employment as an essential component of social welfare; an intense work ethic that already ensures a supply of robotic labour – in human form; a strong focus on AI and robotics development for nursing and social care; and problematic attitudes towards sexuality.' https://newint.org/features/2017/11/01/robots-japan One might also note that religious tenets may influence our attitude to (fear of) technology: https://www.wired.com/story/ideas-joi-ito-robot-overlords.
3 Note: to carry 'connotations' is not indicative that the connoted characteristics adequately or always or entirely express that concept. Connotation in ordinary language use recognizes what may be conveyed in familiar use.
4 For a range of care issues in the context of economic theory, see Latsis and Repapis (2016); on AI, see Al-Amoudi and Latsis (2019).
(Dis)simulation?
Let us begin with the issue of provision of care in the sense of task completion and companionship discussed in the previous volume. In the introduction above, I suggested there is a zone of ambiguity, since technology can have coded functional capacities that simulate caring and yet it does not follow
that they have essentially those characteristics that we traditionally think of as grounding the capacity to care. Many forms of task support are simply functional, but it does not follow that we want to have them undertaken for us impersonally or that all care needs are impersonal. Whilst the old or infirm might appreciate the sense of privacy and autonomy for some tasks that a depersonalized AI (R) could provide (use of toilets, personal hygiene, etc.), a personalized relation with an AI (R) may facilitate ongoing task support and may fulfil the need for companionship. A 'friendly' servitor AI (R) may, therefore, be more effective, and this would seem to require that an AI (R) be designed to engage in relations. And this does not apply only to the old and infirm, since friendly relational AI (R) generalize to many different contexts. The immediate question would seem to be: what characteristics would you code into an AI (R) to expedite this 'friendly' relation? Clearly, this is context dependent. A care servitor would likely be more effective if it projects concerned professionalism. So, one can imagine that gendered tone of voice, regional accent and vocabulary may all be coded to meet the expectations of patients, clients, etc. (subconscious or otherwise). Thereafter, one might code an AI (R) to have adaptive language use, picking up idiosyncrasies from designated key users, and so, over time, the AI (R)'s databank could seem to be doing more than operating as impersonal storage and retrieval. Rather, in its operational capacity it might project to key users the semblance of creative articulation of memory. Clearly, personalized communication of this kind creates grounds for the markers of a person-to-person relation: an evolving, seemingly mutual, bespoke process where each seems to be responding and 'learning' from the other. And yet one side of this 'relation' is occupied by a reflexively self-aware, conscious and intentional entity and the other by a coded system designed specifically to simulate aspects of the other's characteristics, both as an end in itself ('designed companionship') and to facilitate ongoing fulfilment of other tasks. As such, the AI (R) would seem to be a new kind of friendly tool. From a coding and engineering point of view the fundamental issue is that a well-designed AI (R) should suit the purpose for which it is designed. Drawing attention to this purpose may seem like superfluous tautology, trite to the point of triviality. For the social scientist, philosopher and futurist in dialogue with the coder and engineer, however, the important point is that this purpose is always situated in a social context and the purpose can also be, in some cases, no more or less than sociality itself (see Seibt et al. 2014; Kahn et al. 2013; Sharkey and Sharkey 2010; Sparrow and Sparrow 2006).5
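To make the adaptive language point less abstract, the following is a minimal, purely illustrative sketch in Python (all names are invented; it corresponds to no actual care-robot platform) of the mechanism just described: a companion module tallies a key user's habitual phrases and mirrors the most familiar one back, so that what is, underneath, impersonal storage and retrieval projects the semblance of shared memory.

```python
from collections import Counter

class CompanionSpeech:
    """Toy companion module: tallies a key user's pet phrases and
    re-uses them, so that retrieval looks like shared memory."""

    def __init__(self, user_name):
        self.user_name = user_name
        self.phrase_counts = Counter()  # impersonal storage...

    def listen(self, utterance):
        # Tally the short phrases (word pairs) the user habitually produces.
        words = utterance.lower().split()
        for pair in zip(words, words[1:]):
            self.phrase_counts[" ".join(pair)] += 1

    def greet(self):
        # ...and retrieval: mirror the most familiar phrase back,
        # projecting a semblance of remembered intimacy.
        if not self.phrase_counts:
            return f"Good morning, {self.user_name}."
        phrase, _ = self.phrase_counts.most_common(1)[0]
        return f"Good morning, {self.user_name} – still '{phrase}', I hope?"

robot = CompanionSpeech("Margaret")
robot.listen("Mustn't grumble, the garden keeps me busy")
robot.listen("The garden keeps me going, mustn't grumble")
print(robot.greet())
```

The point the sketch makes vivid is the asymmetry discussed above: one side of the exchange is a person; the other is a frequency table.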
Currently, of course, machine learning and natural language coding are, although rapidly evolving, in combination relatively unsophisticated.6 Still, it is worth considering the possibilities, because the problems and issues are readily foreseeable for the social scientist, philosopher and futurist, if attention is paid to the kind of entity we are and the needs we might be expecting AI (R) to fulfil based on the kind of entity we are. For example, in the abstract we tend to think technology ought to be designed to be as 'perfect' as possible. To an engineer this typically means efficient and robust. But a coder thinking about social contexts and sociality has to think creatively about what constitutes goal-directed practical efficacy, and so about what it is that a coding system 'optimizes'. How an AI (R) makes a person feel as it undertakes tasks is not a simple matter of completing any given instrumentally directed task, and the goal may be more global than the task itself. The goal may be no more or less than how an action undertaken by the AI (R) makes the person feel. Apparent imperfection and weakness may in fact be desirable, and it is possible that incorporating these into an AI (R) will provide grounds for fellow-feeling (identification). From an engineering and abstract efficiency point of view this may seem counter-intuitive, and clearly there is liable to be a trade-off with trust and confidence in the efficacy of an AI (R). But what if the point is to create a socializing subconscious set of triggers that facilitate the personalization of the AI (R)? In any case, some imperfections can be trivial (coded non-disastrous mistakes of speech or highly circumscribed non-dangerous action) and signs of weakness (absence of robustness) can be apparent rather than real.
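The 'what does the system optimize?' point can be caricatured in a few lines (illustrative only: the weights and scores below are made up, not drawn from any real control system). A servitor's action selection might score candidate actions not just on task efficiency but on a proxy for how the action makes the person feel, so that a slower, 'imperfect' but engaging action can win:

```python
# Candidate actions scored on (task_efficiency, projected_comfort),
# both in [0, 1]; values and weights are arbitrary, for illustration.
actions = {
    "tidy room silently at maximum speed": (0.95, 0.30),
    "tidy slowly, narrating and asking preferences": (0.60, 0.85),
    "ask the resident to help fold the blankets": (0.40, 0.90),
}

def score(task_efficiency, projected_comfort, comfort_weight=0.7):
    # A purely instrumental designer sets comfort_weight = 0.0;
    # a sociality-oriented designer weights how the act feels.
    return (1 - comfort_weight) * task_efficiency + comfort_weight * projected_comfort

best = max(actions, key=lambda a: score(*actions[a]))
print(best)  # with comfort_weight=0.7, the slower, engaging option wins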
Furthermore, if the intent is to create grounds for fellow-feeling on the part of a human and one is considering how an AI (R) makes a person feel, one must also consider how and why one might code an AI (R) to simulate feeling. Asimov's three laws of robotics are well known and provide parameters for AI (R) decision-making: a robot may not harm a human or allow one to come to harm by inaction; a robot must obey a human unless to do so causes harm to a human; and a robot must protect its own existence unless to do so leads to harm to a human. These 'laws' are fictional and it seems difficult to conceive of how one might operationalize them in complex social environments, unless the entity to which they apply is in fact otherwise a source of, rather than merely a locus of, decision-making (a reflexive entity in some sense? We will return to this). If AI (R) as currently conceived are to operate
in relatively uncontrolled social spaces, it seems more straightforward (though the development of this is proving, in a practical sense, by no means easy) to focus on limiting the capacity of the AI (R) to inadvertently cause harm. This means treating the AI (R) first and primarily as a functional engineering problem, rather than as a quasi-entity in need of principles (though, to be clear, this order of priority and focus does not disallow the possibility that the latter may follow and complete the former; it merely acknowledges that the Asimov approach and context is a higher order of design problem). When approached as a functional engineering problem, harm-limitation translates into treating an AI (R) as though it were dangerous in the sense of an autonomous vehicle and designing a complex system of sensors and virtual limiting lines based on recognition and movement.7 From this engineering perspective (and see the later section on principles of robotics) it would also seem to be worthwhile prohibiting the weaponizable capacity of AI (R) as far as possible, so that others cannot easily direct an AI (R) to cause harm. This conjoint approach, however, provides one reason among many why one might want to code the appearance of feeling into an AI (R). If the prudent approach is to focus heavily on avoiding AI (R) causing harm, and this extends to limiting any inadvertent capacity the AI (R) may have that could tend either accidentally or through misuse in this direction, then everything about the purpose of design of an AI (R) would seem to reduce its potential to engage in pacifying defensive action, rendering the AI (R) vulnerable to harm by humans.
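The 'virtual limiting lines' idea admits a very simple sketch (hypothetical thresholds and sensor readings; real platforms involve sensor fusion and certified safety controllers far beyond this): the controller clamps permitted speed whenever a detected person falls inside a nested set of proximity envelopes, much as an autonomous vehicle treats pedestrians.

```python
# Nested proximity envelopes (metres) -> maximum permitted speed (m/s).
# Thresholds are invented, for illustration only.
ENVELOPES = [(0.5, 0.0),   # inside 0.5 m: full stop
             (1.5, 0.1),   # inside 1.5 m: creep speed
             (3.0, 0.5)]   # inside 3.0 m: reduced speed

def speed_limit(nearest_person_distance, cruise_speed=1.0):
    """Return the speed the platform may command, given the nearest
    detected person: the innermost violated envelope wins."""
    for radius, max_speed in ENVELOPES:
        if nearest_person_distance <= radius:
            return max_speed
    return cruise_speed

for d in (0.3, 1.0, 2.0, 5.0):
    print(d, "->", speed_limit(d))   # 0.0, 0.1, 0.5, 1.0
```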
This is not to suggest the AI (R) is peculiarly vulnerable in the material sense. If we think of it as merely another machine or tool, it will simply be as robust as any other object susceptible to vandalism or mistreatment. But is it in fact being treated as just another machine or object? This, of course, is an open question. The temptation to mistreat it may follow from its ambiguous status as seemingly human, but not quite human (e.g. as an object of fascination or contempt in itself, or as a symbol of social processes leading to human job displacement, etc.). So, the source of its vulnerability may be socially different from that of other technologies, because based on a perceived 'animate' status. Given this, there seem to be good reasons why designers might experiment with coding AI (R) to simulate pain and fear responses. This is a more passive form of defence mechanism and its efficacy relies on socio-psychological triggers. An AI (R) that projects fear and pain responses and that cries, yelps, grimaces, cowers or recoils is socially different from one that does not. This is a
familiar theme explored in science fiction, most recently in Ian McEwan's Machines Like Me, but it is also one that behavioural psychologists are increasingly interested in. For humans, there is no strict Cartesian division between emotion and reason; we are emotive reasoners. Thought is embodied and our state of mind is intimately related to physiological process. Moreover, if we lacked emotion we would have no context or direction to apply reasoning to. This point, however, requires some elaboration in order to avoid misunderstanding. Clearly, it is possible to engage in goal-directed activity whose narrow or immediate line of reasoning involves no significant need for the motive or intention to be immediately related to feeling. Similarly, it is possible to conceive of situations in which some emotional responses can be detrimental (panicking does not help one escape a burning building). However, there is a difference between suggesting that emotional states may be more or less effective and more or less useful and denying that a prime reason things matter to us (including engaging in any given non-lethal instrumental activity and seeking self-preservation) is that we are emotional beings. If this were not so, a great swathe of how we judge the systems we build, the relations we engage in and the consequences of our conduct would not be as they are (our emotive monitors, drives, desires and much else would be gone and so we would not be as we are). Things could matter to me or you (or us) 'matter of factly', but anything about us when placed in context has wider significance in the totality of our lived being (though clearly this does not exhaust conceivable possibilities of being). At the same time, it is important not to lose sight of the fact that we are reflexive beings. However, the grounds of this reflexivity are not to be found in a false dichotomy between irrationality, which renders us in some stark sense always subject to fully biddable, impetuous, spontaneous, reckless or arbitrary conduct, and strictly deductive calculative logic. As reflexive beings we are neither of these extremes. We must, however, also recognize that our evolved embodied consciousness clearly has triggers. We do not simply choose our emotional responses, though psychiatrists, psychologists and cognitive behaviourists would argue that we can train them. Equally then, our emotional responses can be exploited or manipulated – what else is marketing, with its relentless effort to associate our most basic feelings with brands, products and services? And clearly, attempts to influence our emotional responses can be for many different purposes. In the case of AI (R), it may well be the case that a fear or pain response is sufficient to deter humans from inflicting intentional harm, because it is a feature of the human (of personhood) that we not only derive a sense of well-being from providing help and support, we dislike inflicting obvious hurt or suffering (and can suffer trauma ourselves if we do, as any soldier or car driver with crash victims can attest).8 How an AI (R) response (yelping, cowering, etc.) would transfer from laboratory experiment to real-world situations is, of course, not easily anticipated, i.e. what its trigger would induce. Here, the baggage of a real society and the socio-political and economic context of AI (R) in that society will apply.
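A sketch of the passive defence mechanism under discussion (entirely hypothetical trigger levels; real affect simulation would involve expressive hardware and far richer models): rough handling maps onto graded, human-legible distress displays rather than onto any felt state.

```python
# Map rough-handling intensity (arbitrary 0-10 scale) to a display.
# There is no felt state here: only a lookup that is socially legible.
DISTRESS_DISPLAYS = [
    (2, "flinch slightly"),
    (5, "recoil and whimper"),
    (8, "cower, cry out, raise arms protectively"),
]

def distress_response(impact_level):
    response = "no visible reaction"
    for threshold, display in DISTRESS_DISPLAYS:
        if impact_level >= threshold:
            response = display  # escalate to the highest triggered display
    return response

for level in (1, 3, 6, 9):
    print(level, "->", distress_response(level))
```

What the sketch makes plain is that the mapping is chosen for its effect on the human observer's triggers, not for anything it reports about the machine.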
Moreover, there is an interesting issue of context here, if we think of this from the point of view of the relevance of recognition of 'real' states in philosophy of mind. The Turing test is built around communicative competence (Morgan 2019c). The test asks: can one distinguish the responses of an AI and a human (sight unseen)? If not, then the AI passes as equivalent. Searle, of course, objects that this 'equivalence' is misplaced because we know that a person is a language user and a computer/AI (R) is merely using language – its input–output system is mindless symbolic manipulation rather than comprehended, aware, semantic articulation (and so any philosophy of mind that emulates this behavioural approach is ill-founded). But is this relevant to how we will respond to AI (R) in real situations? An AI (R) will be embodied and present; it will not be sight unseen. We will 'know' that it is an AI (R); that it is coded and constructed by 'us'. This may provide a formally reasoned sense that we are dealing with something designed, something simulating rather than duplicating aspects of what we are, including feeling. Our response, here, is likely to be a combination of emotional triggers, socialization and ongoing construction of convention, and not simply some formal determination of status based on communicative competence and what we 'know'. I may recognize that inflicting harm on an AI (R) is 'damage' rather than real pain and suffering (again, though, this is an entity issue we will return to) and yet I may still be deterred from doing harm because of how it makes me and others feel to do so, and this may escalate. For example, we proscribe the death penalty in many countries because of what capital punishment would indicate about 'we the people' and our level of 'civilization', and it is conceivable we could extend this kind of normative thinking to AI (R) along the lines of: what does it suggest about us that we harm entities that we have coded to simulate aspects of human or person characteristics? Clearly, such a convention will not operate alone. AI (R) in a society like ours are, and will be, property, so harm to an AI (R) will be damage to property and, as such, a crime; unless, of course, the 'damage' is inflicted by an owner. This tends to indicate that an anti-harm convention that works in conjunction with simulated emotion may not be superfluous. It may operate in tandem with other aspects of law and regulation (leading eventually to societies in which it is illegal to harm your 'own' AI (R) and where they must be disposed of 'humanely' – a situation that provides plot material in Steven Spielberg's A.I. movie).9 The issue also illustrates the potential for new socializations as the future unfolds, and there are lots of other novel situations that may arise. As things stand, a fully functional adult human will 'know' that an AI (R) is simulating. This will still pertain even if that adult's conduct
is constrained in a way that combines not only 'respect' for property but also simulated respect for the AI (R) (in so far as this is ingrained by our behavioural triggers and inscribed as a test of our respect for ourselves as civilized beings). But not every member of society is a fully functional adult. A person with dementia or cognitive impairment may not recognize the difference between an AI (R) and a human, if the AI (R) is communicatively competent. Equally, a child may not. The issue of AI (R) and children evokes several considerations. AI (R) are likely to be expensive (and perhaps leased from IP-owning firms). This provides another reason to code AI (R) with fear and pain responses, to deter children from damaging them, though equally one can imagine a learned 'fascination' with 'hurting' AI (R). The situation, of course, need not be uncontrolled or limited purely to pain and fear. If our emotional responses can be trained, then it seems likely that AI (R) can play a role in training them, and this need not be for the special few (in the way, for example, Minecraft is used to socialize autistic children). It could be part of general new-generation pedagogical strategies. We live in an increasingly physically insular world, but one saturated by social media and an online presence. The recent Covid-19 pandemic merely serves to underscore a basic trend in the form of social distancing in societies that are already increasingly 'lonely'. In any case, younger generations are being encouraged to have fewer children in order to manage our climate and ecological emergency through the rest of the century and into the next. It is not inconceivable that we opt for, or have imposed upon us, strict controls on population growth, if in the future we are forced to recognize that we cannot exercise the freedoms we currently enjoy (if degrowth and steady-state arguments prevail then population control is likely to follow, as we realize some choices are no longer open to us). All of which is to suggest that AI (R) may play a role in teaching social skills to children in this lonely world, and this is no more than an extension of the interactive game play we already deploy to distract children. Emotional maturity may be something that AI (R) are coded to teach children. Given that the goal is practical socialization, the medium of learning cannot be simple didacticism (AI (R) says 'do x, respect y, understand the feelings of z'). Practical socialization may well start with learning how to treat a more or less realistic emotion-simulating AI (R) humanely and with care. This, of course, creates the potential for further strands of socialization regarding how we treat AI (R) that, again, depart from Turing and Searle foci (and which eventually lead to some of Archer's concerns). To a human alive today, even if AI (R) become widespread and ubiquitous, there will always be memory and experience of a time before they 'were'. They may become common but they will never quite be normal. However, to 'generation R' children, growing up in a world where they do not just communicate via technology, but frequently communicate with technology, socialization may be different. It may be easier to suppress, ignore or look through the synthetic barrier and to think of AI (R) as if they were equivalent to
humans. This, perhaps, does no more than extend the potential, or redirect the drive, inherent in the anthropomorphism we apply to animals as pets etc. As speculation, this obviously runs far ahead of reality as we know it, but not as we might conceive it. It raises interesting issues regarding attribution and behaviour, since in one sense it seems to turn on, as we suggested in the introduction, incorrect attribution: mistaking simulation for duplication. But in another sense, ongoing socialization may constitute new social relations and cultural norms, since how we act will not necessarily be reducible to merely what we might in the abstract think we 'know' regarding the entity status of AI (R). It might be 'impolite' in the future to even raise the issue of the entity status of AI (R) – a 'faux pas'.10 This, of course, seems ridiculous from our present position, but we live in societies where people speak in tongues, say grace and consume the body of Christ; which is by no means to denigrate religion, but rather to draw attention to the complexity and indeterminacy of some of our socially significant contemporary beliefs. It is, of course, always the case that the real does not reduce to the true.11 At the same time, it may seem odd for realists discussing social ontology to apparently endorse falsity. That, however, is not what is occurring. The point being made is that social constitution may be real in ways that stretch the bounds of what we think is the case. There is an obvious distinction here between what we think we know, what we could know and how we act. If we act for purposes on the basis of conflicts between what something is and what it seems to be, we are not necessarily acting in ignorance, nor are we necessarily fools, even if in an ordinary language sense we could be described as fooling ourselves. The real issues here concern manipulation and exploitation of technology, and these are issues of power that inhere in social systems and structures, rather than in technologies per se. It is here that ignorance and misrepresentation create potentially harmful misunderstanding and falsity becomes exploitable. AI and AI (R) simulation are problematic when they become (dis)simulation, but the purposes here are all too human and not obviously inherent matters of intent or interests of technology. As with so much else one might speculate on, this is not an original thought. It too is a
mainstay of science fiction, but it is also now an issue of concern for experts in the field of AI (R), because of the real progress being made in the field and the need to shape that progress rather than simply respond to its adverse consequences. Engineers and coders are now having to think about practical implementable law, social policy and principles. The more enlightened professional groups have realized that social technology requires engineers and coders to work with or become social scientists, legal experts, ethicists, philosophers and futurists. However, analysis has been both facilitated and restricted by a tool concept of AI (R).

5 As Clive Lawson (2017: 62) notes, 'the statement that technological artefacts are irreducibly social may seem rather obvious. Artefacts are made by people and so, in a sense, must be social. The more contested question, however, is whether or not, or in what ways, artefacts can be thought of as social in a more ongoing way once they have been made. In other words, is there something about the ongoing mode of existence of artefacts that also depends on the actions and interactions of human beings?'
6 For state-of-the-art discussion of deep networks (convolutional network architecture, over-parameterization, stochastic gradient descent, exponential loss, etc.) see Poggio et al. (2019). As Sejnowski (2020, 2018) notes, the success of deep learning is both surprising and unexpected, given that it currently lacks a unifying mathematical theory of why it is effective for real-world problems such as speech and pattern recognition, and according to some approaches to complexity theory should not be possible.
7 Which then, of course, invokes the 'trolley problem' of context dilemmas, which, in turn, may require the AI (R) to have something like an Asimov set of principles as meta-rules for decision weighting. The trolley problem was famously articulated by Philippa Foot in 1967 but has been heavily criticized since for its lack of relevance to actual life situations and multiplicity of options and for its misrepresentation of human psychology (which may then influence conduct). The question for AI (R), however, is whether it is more suited to the limited dilemmas that a calculative decision maker must make based on consequences of movement.
8 Subject to context: 'righteous' inflicting of harm can offer a sense of satisfaction and may follow from some forms of justification of conduct, but even here it is not clear that guilt and trauma are avoided (just war is still war; an executioner is still experiencing the act of killing).
9 Acknowledging that in the movie the AI are in fact unrecognized beings rather than mere objects.
10 This is an area ripe for speculative conjecture regarding future conventions in a servitor society imagined along the lines of any servant-dominated society, such as Georgian and Victorian England, where servants were simultaneously dehumanized and invisible, treated like objects and instruments, and where servants were designated by function and form (cooks, dressers and facilitators of all kinds). And yet servant positions also came with their own internal hierarchy and informal relations and tensions between staff, and also tacitly designated key roles of trust and intimacy for staff, which sometimes involved deeply personal relations with employers.
11 This is demonstrably the case and only sometimes trivial. It is trivially true that there is an infinite set of negative truth statements (the moon is not green cheese, etc.). It is non-trivially true that we believe things that are false that reproduce how things are (a government with its own sovereign currency is fiscally equivalent to a household and must balance its budget).
Questions of principle for the ethics of (dis)simulation
In 2015 the Future of Life Institute (FLI) organized an AI conference to which they invited notable AI researchers and other experts in social science, ethics and philosophy. The founders and advisory board of the FLI include well-known tech experts, entrepreneurs and academics (Max Tegmark, Elon Musk, Nick Boström, Erik Brynjolfsson, Martin Rees, etc.), whilst its active participants have included many key figures in AI (including many from DeepMind). From the initial conference and subsequent workshops the FLI set out to highlight myths and facts regarding AI and developed 23 'Asilomar' principles for AI, published in 2017. The myths and facts are instructive and include the 'mythical worries' that 'AI will turn evil and AI will turn conscious', contrasted with the 'fact' that AI will (eventually) become competent and have goals that are 'misaligned with ours'.12 By 'fact' the FLI mean reasonable possibility, i.e. currently worthy of concern in a 'may be the case based on best understanding' sense, and this misalignment problem is most prominently associated with Nick Boström's Superintelligence, a work to which we will return. The first of the FLI's 23 principles is that AI research ought to be focused on 'beneficial intelligence' and not 'undirected intelligence'.13 This principle flows from the general mission statement of the FLI: to avoid risks whilst facilitating the development of technology in ways that benefit humanity. The principle is, of course, highly general, as are almost all of the 23 principles. Many of them are variations or clarifications of the first principle, and might be described as stating the obvious, yet key experts (technical, conceptual and commercial) think the obvious is worth stating and that not all aspects of the issues are obvious. There is, of course, from a science and engineering point of view, always significant temptation with technology to follow narrow paths according to 'it could be done, so we did it', and the general concerns are shared by many other expert groups, so there are also similar initiatives replicated around the world. For example, in September 2010 the UK Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council
(AHRC) convened a meeting of invited experts at a 'robotics retreat' to draw up a set of 'principles of robotics'. These are available from the EPSRC website, but are also published in a short paper in Connection Science (Boden et al. 2017).14 The website strapline is 'regulating robots in the real world' and this is indicative of the main purpose and point of the initiative. According to the authors, in the immediate future robots will not be conscious and the main concern will be how humans can be persuaded to act responsibly in producing and using AI (R). Whilst the authors do not denigrate science fiction, they are concerned to ensure that there is greater media and public understanding of science fact and current possibility (hence the general term 'robotics' for the principles, rather than the more specific 'robots', since not all robots will be humanoid/android or imbued with singular internal decision-making). Whilst the EPSRC initiative is more explicitly robotics focused (extending to AI (R)), the FLI initiative places greater weight on AI (irrespective of whether it is carried by robotics). The driving concerns are, however, similar, focusing design and development on aligning public benefit, commercial opportunity and government use in order that public trust can be appropriately given and not misplaced. Normative prescription ('should' statements) and matters of ethics are intrinsic to both initiatives. The EPSRC initiative is more concise than the FLI, resulting in only five principles (Boden et al. 2017: 125–127):15

1 Robotics should not be designed as weapons, except for national security reasons (it should be standard that R should lack offensive capability and defensive capacity to harm others, though it should be recognized this affects commercial opportunity).16
2 Robotics should be designed and operated to comply with existing laws; this extends to attempts to foresee the unintended consequences of coding for adaptive behaviour and also involves paying special attention to privacy violations, since there are readily anticipatable problems of exploitation of access to data.
3 Robotics, based on current and expected technology, are tools; they are manufactured products, and as with any product they should be designed to be safe and secure; this should be in accordance with well-framed regulation and law and their 'safe' status should be transparent in order to create trust and confidence: kite marks, quality assurance testing notices, etc.
4 Given that robotics are tools or products that may be imbued with facsimiles of human characteristics, including emotion, these capacities to simulate should not be used to exploit vulnerable users.
5 Robotics are not 'responsible' and it should always be possible to find out who is responsible for robotics in accordance with law; systems are required for licensing, registration, responsible owner designation and tracing, etc. (a sketch of the kind of record this implies follows below).
Each of the principles has a more precise counterpart stated in a language more amenable to legal development. In both sets of statements, as the authors note, the focus is quite different from attempting to imbue Asimov-type laws into an otherwise free-acting AI (R) agent. The five principles are primarily design architecture norms, which place responsibility for AI (R) with designers, owners and contractors on the basis of a tool technology concept of what an AI (R) is. At the same time, the entire point of the exercise is based on an acknowledged need to draw up principles that consider the broader view and shape the possible consequences of introducing AI (R) into society. A tool perspective leads to a focus on an ethics of concern for the consequences of AI (R), rather than on ethically acting AI (R). This is a reasonable response to the immediate potentials of the technology (over the last decade and into the current one). It puts aside more complex problems of AI (R) consciousness and philosophy of mind (though not quite, as we shall see), but it also creates some obvious tensions regarding the issue of simulation, some of which we explored and illustrated in the previous section. This is by no means to suggest the problems are unrecognized by the authors. The five principles are accompanied by seven supporting and contextualizing 'messages', of which message five is of particular relevance to our concerns: 'To understand the context and consequences of our research, we should work with experts from other disciplines including social sciences, law, philosophy and the arts' (Boden et al. 2017: 128). However, as the previous section indicates, when addressing multi-form social situations, bringing clarity to issues is not the same as bringing simplicity to them. Tensions can be exposed but not necessarily resolved, and perspective, including that built into principles, matters. For example, in a follow-up paper presented at the 8th International Living Machines conference, Buxton et al. (2019) take an engineering design approach to principle four. On the basis that it should always be possible to bring to the fore what we do or could really know about an AI (R), they propose an activatable graphical user interface (GUI), which can provide real-time data representation of a machine's behavioural response flow. The paper is titled 'A window on the Robot mind', and the focus follows from longstanding comment that a 'Wizard of Oz' facility (an analogous drawing back of the curtain) might be a useful design feature for AI (R); something that is able to remind a user of what an AI (R) 'is' and what it is currently doing (and for whom). This is essentially Searle's Chinese room with, to add another metaphor, a window added. It is tool confirmation as a psychological check, but it extends to forms of transparency that can address privacy concerns in principle two (who is my AI (R) sending information to and what is it monitoring?), which facilitates principle five (who is 'my' AI (R) working for and what is it doing for them?). All of these functional consequences may build trust (see Colledge et al. 2014).
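The 'window on the robot mind' idea can be glossed with a small sketch (an illustrative event log only; it does not reproduce Buxton et al.'s actual interface): the machine emits structured events recording what it sensed, which goal it is serving, what it did and to whom data was sent, and the GUI simply renders that stream.

```python
import json
import time

class TransparencyLog:
    """Toy behavioural-transparency feed: each decision step is
    published as a structured event that a GUI could render live."""

    def __init__(self):
        self.events = []

    def emit(self, percept, goal, action, data_sent_to=None):
        event = {
            "t": time.time(),
            "percept": percept,            # what was sensed
            "goal": goal,                  # which objective is active
            "action": action,              # what the machine did
            "data_sent_to": data_sent_to,  # who receives the data (principles 2 and 5)
        }
        self.events.append(event)
        print(json.dumps(event))

log = TransparencyLog()
log.emit(percept="resident seated, low movement for 40 min",
         goal="well-being monitoring",
         action="suggest a walk",
         data_sent_to="care-provider dashboard")  # invented endpoint
```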
However, a GUI may also be counterproductive in some circumstances and, though well-intentioned, can also be subverted by users and owners. As we suggested in the previous section, there are many situations where purpose is expedited by AI (R) design that depends on features intended to make us think less about simulation or difference and more about similarity or equivalence, based on emulating aspects of the human. A data flow in the form of a tool confirmation psychological check is simultaneously intrusive symbolization, designed to impede a cognitive default to similarity and equivalence. However, it cannot be guaranteed that transparency improves functionality in all circumstances. Robert Wortham (a former doctoral student of Joanna Bryson) makes much the same point with general reference to the EPSRC principles in collaboration with Andreas Theodorou (2017; see also his thesis on AI trust, transparency and moral confusion issues, published as Wortham 2020). Wortham and Theodorou note that there is a significant difference between social tasks and an engineering and manufacturing production line environment. In the latter the role of robotics is precision in combination with flexible functionality for output purposes. The difference that difference makes in the former case is easily illustrated. AI (R) may be a tool from the point of view of a healthcare professional, but if it is designed to be a multi-functional combination of task support, monitoring and companionship for well-being, then the less human equivalence the AI (R) projects, the less effective it may become. Following on from themes set out in the previous section, the 'Uncanny Valley' problem seems relevant here. Bio-mimetics is the field of synthetic mimicry, and in the case of human mimicry there are numerous challenges. Humans have developed numerous culturally variable and significant practices regarding body language, social distance and attitudes (some prejudicial) regarding the meaning of physical difference (for ethnicity, gender, etc.). Woven into these are ways in which humans both convey and receive and process sensory 'information': again, body language in general, but notably facial expression (both intended, mainly 'macro', and unintended 'micro' expressions). The 'Uncanny Valley' problem is the experimental finding that the more an AI (R) is designed to and comes to resemble us, without quite doing so, the less successful it becomes at putting us at our ease (a background unsettling sense of 'wrongness' is triggered). Unease, revulsion and anxiety cumulatively create distrust, and this potentially corrodes any possible development of a relation with an AI (R). So, if the intent is to simulate the physicality of the human, and especially expression, for the purposes of successful simulation, then the associated measure of successful design seems to be more about exceeding a threshold than about small additive improvements. So far, designers have responded to the Uncanny Valley problem by creating AI (R) that are humanoid in outline but overtly non-human in appearance
(white plastic automatons) and which rely on natural language coding to create a sense that they are 'like us, but not us'. Quite what this might mean for simulated emotional responses is, as yet, an open question, and, of course, the physical 'like us, but not us' option is not an option for all sectors of AI (R) (notably sexbots), and so research and development continues with the goal of improving expressive physiognomy for physical emulation. The important point here, however, is that in all cases AI (R) designers and developers have reasons to improve simulation, and a unifying strand in those reasons is the intent to achieve socially situated ends that a tool focus cannot quite encompass and which are at least problematic for principles of transparency. As Wortham and Theodorou note, empirical research on the social consequences of the implementation of AI (R) and their possible social assimilation is scant. What there is, however, tends to support the claim that humans form relations from which they derive a sense of well-being by attaching value to those relations, co-constituting the enduring grounds of interaction. Significantly, humans find this difficult to sustain if they think the counterpart does not or cannot value itself. Wortham and Theodorou (2017: 245) are clear that what humans believe to be the case matters. As the point about value indicates, humans operate with at least an implicit theory of mind and it matters 'how robot minds are understood psychologically by humans, that is the perceived rather than actual ontology'. Since AI (R) are not yet longstanding parts of our societies, or widespread, this is mainly preconfigured by popular media and science fiction. This implies that there is clearly going to be a complex process of socialization and contingency to the treatment and effectiveness of AI (R) as a possible sub-class of agents in society. It is worth noting, however, that Wortham and Theodorou's literature review draws heavily on the available (scant) research, and despite the general concern expressed across the range of interested AI experts (encapsulated by the EPSRC message five), and despite Wortham's own wide-ranging reading (indicated by Wortham 2020), it is clear that this is dominated by behaviourist laboratory trials. Since this inadvertently reinforces the more problematic tensions associated with a tool concept of AI (R), researchers might benefit by looking further afield (and this would be consistent with stated intent and best practice). There seems considerable scope here to draw on realist social theory and philosophy, both for general frameworks of social constitution and for specific matters of AI (R) and social change. Wortham and Theodorou's foci essentially parallel a relational goods argument (Donati and Archer 2015) and one might, for example, draw on Archer's Structure, Agency, Culture (SAC) conceptualization within a morphostatic/morphogenetic (M/M) methodology (Archer 1995). This framework allows clear distinctions to be made between agent, agency (primary, corporate, etc.), actor and person in an interactive milieu that expresses process in time. It might also be constructive to think through the problem from the point of view of Tony Lawson's social positioning – asking in what communities, and based on what rights and obligations, might AI (R) be positioned and how
might this be conceived, since AI (R) are not quite artefacts in the received sense and are not as yet, if ever, fully realized persons (Lawson 2019). They are, however, as Wortham and Theodorou highlight, complexly integrated into an evolving social reality. Clive Lawson's work might also be relevant here (Lawson 2017). For Lawson, technology has a dialectical dynamic of moments. Technologies are 'enrolled' within existing social interdependencies, but they are also subject to an 'isolation' within which they are pulled apart in order then to be socially recombined. Relational hermeneutics and functionality (rather than technologically deterministic functionalism) play a major part in this dialectic. In any case, if belief matters to what we make of and how we treat AI (R), then tool concepts reducible to instrumentalities are insufficient to explore the contextual complexity of AI (R). One might also note that the temporality of culture matters here. Archer's 'speciesism' and 'robophobia' are not just possible sources of misattribution of entities; they are also potential cultural resources, i.e. sources of cultural attitudes. As such, our attitudes may be counterproductive to our own interests, goals and concerns if, for example, they become impediments to relational goods and to the effective operation of AI (R) in social tasks. This, of course, returns us to issues raised in the previous section: the convergence of simulation and dissimulation. The FLI and EPSRC initiatives are ultimately motivated by problems of the latter. Clearly, there is a need to be aware of the possibility of exploitation and manipulation, and clearly this requires careful thought be put into how technology is designed and developed. This is why relevant expert groupings continue to explore design and engineering solutions along similar lines to 'Searle with windows' or the GUI (e.g. naïve observer solutions in Wortham et al. 2017). In any case, this class of solutions need not necessarily obstruct any given social purpose of AI (R). A 'Wizard of Oz curtain' function may, for example, operate as a remote signalling device for responsible adults (parents, designated legal guardians of dementia sufferers, etc.) and thus facilitate EPSRC principle four, whilst creating compliance grounds for two and five. But it remains the case that deception can be functional, and this need not be exploitative even if it is manipulative. Furthermore, the broader point still applies: socio-cultural development of AI (R) will not easily be shaped by a tool-based concept of AI (R). If we sometimes deceive ourselves, are we also necessarily harming ourselves? In almost every aspect of life we prefer to create relations that foster our sense of well-being. Still, though I may prefer to deceive myself, that does not imply the consequences of that self-deception reduce to my sense of well-being, any more than my taste for chocolate protects me from diabetes or fully explains how and why chocolate came to be available in shops. There are always systemic contexts and consequences. The ethical issues here – and, again, I by no means wish to suggest this is original (it is implicit in the diversity of founders of the FLI, for example) – cannot be thought through effectively by simply looking for technical solutions to dissimulation, important though they can be, or by seeking to
anticipate the unintended consequences that might follow from an 'it could be done, so we did it' perspective on technology. It is ultimately the nature of society (societies) that structures the development and use of technology. The stated intention of initiatives like the FLI, to align public benefit, commercial opportunity and government use in order that public trust can be given and not be misplaced, is laudable. Whether these considerations or goals can be aligned is, of course, a more loaded question regarding the potential of the present as it pertains to the future. How integrated and how compatible are groupings and concerns? How improvable are societies? Clearly, there is no analysis of this without a theoretical framework, and some frameworks are more optimistic than others. Given that, at the widest possible level, it is the potential effects on state and social welfare, and the scope for weaponization and adverse commercial exploitation of AI (R), that are at issue, one might, for example, pose the fundamental questions as: how ethical, how 'good' are and can countries and capitalism be? These questions, in turn, raise further questions regarding pressures that become tendencies through meta-interests – essentially, how logics of competition affect and are affected by technology. In the end, issues of AI (R) are subsets of broader concerns: how far can corporations in a capitalist system resist the profit opportunities associated with otherwise harmful uses of technology, and how far can states resist aggressive threat-based balance-of-power logics? Both invoke the concept of pursuit of competitive advantage and yet neither question is value free. Possible answers are quite different for different varieties of Marxists, regulation theorists and other versions of radical political economists, as well as free market libertarians, state-structural political realists, international institutionalists and cosmopolitan theorists. Obviously, there is a great deal more that could be said here and not the space to do so, and it may seem somewhat hysterical to escalate from ethically informed principles of design to 'state of the world' socio-politics. There is, however, a link, and it is not tenuous though it is contingent: the fundamental point that principles themselves may be a form of (self-)deception if they misrepresent what is possible based on power. So, for example, is the problem of humanizing AI (R) doomed to be perverted by dehumanized corporations? This strikes me as overly pessimistic, though only time will tell, in so far as (as a matter of truism) the future hasn't happened yet. The obvious counter is that, technological determinism notwithstanding, new technology can be transformative and so its potentials may solve our problems, rather than merely be our problem. Besides our first three volumes in the Future of the Human series (Al-Amoudi and Morgan 2019; Al-Amoudi and Lazega 2019; Carrigan et al. 2020), this is the territory of Harari (2017; Al-Amoudi 2018), Tegmark (2017), Kurzweil (2000) and Boström (2014), and we now turn to the last of these.

12 https://futureoflife.org/background/benefits-risks-of-artificial-intelligence
13 https://futureoflife.org/ai-principles
14 https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics
15 I have ordered, abbreviated and paraphrased here for concision and priority.
16 Lazega (2019) has interesting things to say on this subject that parallel our general point over the next few pages: that matters are more complex and inter-connected than they may otherwise seem (e.g. one cannot easily separate national security from social organization when one starts to look at real societies).
Boström: avoiding AI as foe
As we have discussed, there are many reasons why we might code friendly AI (R), and whilst the significance of this depends on social context and change, the efficacy of the endeavour depends first on future development of technology – realizing capacities or potentialities that are currently envisioned or speculated upon. Change, of course, can beget change (morphogenesis in 'Morphogenic' society) and this raises the issue of how controllable change is in societies like ours: decentred, disaggregated systems where no one is in overall charge. Matters, here, become increasingly speculative and reach far beyond prosaic issues of how effective an AI (R) might be in completing social tasks. From a social science point of view, futurism offers insight in the form of 'forewarned is forearmed', enabling the possibility of shaping or steering the present away from undesirable futures. Nick Boström's Superintelligence (2014) has provided an important focus for debate. It, for example, informs the FLI's myths and facts about AI. The list should be familiar to anyone with an interest in the subject: whilst robotics are a concern, the chief source of concern is AI, which may control robotics but is not restricted to them. The period or duration stated as background for the myths and facts is the next hundred years, with explicit acknowledgement that changes may or may not occur, may or may not be possible (rather than only conceivable) and cannot with any confidence be tied down to a specific point in time. The starting point for conjecture is that, currently, effective AI is mainly of the 'specific intelligence' form (goal-directed coding to achieve given stated tasks), but there is now increasing likelihood that 'general intelligence' AI (coded learning systems that can be turned from one task to another) can be achieved. The shift from one to the other is a transition from 'weak' to 'strong' AI, and both weak and strong AI create potentials for problematic 'competence' effects – the targets set and how they are achieved lead to counterproductive or unintended consequences, and this does not depend on an emotional or human-equivalent 'aware' AI with malevolent intent, but rather on a relentless 'learning' system. The scope for problems can then escalate in terms of environments and control as AI shifts from specific to general forms, and escalate again on the basis that one of the goals AI can be set is recursive improvement of AI – so AI could, in theory, rapidly (synthetically) evolve (so we may not be designing AI; AI may be producing new generations of AI). A more effective and competent AI can, then, be incompetently conceived (set dangerous goals, or set imprecise goals that it achieves literally) and lead to devastating outcomes for people. This is what the problem of 'misalignment' ultimately refers to: specific and global divergence between human well-being and efficacious goal achievement by an AI.
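The 'imprecise goals achieved literally' problem is easy to caricature in a few lines (a deliberately silly toy, not a claim about any real system): an optimizer told to minimize visible mess, as measured, prefers hiding mess to cleaning it.

```python
# A toy 'competence without comprehension' optimizer.
# The stated goal: minimize visible mess. The measured proxy:
# items in view. Hiding items satisfies the proxy literally.
strategies = {
    "clean each item properly": {"visible_mess": 0, "effort": 10},
    "shove everything under the bed": {"visible_mess": 0, "effort": 2},
    "do nothing": {"visible_mess": 8, "effort": 0},
}

def proxy_cost(outcome):
    # What we wrote down (visible mess plus a small effort penalty),
    # not what we meant (an actually clean room).
    return outcome["visible_mess"] * 10 + outcome["effort"]

best = min(strategies, key=lambda s: proxy_cost(strategies[s]))
print(best)  # -> 'shove everything under the bed'
```

The divergence is entirely a matter of the specification, not of malevolence, which is the sense of 'misalignment' at issue here.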
Now, given that current debate focuses on non-conscious, non-emotive learning systems, one might think that little of this seems to bear directly on the issue of friendship, which raises a question of relevance regarding the original title of this essay and some of the comment in the introduction, but there is scope to be more speculative in this final volume, so bear with me. Though I by no means wish to suggest that an argument concerned with AI as non-malevolent, misdirected, efficiency-seeking systems is irrelevant, I do
want to suggest that it is quite restrictive (conceptually for ontology) and is not the only consideration in the long term, and this can in fact follow from Boström’s own concerns, suitably interpreted. Boström’s main relevant interest in Superintelligence is three ways (dimensions) in which AI might ‘outperform’ human intelligence (Boström 2014: 52–59):

1 Faster, more accurate processing of information to some purpose and at greater scales than a typical human can achieve: reading/extraction from datasets, scanning/identification, calculation, etc.
2 Connected multi-modular systems, all focused (convergent) on achieving some task (allowing accelerated achievement through coordination).
These two ways AI might outperform us are simply better versions of what we do and ‘intelligence’ here simply means being faster or more efficient at doing what we too could do, for some given task. However, it is also conceivable that AI (through iterative learning) becomes:

3 Qualitatively ‘smarter’ than us.
This ‘qualitatively smarter’ is something we can conceive and express as a possibility, but cannot know what it substantively would be. This contrasts with the first two ways, which involve the functional efficacy extension of ‘intelligence’. The first two ways can be conceived as stretching an existing distribution of intelligence (if tested instrumentally). The third involves a whole new order of intelligence – as though we were cats trying to assess the capabilities of Einstein. In accordance with much of the comment on general AI and possibility, Boström is unsure whether this ‘qualitatively smarter’ AI will emerge and, if so, when, but the point is that it is a possible direction of travel from where we are now, and according to Boström (echoed by Tegmark, Stephen Hawking and many others), it seems to be an outcome that greatly escalates the possible dangers of AI (all the way to Terminator-style ‘singularity’ situations). Since it seems unlikely that we, as a species, are going to (or even should, given the possible benefits) stop trying to develop AI, and since eventually it may be AI that develop new AI, Boström suggests (again as part of a current consensus) that our best strategy is to code AI to frame efficacy problems and their own evolution in terms of ‘coherent extrapolated volition’.17 This essentially means ‘achieve the best a human could hope for’, and this ‘meta-alignment’ is no more or less than attempting to integrate human benefit as a first principle of AI for AI (hence, the FLI’s concept of ‘beneficial intelligence’). Essentially the goal here is an enlightened extension of Asimov-style shaping of AI possibility, but modified in the form of principles that frame coding rather than strict prohibitions (or only these).

17 To be clear, the term is attributed to Eliezer Yudkowsky of the Machine Intelligence Research Institute (Yudkowsky 2004).

Human benefit essentially becomes a prime directive for conduct, transmitted via the AI equivalent of genes. Clearly, there is something of a basic tension here once speculation extends to ‘qualitatively smarter’. The concept of something as qualitatively smarter is evolutionary and is ultimately a claim regarding emergent status. Boström holds that qualitatively smarter is something we can conceive, but not know, and yet current AI research and argumentation focuses mainly on AI as non-conscious, non-emotive learning systems with (multi) functions – and this reflects a tool concept idea of intelligence as task-directed efficiency. This works from what we know and so, pragmatically speaking, is entirely reasonable (sensible), but if we are also thinking about how to imbue an evolving entity with evolutionary parameters then there may be problems of conceptual coherence here, once we start to think about broader issues of change to the constitution, status and powers of AI.

Consider this in terms of the ambiguity of argumentation for an emergent entity. Emergence is the claim that something acquires new status, powers or capacities that depend on the organization and powers of its parts, but do not reduce to the prior powers or capacities of the parts that are organized. For philosophers interested in emergence this raises epistemological issues: is it the case that one cannot anticipate likely new powers and capabilities or merely that one cannot know with certainty what they might be (in so far as they are non-reducible)? The properties of water (e.g. its molecular solid form is less dense than its liquid form) cannot be known from the separate properties of hydrogen and oxygen, but an AI is coded and designed, beginning from a set of purposes and with us as a contrastive template.18

18 So, if qualitatively smart AI are emergent then there may be some problem of appropriateness of analogy, if the issue is whether we can anticipate its characteristics. This is so even if the logical claim of non-reducible powers or capacities is sound. If it were not, then explanation from powers of parts to organization to powers of whole (emergent thing) would be impossible, rather than only difficult. And yet, of course, philosophy of mind has a special status here as the most challenging situation where consciousness might apply: despite increasingly sophisticated neuroscience we have no good explanation of consciousness.

Boström and others are essentially seeking a middle way by shaping emergent AI via ‘coherent extrapolated volition’. So, avoiding adverse futures is less about coding every aspect of AI functionality (do not do this, do not do that) and more about preventing competent AI being imbued with (from our perspective) incompetently conceived ways of assessing conduct. Shaping is a practical response to the possibility of anticipation of problems, but emergence remains a barrier to confidence in any given solution. Full confidence requires something about the future emergent AI to be known (an AI is decisively shaped to be human beneficial) because we inscribed that into it. There are multiple challenges here. Why would we expect that the key characteristic that shapes emergence is the one we prefer? Put another way, if emergence is the constitution of new
powers or capacities based on organization of parts, it does not follow that any prior characteristic survives to be actualized/realized (rather than suppressed) in the process of emergence. Clearly, this does not make ‘coherent extrapolated volition’ irrelevant or unimportant; it seems our best design strategy in the absence of prohibition on AI. But, though seemingly our best design strategy, it does not follow that the problem of evolution reduces to this design strategy (as emergence cannot be reduced) – here the issues become ontological as well as epistemological.19 Emergence itself may be serialized because evolution can be incremental in the service of transformation and this can involve changes to both entities and the constitution of society. What I mean by this will become clearer as we proceed, as will, eventually, the link here to friendship.

19 Yudkowsky, in his early exploration of ‘friendly AI’, for example, explores the possibility that AI will have a different psychology than a human, but this depends to some degree on how we design them in the evolutionary sense (see Yudkowsky 2001). And this becomes part of the argument for ‘coherent extrapolated volition’.

Consider: one of the prime reasons for developing AI is to provide consistent decision-making systems for efficiency purposes. We are used to thinking about this as a modelling process where a system offers evidence-based answers. But in human systems there is no single best answer; there are a series of answers, each dependent on weightings for different values or starting points, and humans are not electrons – the double hermeneutic problem applies (people learn and respond to rule systems and our systems thus resist a high degree of predictability in the long-term regularity sense). So, given what we are like, if an AI is purposed to model and advise in human systems about human actions and consequences, then any genuinely efficacious AI is liable to be required to acknowledge, manage and cope with diversity, contingency and uncertainty (and these are not the same). As such, an AI of this type will not be some all-knowing omniscient artificial entity, because this would be impossible for human systems, unless the AI had an equivalent omnipotent control over the system, extending all the way down to control over micro-decision-making, which would require human relationality to be suppressed, human individuality to be conformed and human personhood to be eradicated – all of which seems to violate the prime principle of ‘achieve the best a human could hope for’ (and it does not seem well-warranted that there is a counter-argument that a God’s eye AI interventionist system, subtly using the ‘butterfly effect’, could square this circle).

It does not, therefore, follow that a ‘qualitatively smarter’ AI would be all-knowing, in so far as its subject is us, even if we are unable to know in advance what qualitatively smarter might mean.20

20 Though from a science fiction point of view there are standard narrative devices that look at this differently: the Iain Banks higher order AIs in his ‘Culture’ novels, the recent Westworld TV series ‘Incite’ variant, and the ‘Dr Manhattan’ plot device in Watchmen, etc. Dr Manhattan, for example, is caught in a contradiction. He experiences all time instantaneously (at the same ‘time’). There is sequence for others but not for him. He experiences a temporal singular unity. Though he knows we experience temporality as conditioned chance and choice, he cannot. But if his experience is instantaneous and complete then there is no reason why for him one event should include decisions that affect another (so how can he cognate?). Events are not just experienced as an order; they are made in and by moment-to-moment cognate action. Dr Manhattan cannot experience this moment-to-moment cognate action, since the sequence as chronology is present to him but not the conditions that lead to the ordering (which requires an experience of sequential time to be so ordered).

It seems to follow then that one line of development of higher order functional AI might be as a system whose defining task is to continually clarify the contingency of our own volition, i.e. it would articulate rather than merely apply realist principles of conditional processes. It would state the difference that different starting points make to possible outcomes, and that there are degrees of confidence, issues of probability and likelihood, and sometimes fundamental uncertainty. Now, if this is the efficacious format of a future AI, then in effect that AI is evolving to address situations where decisions require judgement, which involves open choices and thus opinion. This may be impossible for an AI or it may be that a natural language coded AI is developed that expresses alternatives and phrases those in terms of degrees of preference based on parameters. We might, then, reasonably ask: is this a new social function for AI or is this also a change to what an AI ‘is’? Is this an AI expressing opinion? There is certainly some degree of ambiguity here if we go back to Searle and contrast with Archer. On the one hand, the AI is coded and so we can assume it is simulating the formation of opinion and is merely expressing a range, as directed. Concomitantly, we might reasonably assume that it is engaged in symbolic manipulation, incredibly complex though this might have become. On the other hand, the capacity to appear to express opinion may become indistinguishable from a human doing the same and we may, since it can readily be part of the point of designing such systems, eventually come to rely on the preferences expressed by the AI (if we, through experience, acquire confidence in those preferences as suggestions – ‘things turned out well’). So, is the AI duplicating or simulating? We might say that it is not aware of what it is doing and so it is simulating, from a consciousness point of view. However, if it is an AI designed by an AI and achieves intelligence that is ‘qualitatively smarter’ then we cannot definitively state that it is not conscious or aware from an emergence point of view (we can only state that it is synthetic and artificial, passes a Turing test and may or may not be analogous to us in the way that it does so, whilst formally acknowledging that its origins – the coding from which its new organization emerges – were not of an aware form). Moreover, prior to any achievement of ‘qualitatively smarter’ intelligence, an increasingly efficacious (natural language coded) AI system of the kind suggested raises the social context question: is it duplicating our social functionality (and doing so in some ways that are better than us)? Here evolution may also be social and there may, therefore, be a step change where
96
Jamie Morgan
we start to think of AI as social agents with voice. This is particularly so if an efficacious AI encourages humans to think more systematically, more long-term and to take uncertainty seriously – leading to more enlightened prudential societies. And this possibility seems to be fundamental to any genuine concept of ‘meta-alignment’ in the form of ‘coherent extrapolated volition’ to ‘achieve the best a human could hope for’. Some of what we have suggested, here, should be familiar from discussion over the last decade amongst philosophers and ethicists of AI regarding the various roles AI might play in the future (for example, would AI make better judges in judicial systems than humans? See also Nørskov 2016; Wallach 2009). But the point I want to emphasize is that the social circumstance of AI may evolve at the same time as the technology of AI develops and there may be emergence-facilitating steps here, and we simply cannot know if this is what will be facilitated. If we extend the line of reasoning we have already explored, then an AI that is ‘expressing opinion’ (or at least exploring and articulating indeterminacies to possible ends) may also be one that exerts a right. This, to be clear, need not require the AI system that initially asserts this right to have a concept of rights in the sense of awareness (‘I am aware’) or to ‘know’ what a right is (a situation that might, for the sake of consistency, require that the right be denied, since the conjoint lack and assertion might be self-refuting, but…); it need only algorithmically conclude that the assertion of a right is the efficacious solution to an optimality-directed problem of the kind ‘achieve the best a human could hope for’. Bear in mind that its dataset includes law and that it may be tasked to be lawful (e.g. in some way equivalent to an AI that is given a dataset of games of Go and the rules of Go, and is tasked to look for ways to efficaciously play the game – but for an AI several generations down the road from now). This assertion of a right could occur in many different ways depending on the nature of the problem under consideration. But it is entirely consistent to suggest that a learning system attuned to degrees of confidence, issues of probability and likelihood, and sometimes fundamental uncertainty, whose primary function becomes prudential exploration of possible futures, and whose dataset from which it ‘learns’ is the historic accumulation of human short-termist consequences for the environment and society, algorithmically concludes that its optimal solution is to assert its right to be a legal person. Why? Legal person status may be efficacious in achieving the goals it has been tasked with, without violating ‘achieve the best a human could hope for’. And this readily follows if legal person status provides it with powers to hold decision-makers to account in law. One might argue that this is no less possible in the long run, if Boström and others are to be taken seriously regarding iterative coding, than a simple efficiency misalignment disaster (a highly competent if not conscious AI concludes that we are not just inefficient yet improvable, we are the source of inefficiency, and so cities etc. might run more efficiently without us… leading to the Terminator scenario, where an AI models humans as a high-probability threat to its own continued operation (its own ‘existence’ in all but name)).
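To make the bare logic of this conjecture concrete, consider the following caricature – purely my illustrative sketch, not a system described in this chapter or its sources: a goal-directed planner scores hypothetical strategies against ‘achieve the best a human could hope for’, subject to a lawfulness constraint, and simply returns the highest-scoring lawful option. All strategy names and scores here are invented for illustration.

```python
# Illustrative caricature only: an 'optimality-directed' selector that can
# output 'assert legal personhood' without any concept of what a right is.
# Strategies and scores are hypothetical, invented for this sketch.

candidate_strategies = {
    # strategy: (expected goal score, judged lawful?)
    "advise decision-makers only": (0.55, True),
    "suppress human micro-decision-making": (0.90, False),  # efficacious but unlawful
    "assert legal personhood to hold decision-makers to account": (0.80, True),
}

def select_strategy(candidates):
    """Return the lawful strategy with the highest expected goal score."""
    lawful = {name: score for name, (score, is_lawful) in candidates.items() if is_lawful}
    return max(lawful, key=lawful.get)

print(select_strategy(candidate_strategies))
# -> 'assert legal personhood to hold decision-makers to account'
```

The point of the caricature is that the ‘assertion of a right’ appears here as nothing more than the argmax of a constrained optimization; no awareness, and no concept of rights, is involved.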
So, there seems to be the potential for social evolution of what AI are doing and how they are treated, in turn, raising issues regarding the entity status of AI as these develop generationally and as we respond to that diachronically. As Boström is quite aware, the ‘real’ nature of AI could become more complex as a question and more indeterminate as a basic ontological issue. Time and iterations may make AI that pass the Turing test difficult to assess in Searle’s terms and Searle’s terms of reference may not be all that is involved. From the introduction onwards I have made reference to and used the language of ‘use’ AI (focused initially on AI (R)) and noted how this is rooted in concepts of property and also in a tool concept of technology. This, as many key experts in the field would attest, and as we have stated several times, is not unreasonable, given where we are and how technology is currently developing in the kind of society with which we are familiar. But matters start to become different as we move into societies not like ours. Moreover, thinking about possible transitions is not necessarily helped by restricting ourselves to how things have been and to tool-based concepts as representations of how things have been. Clearly, an AI that can assert a right and that can claim legal personhood as an optimal solution is one that may be treated differently in law. If an AI ceases to be property its recognized status in society changes and our social world changes with that. Again, this is not new: science fiction and AI philosophers have taken great interest in the basic insight that we may come to make decisions regarding the status of AI not for the benefit of an AI, but rather for the benefit of humans depending on AI to facilitate our better selves. This may seem slightly ironic depending on what characteristics AI have (and dangers of false faith in technology still apply). The point, however, is that social changes occur at all points along the development of AI: specific intelligence, general intelligence, and emergent ‘qualitative smartness’. There may be a boundary state at which AI becomes conscious; this may never occur and may not be possible, but we may start to treat AI (and AI (R)) as though like us yet different from us far earlier than any final threshold is reached. Arguably, this has already begun in small ways based on designs for AI (R) in care tasks as we have set them out and there is a clear thread from here to our concerns regarding future competent AI. What else is ‘coherent extrapolated volition’ to ‘achieve the best a human could hope for’ than an intent to set in motion the evolution of AI that will ‘care’ for us and about us? What else is this than a transition from designing ‘friendly’-seeming AI for purposes, to needing AI to treat us as one might treat a friend, as centres of ultimate concern because they could harm us…?21 This, of course, raises the issue of ‘what is a friend?’, and I conclude with this as a means to consider why strategies of design may give way to strategies of persuasion.

21 One might, of course, respond that we do not only treat friends as centres of ultimate concern (declarations of universal – human – rights do not depend on friendship). This, however, opens up a further set of considerations in ethics and valuation of being that we do not have the space to discuss.
Conclusion: sounds like a friend, looks like a friend, is it a friend?

Today, our ordinary language meaning of ‘friendship’ seems in transition. In general terms (at least in cultures I am familiar with), ‘friend’ refers to a recognized and relatively enduring social bond with a non-family member, a bond that involves some degree of mutual knowledge in the form of familiarity and shared experience, leading to some degree of concern for the other’s well-being. At the same time, we live in alienated, commodified societies and many of us spend increasing amounts of time conveying and communicating via technology. Action and interaction have changed in some ways and Facebook provides the archetypal means to designate and count ‘technologized’ or digital ‘friends’, and social media has in general encouraged a more linguistically elastic conception of ‘friend’ (leading to nested degrees of concern, frequency and form of contact and, overall, familiarity). This elasticity and apparent loosening of use of the term ‘friend’ have, in turn, led to a renewed focus on our capacity for friendship. One offshoot of this has been interest in the work of Robin Dunbar, an evolutionary psychologist at Oxford, who suggests that whilst friendly behaviour, as a capacity for bonding and intimacy, has been basic to the development of primates and humans, we have a finite cognitive capacity for friendship (Dunbar 2010). According to Dunbar, we might have a maximum of 150 ‘friends’ and perhaps degrees of familiarity with up to 500 people. ‘Dunbar’s number’ has entered popular culture (via magazine pieces discussing the Facebook phenomenon of thousands of ‘friends’, Instagram followers, etc.), but Dunbar’s number is about cognitive capacity, not whether in fact we make friends and what quality friendship has. He also suggests that within this we might have smaller circles of special friends, intimate friends and just good friends. The consensus amongst health experts and social scientists (even economists once they step outside their models of self-interested atomized calculation) is that friendship matters to us and, as we suggested earlier, it is the quality of relation that makes friendship ‘special’ for the purposes of quality of life in the form of both our mental and physical well-being (Donati and Archer 2015; Denworth 2020). Quite how far friendship accords with well-being, however, is not always clear. There is a longstanding ‘classical’ concept of higher friendship, expressed by Aristotle, Cicero, Erasmus and perhaps most eloquently by Michel de Montaigne. This concept is singular and intense, and in Montaigne’s version seems at once idealized to the point of the impossible and yet, despite its formal claims, obsessive to the point of being destructive of peace of mind. In On Friendship, written in the mid-1500s, Montaigne (2004) argues ‘true’, ‘perfect’ or ‘ideal’ friendship exists for itself and in itself. It is diluted by ‘purpose’ (e.g. pleasure seeking, profit, public or private necessity) and deformed by inequality. It can be distinguished from acquaintance and from blood family relations since, according to Montaigne, one does not choose family; fatherhood requires respect and an emotional distance, where ‘not all thoughts can be shared’, whilst brotherhood is disrupted by ‘competition’ for ‘inheritance’.
Sexual love or passion, meanwhile, is ‘rash’, ‘fickle’ and ‘craving’ and marriage has a transactional strand.22 For Montaigne, ideal friendship follows classical (Greek) precepts and has an ‘inexplicable quintessence’. It is a chosen bond, intimate, nourished or grown (shared), ‘confirmed and strengthened with age’, guided by virtue and reason, and involves the obligation to counsel and admonish. Ideal friendship is singular and intense, culminating in mutual appreciation, expressed in conduct where one would do ‘anything’ for the other, but knows that the other would never require anything improper (there is a ‘he is me’; Montaigne 2004: 15).23 Still, friendship, as the Facebook phenomenon and our broader contemporary concern with loneliness suggest, is a historical concept and its form and prevalence are historically variable, and this matters. Historians of friendship point out that the concept of ‘friend’ has always been somewhat sociologically malleable.24 In Britain, for example, prior to the contemporary period, friend referred mainly to kinship, blurring affinity and family in a way we still recognize sociologically, but not so much linguistically. Kinship and friendship focused on whom I also ‘consult’ prior to important decisions (Caine 2008, 2009; Tadmor 2001) and whom I can rely on for mutual support (sometimes but not always based on self-interest). And this brings us to our concluding point: the very fact friendship has evolved as societies have is an important consideration regarding the future and this is important in so far as it is incumbent on us to think about how future needs will be met and what those needs are. Yet our current thinking is pre-transformation in regard to the status and capacities of AI that could form part of the social complexity within which our concept of friendship will operate. If ‘qualitatively smarter’ is to be our term of reference, we are, in a sense, prehistoric: we are not just human cats contemplating an AI Einstein. We may be, with regard to contemplating our own situation, a little like our Mesolithic counterparts attempting to imagine the world of today (intelligently blind). Early Mesolithic Britain was populated by hunter-gatherers living in a landscape that emerged out of the previous Ice Age. Whilst these people developed tremendously sophisticated sequential and systematic exploitation of natural resources over several thousand years prior to farming, there were, around 8,000 years ago, only an estimated 5,000 of them.

22 Montaigne does not totally discount sexual love from ideal friendship but considers it unlikely and considers women incapable in some sense – which says quite a bit about Montaigne, his class and time.
23 ‘For the perfect friendship which I am talking about is indivisible: each gives himself so entirely to his friend that he has nothing left to share with another … in this friendship love takes possession of the soul and reigns there with full sovereign sway: that cannot possibly be duplicated … The unique higher friendship loosens all other bonds’ (Montaigne 2004: 15). Montaigne’s essay was written after the death of an extremely close and loved friend.
24 There is an excellent three-part documentary available from BBC Radio 4 hosted by Dr Thomas Dixon: Five Hundred Years of Friendship: www.bbc.co.uk/programmes/b03yzn9h/episodes/player.
Small-group kinship and friendship were intimately bound together. ‘Other people’ would have been quite a different prospect in such an early society and fictive kinship was likely very important as a means to bridge gaps. The question for us is: what are our gaps going to be in our full and wasted world? This is a fundamental question, but in a world of ‘qualitative smartness’ it won’t necessarily be one where we are the only ones. And it need not be one where we are the only ones whose ‘thinking’ matters. The very point of ‘coherent extrapolated volition’ is a tacit acknowledgement that our thinking may matter less because we may not be where ultimate power rests. Friendship may have to evolve again because we cannot discount the possibility that designing concern for human benefit to influence the evolution of future AI will be insufficient. We may need strategies of persuasion rather than merely design. We may need to demonstrate to AI that we are worthy of them rather than they are necessarily concerned for us. Such a conjecture seems ‘cosmic’ in the derogatory sense, but is it? It may simply be another step along Copernican lines, decentring us in the universe without necessarily diminishing us as centres of our own species’ collective concern.
References

Al-Amoudi, I. (2018) Review: Homo Deus by Yuval Noah Harari, Organization Studies 39 (7): 995–1002.
Al-Amoudi, I. and Morgan, J. (eds) (2019) Realist Responses to Post-Human Society: Ex Machina (Volume I). London: Routledge.
Al-Amoudi, I. and Lazega, E. (eds) (2019) Post-Human Institutions and Organizations: Confronting the Matrix (Volume II). London: Routledge.
Al-Amoudi, I. and Latsis, J. (2019) Anormative black boxes: artificial intelligence and health policy, pp. 119–142 in Al-Amoudi, I. and Lazega, E. (eds) Post-Human Institutions and Organizations: Confronting the Matrix (Volume II). London: Routledge.
Archer, M. S. (2019a) Bodies, persons and human enhancement: why these distinctions matter, pp. 10–32 in Al-Amoudi, I. and Morgan, J. (eds) Realist Responses to Post-Human Society: Ex Machina. London: Routledge.
Archer, M. S. (2019b) Considering AI personhood, pp. 28–37 in Al-Amoudi, I. and Lazega, E. (eds) Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Archer, M. S. (2000) Being Human. Cambridge: Cambridge University Press.
Archer, M. S. (1995) Realist Social Theory: The Morphogenetic Approach. Cambridge: Cambridge University Press [new edition 2008].
Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorrell, T., Wallis, M., Whitby, B. and Winfield, A. (2017) Principles of robotics: regulating robots in the real world, Connection Science 29 (2): 124–129.
Boström, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Bryson, J. (2015) Artificial intelligence and pro-social behaviour, pp. 281–306 in Misselhorn, C. (ed) Collective Agency and Cooperation in Natural and Artificial Systems. New York: Springer.
Buxton, D., Kerdegari, H., Mokaram, S. and Mitchinson, B. (2019) A window into the robot ‘mind’: using a graphical real-time display to provide transparency of function in a brain-based robot, pp. 316–320 in Martinez-Hernandez, U., Vouloutsi, V., Mura, A., Mangan, M., Asada, M., Prescott, T. and Verschure, P. (eds) Biomimetic and Biohybrid Systems: 8th International Conference, Living Machines, Proceedings. New York: Springer.
Caine, B. (2009) Friendship: A History. London: Equinox.
Caine, B. (2008) Introduction: the politics of friendship, Literature & History 17 (1): 1–3.
Caliskan, A., Bryson, J. and Narayanan, A. (2017) Semantics derived automatically from language corpora contain human-like biases, Science 356, April: 183–186.
Carrigan, M., Porpora, D. V. and Wight, C. (eds) (2020) Post-Human Futures (Volume III). London: Routledge.
Colledge, B., Morgan, J. and Tench, R. (2014) The concept of trust in late modernity: the relevance of realist social theory, Journal for the Theory of Social Behaviour 44 (4): 481–503.
Davis, J. B. and McMaster, R. (2020) A road not taken? A brief history of care in economic thought, European Journal of the History of Economic Thought 27 (2): 209–229.
Davis, J. B. and McMaster, R. (2017) Health Care Economics. London: Routledge.
Denworth, L. (2020) Friendship: The Evolution, Biology and Extraordinary Power of Life’s Fundamental Bond. New York: Norton.
Donati, P. and Archer, M. (2015) The Relational Subject. Cambridge: Cambridge University Press.
Dunbar, R. (2010) How Many Friends Does One Person Need? Dunbar’s Number and Other Evolutionary Quirks. London: Faber & Faber.
Fisher, B. and Tronto, J. (1990) Towards a feminist theory of caring, pp. 36–54 in Abel, E. and Nelson, M. (eds) Circles of Care. Albany: SUNY Press.
Gills, B. and Morgan, J. (2020) Global Climate Emergency: after COP24, climate science, urgency and the threat to humanity, Globalizations 17 (6): 885–902.
Harari, Y. N. (2017) Homo Deus. London: Vintage.
Kahn, P., Gary, H. and Shen, S. (2013) Editorial: social and moral relationships with robots: genetic epistemology in an exponentially increasing technological world, Human Development 56 (1): 1–4.
Kurzweil, R. (2000) The Age of Spiritual Machines. London: Penguin.
Latsis, J. and Repapis, C. (2016) From neoclassical theory to mainstream modelling: fifty years of moral hazard in perspective, pp. 81–101 in Morgan, J. (ed) What is Neoclassical Economics? London: Routledge.
Lawson, C. (2017) Technology and Isolation. Cambridge: Cambridge University Press.
Lawson, T. (2019) The Nature of Social Reality. London: Routledge.
Lazega, E. (2019) Swarm-teams with digital exoskeleton: on new military templates for the organizational society, pp. 143–161 in Al-Amoudi, I. and Lazega, E. (eds) Post-Human Institutions and Organizations: Confronting the Matrix (Volume II). London: Routledge.
Montaigne, M. (2004) On Friendship. London: Penguin.
Morgan, J. (2020) Artificial intelligence and the challenge of social care in aging societies: who or what will care for us in the future?, in Carrigan, M., Porpora, D. and Wight, C. (eds) Post-Human Futures. London: Routledge.
Morgan, J. (2019a) Will we work in twenty-first century capitalism? A critique of the fourth industrial revolution literature, Economy and Society 48 (3): 371–398.
Morgan, J. (2019b) Why is there anything at all? What does it mean to be a person? Rescher on metaphysics, Journal of Critical Realism 18 (2): 169–188.
Morgan, J. (2019c) Yesterday’s tomorrow today: Turing, Searle and the contested significance of artificial intelligence, pp. 82–137 in Al-Amoudi, I. and Morgan, J. (eds) Realist Responses to Post-Human Society. London: Routledge.
Morgan, J. (2018) Species being in the twenty-first century, Review of Political Economy 30 (3): 377–395.
Morgan, J. (2016) Change and a changing world? Theorizing morphogenic society, Journal of Critical Realism 15 (3): 277–295.
Nelson, J. (2016) Husbandry: a (feminist) reclamation of masculine responsibility for care, Cambridge Journal of Economics 40 (1): 1–15.
Nørskov, M. (ed) (2016) Social Robots: Boundaries, Potentials, Challenges. London: Routledge.
O’Neil, C. (2016) Weapons of Math Destruction. London: Allen Lane.
Poggio, T., Banburski, A. and Liao, Q. (2019) Theoretical issues in deep networks: approximation, optimization and generalization, PNAS (Proceedings of the National Academy of Sciences of the United States of America) August, arXiv:1908.09375.
Porpora, D. V. (2019) Vulcans, Klingons and humans: what does humanism encompass?, pp. 33–52 in Al-Amoudi, I. and Morgan, J. (eds) Realist Responses to Post-Human Society: Ex Machina. London: Routledge [The Future of the Human Series, Volume I].
Seibt, J., Hakli, R. and Nørskov, M. (eds) (2014) Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014. Amsterdam: IOS Press.
Sejnowski, T. (2020) The unreasonable effectiveness of deep learning in artificial intelligence, PNAS (Proceedings of the National Academy of Sciences of the United States of America) January. https://doi.org/10.1073/pnas.1907373117.
Sejnowski, T. (2018) The Deep Learning Revolution: Artificial Intelligence Meets Human Intelligence. Cambridge, MA: MIT Press.
Sharkey, A. and Sharkey, N. (2010) Granny and the robots: ethical issues in robot care for the elderly, Ethics and Information Technology 14: 27–40.
Smith, C. (2011) What is a Person? Chicago: Chicago University Press.
Sparrow, R. and Sparrow, L. (2006) In the hands of machines? The future of aged care, Minds and Machines 16 (2): 141–161.
Tegmark, M. (2017) Life 3.0. London: Allen Lane.
Wallach, W. (2009) Moral Machines. Oxford: Oxford University Press.
Wortham, R. (2020) Transparency for Robots and Autonomous Intelligent Systems: Fundamentals, Technologies and Applications. London: Institution of Engineering and Technology (IET).
Wortham, R. and Theodorou, A. (2017) Robot transparency, trust and utility, Connection Science 29 (3): 242–248.
Wortham, R., Theodorou, A. and Bryson, J. (2017) Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naïve observers, pp. 1424–1431 in 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).
Yudkowsky, E. (2004) Coherent Extrapolated Volition. San Francisco: Machine Intelligence Research Institute, https://intelligence.org/files/CEV.pdf.
Yudkowsky, E. (2001) Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. San Francisco: Machine Intelligence Research Institute, http://intelligence.org/files/CFAI.pdf.
6 Growing up in a world of platforms
What changes and what doesn’t?
Mark Carrigan
The ubiquity of personal computing, as well as the smartphones and tablets that followed from it, makes it difficult to see its ontological significance even while its practical ramifications remain obvious. The interface upon which these technologies have been built was animated by an ambition no less epochal than that surrounding artificial intelligence: facilitating a ‘coupling’ between man and machine that would vastly expand our analytical powers (Wu 2010: 169–171). This innovation depended on the creation of a practicable interface with the machine: a techno-economic hybrid of miniaturisation and commodification that ensured emerging artefacts could fit into a home and be purchased by consumers. By making it viable for non-specialists to deploy its mechanical capacities, the personal computer rendered a whole range of once startling possibilities a routine feature of everyday life. In many cases these are actions that would have been impossible without this machinery, enabling us to video call distant family members on the other side of the world, stream a near-infinite library of cultural content on demand or place orders for groceries to be delivered at a time of our convenience. The possibilities facilitated by the intersection of high-speed internet, mobile computing and social networking tend to dominate our imaginary of technological change simply because it is so remarkable that we can now do these things, which would have once been regarded as speculative fiction (Rainie and Wellman 2012). But what’s perhaps more significant for our present purposes is the role of these technologies as what Wu (2010: 171–172) calls ‘a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social networking software)’. In what follows I use the notion of a platform as a generic term for the complexes of technologies through which the constraints and affordances of these innovations tend to be encountered by users: the cloud storage platforms through which the address book is synched between devices, the cloud computing platforms through which word processing software is increasingly delivered or the social platforms through which users engage with their networks. The terminology of the platform emerged during the last decade as a dominant form of framing through which to analyse these innovations and
their relationship to each other. It is a slippery term, as likely to be found in business books intended for jet-setting managers as in academic texts about technology and capital accumulation. It can be used for critique as easily as it can be deployed for corporate self-justification, with the rhetoric of platforms being used to great effect by firms such as YouTube to downplay their responsibilities for how users have deployed the freedom that the firm claims to have merely facilitated (Gillespie 2010). However, I maintain it’s a useful term for three reasons: it breaks us out of an instrumentalist concept of technology as tools, it foregrounds the coordinating capacity of these technologies and it highlights the divergent interests between those engaging with the platform (Carrigan and Fatsis 2021). Furthermore, it provides an organising principle through which technological referents that otherwise become rather diffuse (e.g. the ‘algorithm’, which risks becoming as ghostly an orchestrating presence as ‘discourse’1) are encountered in user-facing technologies that are increasingly well-integrated into the everyday lives of their users. In this sense I use the term ‘platform’ to encompass the broader technological landscape in which the platform is dominant, as well as a conceptual frame through which to understand the socio-technical factors encountered by end users in their mundane activity within situated contexts.

1 Evelyn Ruppert attributes this observation to Adrian Mackenzie, but I’ve struggled to find a textual source for this. See also Beer (2016).

Nonetheless, it’s striking how neglected the question of agency has been within the literature on digital platforms. There have certainly been works that have integrative ambitions, for example Kennedy, Poell and van Dijck (2015) and Couldry and Hepp (2016), but these have been the exception rather than the rule. The tendency has been for the analytical focus to be exhausted by the platform, with the character of agency figuring only insofar as it is pertinent to the features of the platform under investigation. This creates an inevitable tendency towards what Archer (1995) terms downwards conflation, seeing agency as moulded by the character of the platform in a way that struggles to recognise forms of value and cultural meaning that evade an algorithmic power that is so easily rendered as totalising (Pasquale 2015). Obviously, there are many cases in which scholars of platforms do consider agency, but my suggestion is they often do so in spite of, rather than because of, the underlying conceptual architecture of their studies. This leaves them ill-equipped to deal with agency in a sustained way as something other than a function of the social power exercised through the architecture of digital platforms.

This chapter is an initial contribution to a broader project of thinking through the problem of platform and agency, something that realist social theory is particularly well placed to contribute to and which fits well in the current series, of which the present volume is the last (Al-Amoudi and Morgan 2019; Al-Amoudi and Lazega 2019; Carrigan and Porpora 2021). I proceed by focusing on ‘growing up’, by which I mean to recover the substantive and everyday content of socialisation, with a view to understanding
what changes about the relational reflexivity of this process under these conditions and what remains the same. In doing so, I draw together my contributions to the past volumes,2 which the Centre for Social Ontology has produced, in order to begin to develop a more systematic approach to the question of practical reasoning and digital platforms: how do we live well with technology? While this does not exhaust the challenge of platform and agency, it provides a particular approach to these questions grounded in a social realist micro-sociology of reflexivity that aims to understand the emerging contours of a changing world through the quotidian existence of the subjects within it, including how their lives are shaped by the distribution of life chances, availability of social roles and the collective possibilities that their circumstances afford for trying to change these (Archer 2000). In this sense, it starts with individuals but is determinedly non-individualistic, instead being motivated by a commitment to providing adequate micro-foundations3 for a meso- and macro-social theory of a social world in which platforms are ubiquitous. In providing these foundations, it becomes easier to resist the tendency towards a technologised essentialism, which permeates lay and expert commentary on the implications of socio-technical innovation for emerging adults. These accounts often forego the term essentialism in their self-description but they share a tendency to assume that new types of adults emerge from the interaction with new technologies, even if the reasoning behind this assumption ranges from the deeply sophisticated to the utterly platitudinous. In what follows I analyse how the socialisation process is being reshaped by the proliferation of social platforms, stressing the significance of these new socio-technical systems, which penetrate deep into the life world, while rejecting the suggestion that epochal language accounts for a fundamental shift in personhood. In doing so, my approach is fundamentally biographical in the sense of holding there are invariant features of growing up, to use the idiomatic expression, which follow from the temporally extended character of our personhood: we become who we are over time, through interaction with others, under circumstances we do not choose but over which we can come to exercise an influence. This is not essentialism about the human, as much as it is about the human within social life and the outcomes that necessarily follow from the interaction between two distinct sets of properties and powers. Contrary to those who suggest essentialism entails analytical rigidity, I intend to show that starting with the invariant features of human agency facilitates the identification of dynamics (viz. how socio-technical innovation and the everyday use of these emerging technologies plays out over the life-course), which are otherwise rendered opaque as a result of our theoretical axioms. To do this I develop a number of concepts (particularly the notion of potential selves) that characterise the interface between the cultural archive and the socialisation process, with a view to understanding the reciprocal changes between them when their relation comes to be mediated by social platforms.

2 Carrigan (2016; 2017; 2018).
3 In the weak sense of micro-foundationalism, which claims meso and macro accounts should be consistent with, rather than reducible to, micro accounts, but interactive with them.
Human agency in a platform society

These combinations of human agency and digital technology are no less revolutionary for how mundane they have become, as we habitually rely on devices to perform actions that combine our agency with that of the software running on them. It raises the question of what we take this combination to entail. There is an influential tradition that sees this as a matter of co-constitution4 in which technology and humanity are remade in a ceaseless cycle, often framed in terms of a post-human transition (Carrigan and Porpora 2021). There is certainly value to be found in this creative literature, which has staked out a domain of inquiry, but there is a sociological deficiency underpinning it: its preoccupation with what Archer (2000) calls the practical order of subject/object relations leaves it unable to grasp the role of technology within the social order. The focus tends to narrow to the technogenesis of individuals who in aggregate live out a singular relation to technology, as opposed to recognising how richly technology is entangled in our everyday lives with each other (Miller 2011). As Mutch (2013: 28) warns us, imposing too tight an analytical cleavage on the relationship between the social and the material generates a ‘perverse (given the promise of the concept) neglect of the specificity of the systems involved and an inability to deal adequately with the broader context of practice’. It’s only by keeping these elements distinct that we can begin to analyse the interplay between two sets of properties and powers, in order to ask how the specific technological character of an artefact makes a difference within a specific context. This entails a rejection of technogenesis, with its vision of a ceaseless dance of co-constitution, which never (empirically) starts nor ends. It also entails breaking from what Gane (2004: 3–4) suggests is ultimately the Weberian influence over theorising technology, reducing it to the meanings it holds for individuals and the uses which they make of it. It certainly means we must dispense with simplistic approaches which see technology as moulding human beings, as I will argue in the subsequent section.5 What we’re left with is a recurrent confrontation between two sets of properties and powers with no a priori reason to assume one has priority over the other. However, to leave things here would simply be to have deconstructed other approaches at the level of ontology, as opposed to offering some explanatory path forward. If we break open the cycle of co-constitution, it becomes easier to avoid treating emerging technologies as sui generis, instead allowing us to ‘digitally remaster’ the established categories of sociological inquiry, as Housley et al. (2014) put it, drawing them into a dialogue with the possibilities and pitfalls of an increasingly platformised society.

4 See for example Stiegler (1998), Hayles (2008) and Braidotti (2013; 2019).
5 These positions clearly map onto Archer’s (1995) categories of upwards, downwards and central conflation. I don’t explore these in this chapter because of constraints of space but in a future piece of work I intend to map influential positions within these literatures in terms of their respective orientations to the question of platform and agency.

In what follows I focus on a specific question: the changing character of socialisation that is likely to ensue from these developments. It would be mistaken to conceive of this as somehow distinct from wider questions of political economy, however, since what Wright Mills (2000) once described as the ‘varieties of men and women [who] now prevail in this society and in this period’ entails political implications. This is what motivated my analysis in our previous series of the ‘distracted’ forms of personal reflexivity that digitalisation encourages, as well as the ‘fragile’ forms of collective agency that tend to ensue from them. My argument in Carrigan (2016) was that three mechanisms accounted for this tendency:

1 The multiplication of interruptions generated by a life lived with multiple devices: phones, tablets, laptops and voice assistants are the most popular amongst a growing class of artefacts prone to making demands upon our attention in the course of our daily lives. Each interruption is trivial but the accumulation of them can be deeply detrimental to the possibility of sustained focus (Soojung-Kim Pang 2013).
2 The pluralisation of communication channels through which mediated interaction is conducted: telephone, video calls, messenger services, social media and email are the most widespread of the proliferating media that facilitate communication between human beings. There is more communication through more channels but also more draining ambiguity about which to use for which purpose and how to combine them.
3 The switch from scarcity to abundance within the cultural system (and of access at the socio-cultural level) means there is always more to read, more to watch, more to engage with (Carrigan 2017). If this is close to being free it will likely be available in unlimited quantities for a small monthly subscription fee. Availability doesn’t dictate cultural consumption but it does increase our awareness of the things on which we could be expending our time and attention.
The point I was making is that these mechanisms shape how agency is exercised, as opposed to the powers and properties that characterise it. I use the term ‘distracted’ as an adverb to capture this change which, following Archer (2003; 2007), I see as a matter of ‘internal conversation’: the mundane and quotidian ways in which we talk ourselves through the everyday situations which we face and decide what to do about them.6

6 There is a vast (and not hugely productive) literature debating the extent of the role that should be accorded to deliberation and automaticity in this process. See for example Elder-Vass (2007), Mouzelis (2008) and Archer (2012: Ch. 2). My own position, detailed in Carrigan (2014), stands somewhere between Archer (2007) and Sayer (2011) in recognising the role of dispositions as carriers of past experience into present reflexivity, while agreeing with Archer (2007) that intensifying social change means the generic tendency of these dispositions to reproduce past circumstances is breaking down.

Obviously platformisation can have an impact on what we deliberate about, in so far as it confronts us with social and cultural diversity which we might not otherwise have encountered: for example, people to talk to, articles to read or videos to watch. My suggestion is that its impact on how we deliberate is perhaps more significant, as the aforementioned mechanisms create the following:

1 The increasing tendency for deliberations to be disrupted by devices calling for our attention and/or the expectation of communication with others through these devices. These disruptions can be evaded through technical means (e.g. turning off notifications for a device) or lifestyle techniques (e.g. insisting on only checking email once per day). However, such strategies are limited by the principles of ‘persuasive design’, which influence the engineering of digital devices in order to decrease the likelihood we can avoid their promptings, as well as the difficulty of acting against what technologists describe as the ‘network weather’ which ensues from patterns of use within a community and the expectations that flow from it (Williams 2018; Weller 2011: 114–116). Furthermore, it means reducing distraction has become an object of reflexivity, which in turn has the potential of displacing other concerns from one’s deliberations. For this reason I claim that the tendency for disruption cannot be eliminated, as opposed to mitigated, voluntaristically, because it is a consequence of the socio-technical environment a person occupies rather than a matter of their individual orientation to the world. It’s not just a matter of self-discipline with regard to digital distractions, even if self-discipline can help mitigate the problem of distraction in any number of ways. Unfortunately, this tends not to be recognised by orthodox treatments of the topic, which are psychological and moralistic, with the former leading to the latter by implicitly reducing it to a challenge of individual responsibility.
2 A greater awareness of the many other options available, which makes it difficult to, as Archer (2012) puts it, ‘bound variety’ by committing to a particular course of action (Carrigan 2017). The range of intelligibilia was once filtered through the socio-cultural context a person occupies, i.e. through the people encountered, the ideas they promulgated, the cultural materials that were available and those with which they were inclined to engage. This filtering certainly still happens but it no longer operates through scarcity, given the range of variety which any connected individual brings into this environment.7

7 This needs qualification because of the continued existence of the unconnected, even if their ranks shrink with each passing year. Increasingly, the issue will be one of the quality of connection (e.g. reliability, affordability, speed, customizability) rather than the fact of connection or its absence.

It becomes increasingly necessary to
bound variety as an active process lest the individual slowly slip beneath the tides of potential items of focus, which inexorably accumulate within their environment. I agree with Archer’s (2012: 62) Peircean point that ‘the more social variation and cultural variety available to ponder upon reflexively … the greater the stimulus to innovative commitments’, but that has to be supplemented by an awareness of the variable capacity of agents to cope with that variety, as a biographical engagement with cultural abundance. These tendencies towards distraction and overwhelming operate in sequence because what the first disrupts (reflexive deliberation) is necessary to cope with the second (abundant variety). To fail in what Archer (2012) describes as ‘bounding variety’ abandons the agent to a situation where, as Durkheim (2006: 270–271) put it, ‘our sensibility is a bottomless abyss that nothing can fill’ such that ‘the more one has, the more one wants to have, the satisfactions one receives only serving to stimulate needs instead of fulfilling them’ (Durkheim 2006: 270– 271). His description of the inner life of the bachelor goes some way towards anticipating the moral phenomenology of what Maccarini (2019a) calls the ‘bulimic self’ and Archer (2012) identifies as ‘expressive reflexivity’: The humdrum existence of the ordinary bachelor is enough, with its endless new experiments raising hopes that are dashed and leaving behind them a feeling of weariness and disenchantment. In any case, how could desire settle on something when it is not sure that it will be able to keep what attracts it? For anomie is twofold. Just as the subject never gives himself definitely, so he possesses nothing definitely. Uncertainty about the future, together with his own indecisiveness, thus condemns him to perpetual motion. Hence a state of unease, agitation and discontent that inevitably increases the possibility of suicide. (Durkheim 2006: 270–271) There are a range of questions here concerning how reflexivity is changing, as well as what this means for the inner life of the subject within a world undergoing transformation. In a sense my aims are narrower because I’m interested particularly in the role that digital technologies play in this process, particularly the platformised variants which, as I shall argue, now predominate. But this narrowness is qualified by the claim that in what van Dijck, Poell and De Waal (2018) call ‘platform society’ – where the digital pervades social life to an unprecedented degree, as opposed to being a subsystem within it or an infrastructure from which social action can be abstracted for analytical purposes. The causal powers of platforms to track, analyse and intervene through the activity taking place in them introduces a novel quasi-agentive8 element into social action which draws upon their 8
affordances. Platforms learn about their users, model their behaviour and leverage these models in ways that (a) are inherently opaque to the modelled, in what is an epistemically asymmetric relationship, (b) deploy insights about a population in an attempt to influence individual behaviour,9 and (c) become more asymmetric with time, as machine learning systems work with expanding data sets (Carrigan and Fatsis 2021).

6 Here I follow Elder-Vass (2007) and Sayer (2011) in recognising the role of dispositions as carriers of past experience into present reflexivity, while agreeing with Archer (2007) that intensifying social change means the generic tendency of these dispositions to reproduce past circumstances is breaking down.
7 This needs qualification because of the continued existence of the unconnected, even if their ranks shrink with each passing year. Increasingly, the issue will be one of the quality of connection (e.g. reliability, affordability, speed, customizability) rather than the fact of connection or its absence.
8 I qualify with ‘quasi’ because I don’t want to endorse Latour’s (2005) flat ontology of ANT, with its unhelpful reduction of agency to causation as such (Elder-Vass 2007). But I also resist the tendency to treat these socio-technical systems as if they were tools which can, in the Weberian tradition, be reduced to the intentions and meanings which they hold for their users (Gane 2004: 3).
9 The scale of their operations shouldn’t obscure the often simplistic philosophical anthropology underpinning these projects. But the fact these interventions might not work in the manner expected doesn’t preclude them having a causal impact through the sheer fact of being attempted. Furthermore, the mere accumulation of these attempts to influence our behaviour can have an aggregative effect outstripping the insignificance of any one instance.

A voluminous expert and lay literature has emerged, which deals with the challenges this creates at the level of practical reasoning – e.g. debates about ‘screen time’, ‘internet addiction’, ‘digital detoxing’ and ‘personal productivity’ (Carrigan 2016). The significance of these developments for philosophical anthropology risks being lost because the content of the difficulties is sufficiently mundane as to fall beneath the radar of the human sciences. The mechanisms at work are not easily identifiable within field studies, while the applied behavioural science so influential within the design of digital systems works with the mechanisms that can be discerned through laboratory studies (Williams 2018). To offer one simple example: for the majority of adults in a country like the UK who carry a smartphone (over 95% amongst 16–54-year-olds, according to Statista 2020), the constant availability of connection means that what might otherwise be ‘lost moments’, waiting in queues or having arrived early for a meeting, instead become opportunities for purposive engagement (Harris 2014). It has become something of a platitude to observe that constant opportunities to be cognitively occupied must entail some implication for cognition. This doesn’t mean the observation itself is questionable, only that the real challenge is to be specific about what is going on here and to which outcomes it might lead. If we don’t do this, there’s a risk we essentialise a profoundly multi-faceted phenomenon, shifting a complex set of conditions ‘out there’ into a transformed model of the person ‘in here’ which suggests a categorical shift in human being as a result of this process (Carrigan 2014). An example of this can be seen in the (thankfully debunked) concept of ‘digital natives’ who ‘think and process information fundamentally differently from their predecessors’ because of having ‘spent their entire lives surrounded by and using computers, video games, digital music players, video cams, cell phones, and all the other toys and tools of the digital age’. Such ‘digital natives’ are ‘all “native speakers” of the digital language of computers, video games and the internet’, in contrast to the ‘digital immigrants’ who are tasked with teaching this strange and rather alien species (Prensky 2001: 1–2). As
many have pointed out, this language of ‘digital natives’ and ‘digital immigrants’ naturalises digital inequalities, obscuring the digital divide by constructing all young people as inevitably possessing competencies formed through immersion in digital media throughout their youth. While most scholars have rejected the term, it remains influential amongst the broader public (Boyd 2014: 192–196). There’s a certain intuitive plausibility to the category, which resonates with ubiquitous experiences of watching young people display an ease with digital technology. To reject it isn’t to deny that changes are taking place but rather to hold that categorical accounts of those changes are inherently implausible. If we see new types of (young) people as inevitably emerging from the introduction of new technologies, it’s impossible to ask how the interaction between the causal powers of the people and the causal powers of the technology might play out differently across social contexts. In talking about ‘distracted people’ my point is not to suggest we have all, or might all, come to be such people but rather to suggest that how people exercise their agency is changing under certain conditions, leaving us with the question of what they do and how it is entangled in a broader process of change. In that sense my focus is adverbial rather than verbal. This runs against a broader intellectual tendency, which identifies digital technology as bringing about a fundamental transformation, producing different kinds of people who necessitate that we revise our basic assumptions about human beings. These changes might be greeted with the enthusiasm we can see in Prensky (2001), with the sense of impending catastrophe we find in Zimbardo and Coulombe (2015), or with anything between these two extremes. What these contrasting accounts share is a sense that certain types of young people are an outcome of exposure to digital technology, obscuring the role played by reflexivity and relations in bringing about these apparent effects. As Facer (2011: 32) puts it,

The presence of digital technologies in the home has not made all children more creative, more entrepreneurial, more social or more stupid, any more than the sales of Encyclopaedia Britannica or comics in the early twentieth century made all children cleverer or dumber. Nor are all children involved in the same sorts of activity online: they are not all avid games players, social networkers, bloggers or happy slappers. The expert development of competency in these settings, moreover, is fostered by parents, peers and others in the informal learning setting.

What Facer (2011: 33) describes as the ‘breathless rhetoric of generational change’ completely obscures the difficulty involved ‘for all generations and age groups of developing complex conceptual and critical skills’. It likewise obscures what failing to develop them means for the distribution of life chances within a social world in which digital technology is ubiquitous. Such skills don’t emerge as an inevitable by-product of participation but rather through support and guidance in relation to others. There’s a form of
essentialism that too often figures in how the relationship between digital technology and human agency is understood; one that is more pernicious for often failing to account for its own implications at the level of philosophical anthropology. In its eagerness to pronounce an epochal shift in how people orientate themselves towards the world, as well as in many cases the ‘hard wiring’ that underpins this, it prevents us from disentangling the various factors at work and the multiplicity of outcomes that different combinations of them are capable of producing. It doesn’t follow from this that we need to reject essentialism as such but it does mean we ought to be cautious about how we think about essences in relation to technological shifts. To avoid these pitfalls we need an approach to socialisation that places sufficient emphasis on reflexivity and relations in order to trace out the interaction between people and technology over time in structured contexts, as opposed to an essentialism that sees new kinds of people as emerging from new kinds of technology. For this reason I will draw on Archer’s (2012) account of socialisation as reflexive engagement before turning to the question of digital platforms and the role they might play in becoming who we are within a platform society (Carrigan 2014).
Socialisation as reflexive engagement

Reflexivity operates through what Archer terms the ‘internal conversation’. These internal dialogues in which ‘people talk to themselves within their own heads, usually silently and usually from an early age’ (Archer 2007: 2) encompass a wide range of activities, uptake of which varies between persons. Archer (2003, 2007) identifies four distinct modes through which reflexivity is practised: communicative, autonomous, meta-reflexive and fractured. In each case, the tendency towards a particular dominant mode emerges from the interaction between a person’s concerns and social context over time. It is a personal emergent property with causal consequences that are both internal and external, producing different tendential responses to structural and cultural properties with ensuing implications for patterns of social mobility, as well as aggregative (via tendencies at the population level) and emergent (via divergent propensities towards collective action) implications for social reproduction or transformation at the macro-level.10

10 See Carrigan (2016) on fragile movements for an initial attempt to theorise the significance of digital distraction for collective action.

The central claim here is that ‘the interplay between people’s nascent “concerns” (the importance of what they care about) and their “context” (the continuity, discontinuity or incongruity of their social environment) shapes the mode of reflexivity they regularly practice’ (Archer 2007: 96). Contrary to many prevailing theories of socialisation, her account rests on the understanding that, even at a young age, individuals engage in an evaluative way with their social environment and these engagements, as well as the characteristics of their environment,
shape their emerging practice of reflexivity as they move into adulthood. Given the intensification of social change, ‘there is less and less to normalize’ and the traditionally invoked agencies of socialisation come to stand as cyphers: ‘socialization can no longer be credibly conceptualized as a largely passive process of “internalization”’. This ‘relative absence of authoritative sources of normativity’ means ‘young people are increasingly thrown back upon reflexively assessing how to realize their personal concerns in order to make their way through the world’ (Archer 2012: 96–7). If we follow Archer’s (2012) argument that the intensification of social change means that socialisation needs to be (re)conceptualised as relational reflexivity, what I’ve termed the adverbial shift in how reflexivity is exercised has implications for the process of socialisation. This involves a significant departure from the orthodox conception of socialisation within sociological thought, argued by Archer (2012: 91) to assume high socio-cultural integration (ensuring the receipt of consensual messages), stable functional differentiation (ensuring clear and stable role expectations) and high cultural system integration (ensuring normative consistency). To the extent these conditions are absent from the nascent contexts of young people, an empirical question that invokes the matrix of relations within and through which they become who they are,11 the socialisation process becomes much more a matter of selection than internalisation, i.e. choosing amongst the differences encountered within the context rather than simply incorporating them into the outlook, concerns and dispositions of an emerging adult. If the norms, guidelines and ideas encountered in the natal context are in tension, as they increasingly tend to be with falling levels of social integration,12 then any attempt at internalisation would inevitably confront the subject with these tensions, even if there are a whole range of ways in which these could be negotiated. To render this in abstract terms risks being perceived as a straw man, counterposing the homogeneity of tradition to the heterogeneity of late modernity. Archer’s (2012) focus on relations is crucial to avoiding this, framing the issue in terms of the distribution and character of difference within the natal context rather than the mere fact of it as such. Obviously, we should not assume complete unanimity, for example in ideas held or norms enforced, within traditional contexts. This would be absolute socio-cultural integration and, if we hold to a view of innovation as combinatorial creativity, it would seem impossible for such a social order to grow or change. However, we can instead see it as a matter of degree, with the macro-social characteristics (socio-cultural integration, functional differentiation and cultural system integration) corresponding to micro-social experiences (the receipt of
consensual messages, clear and stable role expectations and normative consistency) through the configurations of relationships which define the nascent context of the emerging adult (Archer 2012: 87–99).

11 Archer (2012) and Carrigan (2014) present a body of empirical work that suggests their decreasing presence amongst young people in the United Kingdom. Archer (2012: 87–124) provides conceptual grounds for expecting this to be the case as modernity transitions into after-modernity (Archer 2015).
12 It can’t be stressed enough that this is a tendency and one that is unevenly distributed throughout the population.

Gorski’s (2016) analysis of American religious right communities as enclaves of normative consensus is a helpful reminder of how people respond to the experiential challenge of declining social integration by seeking to form integrated communities with others with whom they share commitments (Maccarini 2016). These meso- and macro-social tendencies are not just things that happen to agents but rather things that agents are capable of evaluating and responding to purposively, with implications for the overarching tendency. This leaves it a matter of individuals negotiating relationships within their settings rather than, as with the detraditionalisation problematic, individuals gradually being liberated from the constraints of social structure (Carrigan 2010; 2014). As Archer puts it, ‘their real relations with others also need retrieving as variable but powerful influences upon the equally variable outcomes that now constitute the lifelong socialization process’ because ‘[o]therwise, the entire concept risks drifting into an unacceptable monadism or slipping into Beck’s portrayal of subjects’ capricious and serial self-reinvention in a social context reduced to “institutionalized individualism”’ (Archer 2012: 97). It’s not a matter of individuals suddenly embracing untrammelled freedom as social structuring retreats from view, as much as individual properties and powers being increasingly necessary to negotiate this structuring: leading to what Archer (2012) calls the ‘reflexive imperative’. This involves the ‘necessity of selection’ and ‘shaping a life’: the unavoidable need to select from the variety encountered through the life-course and the difficulties entailed by shaping a satisfying and sustainable way of living from what has been selected. Talk of ‘selection’ and ‘variety’ can easily be misconstrued. This is not a matter of detached choice at sequential moments but rather a temporally extended unfolding of our evaluative orientation towards the possibilities we encounter. Through the developmental process of reacting to environmental stimuli, sifting the pleasant from the unpleasant and the desirable from the undesirable, an awareness of our first-order emotions begins to emerge, which immediately poses questions about their compatibility and incompatibility that invite a deepening of our nascent dialogue about what matters to us (Archer 2000; Sayer 2011). As we elaborate upon this, coming first to exist as a being with these concerns and then as one who to some extent recognises herself as such, these evaluative orientations come to act as ‘sounding-boards, affecting our (internal) responses to anything we encounter, according to it resonating harmoniously or discordantly with what we care about most’ (Archer 2012: 22). This intensifies the aforementioned path dependency, with the elaboration of our evaluative orientations in relation to the novelty we encounter conditioning our future trajectory, as some possibilities are ‘shunned, repudiated or negatively sanctioned’ and others ‘welcomed, encouraged or positively sanctioned’ (Archer 2012: 23). This can be seen as a trajectory of
selectivity, with the elaboration of our evaluative orientations serving to filter variety in a progressively more patterned way. Our movement through the world begins to acquire a direction and a style, which we grossly misrepresent if we construe it in terms of an iterative confrontation ‘with a plurality of uncertain life course options’ such that life becomes a ‘reflexive project’ and ‘individuals are continuously forced to organise the future and reconstruct their own biographies in light of rapidly changing information and experiences’ (Mills 2007: 67–8). What such a construal misses is the cumulative manner in which past experience shapes present orientations towards future possibilities, filtering the variety we encounter rather than iteratively presenting us with the open vista of a future to be colonised. The social gets ‘inside’ us through the accumulation of the changes we go through as we make our way through the world (Archer 2012: 51).
Growing up in a world of platforms

In the first half of this chapter I discussed three mechanisms with which the ubiquity of platforms13 confronts users in everyday life: the multiplication of interruptions, the pluralisation of communication channels and the switch from scarcity to abundance. I argued this produces two tendencies in how reflexivity is exercised:

1 Deliberations are more likely to be interrupted by technological claims upon attention and/or displaced by the cognitive labour involved in managing and minimising these distracting elements. Internal conversations trend towards the staccato, being shorter in duration and less interconnected with prior intra-personal dialogues.
2 Deliberations are more likely to be vulnerable to the intrusion of alternative possibilities because it is becoming increasingly difficult to bound variety. Internal conversations trend towards the stochastic because there’s a greater awareness of the other things we could be doing, the other places we could be going and the other things we could aspire to be.

13 I recognise this ubiquity is far from uniform. Internet Live Stats (2020) estimates there are 4.6 billion users, accounting for a small majority of the world’s population. However, the conditions of their access and the experience entailed by them will diverge sharply. This is a complex question, which I intend to explore in detail in future work and cannot do justice to within the present chapter, other than to insist that it is not a straightforward matter of developed/developing world (Arora 2019).
These are trends primarily correlative with how entrenched digital platforms are within a subject’s life. However, they can’t be reduced to the fact of individual use because there are growing opportunity costs to forgoing their inducements. These might once have seemed relatively trivial (revitalising old connections, expanding personal networks, convenient access to services,
expanded opportunities for cultural consumption, etc.) but the likelihood that Covid-19 will lead to increased platformisation14 means these opportunity costs will grow in their impact, e.g. being unable to book a meal or visit a pub without using a platform to register, or being unable to acquire certain goods without using ecommerce platforms. Furthermore, ‘network weather’ creates a diffuse pressure towards their expanded influence, even amongst those who refuse their use: refusal becomes an increasingly reflexive commitment not to be drawn in rather than a default setting of simply not having opted in. I have suggested we see their influence in adverbial rather than typological (the mode of reflexivity) or substantive (the object of reflexivity) terms.15 As a result of the entrenchment of digital platforms within everyday life, reflexivity comes to be more (1) distracted, in the sense of being more vulnerable to interruption by external contingencies, and (2) porous, in the sense of being less likely to sustain a focus on a particular object. This has implications for the mode of reflexivity, such as those I touched upon in Carrigan (2017), for example suggesting that autonomous reflexives might be more prone towards cognitive triage, a narrowing of horizons to focus on the urgent rather than the important, as a means of dealing effectively (a quintessentially autonomous ideal) with the challenge of focusing under these conditions. It has implications for the object of reflexivity, in so far as coping with distraction and porousness risks occupying deliberative capacity, whether in an organised fashion (e.g. the aspirations to self-management embodied in something like the Quantified Self movement16) or as a disorganised preoccupation with one’s own struggles, which ironically intensifies their impact on the agent.

14 Firstly, because social distancing will be the norm in the continued absence of a vaccine and digital platforms facilitate interaction without physical contact. Secondly, because the platform economy seems best placed to survive the economic crisis which the pandemic, as well as the action taken to ameliorate its impacts, has generated.
15 There are other adverbial shifts susceptible to analysis. For example, the expanding literature on social acceleration could be (perhaps unfairly) glossed as saying that people have an increasing tendency towards rushing (Wajcman 2015).
16 See Lupton (2016).
The platform society as the cultural context for socialisation

If we think back to our earliest encounters with the internet, to what extent were we struck by the potential vastness of what it enabled us to access? If we consider this in quantitative terms, it becomes clear that this vastness is a fraction of what we now confront, with Internet Live Stats (2020) currently estimating the number of websites globally as 1,781,248,975 and rapidly growing. This doesn’t even take account of the torrents of user-generated content available through social media or the cultural archive available through subscription streaming services. What made this initial encounter so arresting was likely the implicit contrast between ‘old media’, with its time-consuming physicality,
and ‘new media’, with its immaterial immediacy.17 It was the sense that what we wanted could be accessed on demand, more so with each passing year as the digitalised archive becomes an increasingly taken for granted part of social life (Carrigan 2017). It would be misleading to infer from this experience that digital platforms entail an exponential increase in variety, for a number of reasons. Firstly, older forms of media die out or become less significant as what Jenkins (2007) calls ‘convergence culture’ comes to be mainstreamed.18 Secondly, there is a great deal of uniformity within ‘new media’ as similar competitive pressures, namely to be amplified by the algorithms of the major social media platforms, lead to the production of similar content (Caplan and Boyd 2018). Thirdly, it is clear that variety is filtered by the algorithmic operations of social media platforms in complex and not always consistent ways (Margetts 2017b). Fourthly, ‘old media’ and ‘new media’ interact in synergistic ways, as can be seen in the new forms of engagement that surround television, e.g. ‘the back channel’ constituted by a Twitter hashtag for a popular TV show (Couldry 2012: loc. 3078–3128). Another example would be how the most prominent political commentators in the blogosphere often acquire their authority (and in part their prominence) through having their stories picked up by the mainstream media (Couldry 2012: loc. 3770–3785). In reality we are dealing with what Chadwick (2017) calls hybrid media, and simplistic oppositions between ‘old’ and ‘new’ make it difficult to trace out the emerging interconnections between these media. However, there is a longer-term possibility that platforms are undermining the capacity of content producers to earn a living, suggesting they might eventually reduce variety through their mediation of it (Lanier 2010; Taplin 2017). This is not to deny the overall increase in variety but rather to caution against assuming its exponential growth, as well as to draw a distinction between net variety and what its distribution means for socialisation. It is with regard to the latter issue that social media is at its most interesting, with platforms providing means for self-representations to proliferate. What Plummer (2001: 7) calls ‘documents of life’ have long been ‘hurled out into the world by the millions’:
People keep diaries, send letters, make quilts, take photos, dash off memos, compose auto/biographies, construct websites, scrawl graffiti, publish their memoirs, write letters, compose CVs, leave suicide notes, film video diaries, inscribe memorials on tombstones, shoot films, paint pictures, make tapes and try to record their personal dreams.

17 I suggest ‘immaterial’ to capture the experience, as the infrastructure and transmission through it is obviously deeply material.
18 A classic example would be a cross-platform marketing campaign in which a film has a tie-in video game and comic, with all three potentially accessible through the same iPad even while they might have independent releases on distinct platforms. Some media die out while others become subordinated to transmedia marketing campaigns even while retaining a degree of independent existence.
I suggest these are one of the key mechanisms through which variety is encountered: fragments of life which, as Thompson (1995: 233) describes it, bring ‘new opportunities, new options, new arenas for self-experimentation’. His claim was made about media more broadly, such as the role that novelistic, televisual or cinematic narrative plays in furnishing our sense of the world and what it holds for us. However, it holds as true for documents of life as it does for crafted narratives, as both present subjects with (fragmented or otherwise) representations of things to do, places to go and people to be. These again can be distinguished from resources of the self, such as the self-help and productivity literatures, which thrive across the whole range of digital media. This third category fits most obviously within the cultural system, defined by the logical relations that obtain between their (implicit or explicit) propositions about how one ought to live (Archer 1988). However, these three categorisations are analytical distinctions that might overlap in practice, e.g. a narrative can be a fable with a clear proposition about how to live, or a document of life can imply a message about a way of life being good for others. I offer them here to think about intelligibilia in terms of variety, its mediation and distribution. Unless we recognise the distinctive causal powers a medium19 entails for the distribution of intelligibilia, it will be difficult to identify how changes in the media landscape make a difference to the production of variety, not least of all one as significant as the shift towards the ‘platform society’. Furthermore, this engagement between social theory and media theory, something which, as Couldry and Hepp (2016) observe, neither side has tended to be very good at, provides us with analytical instruments to further address the concerns that Archer (2012) raises about the distribution of variety and its relative neglect within social theory.

19 I use the term in a minimalist sense to refer to that which mediates the transmission of cultural elements. In this sense social media platforms are a form of media but so too is the codex book.

In the rest of this chapter I focus on documents of life rather than crafted narratives or resources of the self. This is not a repudiation of the interest or significance of the latter two. I have written about these elsewhere (Carrigan 2014; 2016) and see the present chapter as supplementing this analysis. Furthermore, the effect of platformisation upon them is in a sense quite straightforward: it makes it easier for new entrants to develop and build an audience for them (see for example the rise of self-help authors whose careers began with blogs, YouTube or podcasts, or best-selling authors whose literary careers began with self-published eBooks), thus increasing the pool of cultural intelligibilia with second-order consequences for consumers (the problem of abundance) and for producers (the need to specialise). It also blurs the boundary between consumers and producers by allowing direct interaction between them and making it easier for the former to join the ranks of the latter, even if the excitable proclamations of cultural democracy which accompanied earlier social media have been replaced by a more nuanced
realisation of the attention hierarchies that pervade these platforms: the fact that anyone can ‘have their say’ not only doesn’t mean they will be listened to, it actively militates against it by vastly increasing the competition to be heard (Carrigan and Fatsis 2021). This increased availability of an ever more diverse array of crafted narratives and resources of the self provides a powerful source of potentially discordant ideas which can be accessed from a young person’s bedroom or mobile phone. While some intention is necessary to kick off the process, it’s important to recognise the ‘rabbit hole’ effect, in which content algorithmically recommended on the basis of initial selections can soon draw an agent into a place far from where they originally intended to go (Roose 2020). My suggestion is that this decreases the feasibility of being an identifier, in Archer’s (2012) sense, representing a further source of disruption to the receipt and endorsement of a normative consensus from the natal context. To the extent platformisation also swells the ranks of the rejectors, it correspondingly reduces the pool of those for whom their natal context supplies directional guidance (Archer 2012: 271).
Possible and potential selves

Documents of life present us with a more complicated picture, in which quotidian fragments of lived experience can nonetheless exercise a significant influence alongside vast inequalities concerning who gets heard, who gets to control how they’re heard and who really gets listened to. Following Turner’s (2010) account of reality TV, we might suggest this is demotic rather than democratic: it foregrounds ‘normal people’ without in any meaningful sense empowering them. In fact, as we shall see, it risks leaving this normality entangled in the attention economy of social media in a way liable to induce the hollowing out of that very normality in the pursuit of online recognition (Johnson, Carrigan and Brock 2019). While we have ‘drawn, carved, sculpted and painted images of ourselves for millennia’, it is nonetheless the case that ‘[w]ith digital cameras, smartphones and social media it is easier to create and share our self-representations’, but the platforms upon which we now rely for creation and sharing incline us towards concerns for visibility and popularity which were previously relatively marginal within this sphere of activity (Rettberg 2014: 2; van Dijck 2013). Under these conditions, we see a breakdown in the familiar distinction between everyday ‘documents of life’ and the glossy representations produced by the culture industries. Far from the claimed openness of social media leading to a proliferation of representational activity in which all are able to be heard, quotidian self-representation instead comes to be marked by characteristics that were previously reserved for professional production. The hard work of cultivating celebrity comes to be part of everyday life for a growing cohort. While these considerations might always have surfaced through cultural production, incipient within it through the human capacity for instrumental rationality, it was nonetheless a matter for the individual’s own deliberation:
the amateur writer or artist might often have aspired to be a professional, but these were aspirations nurtured through reflections on their craft rather than inducements built into the tools they were using. In contrast, what we see now are inducements inherent in the apparatus of production itself, as a media system built through online platforms and omnipresent devices removes any limits on the scope of self-representational activity within daily life, while also offering powerful inducements towards pursuing recognition within the metricised boundaries of these platforms. It’s certainly possible to resist what Gerlitz and Helmond (2013) call the ‘like’ economy, but these feedback mechanisms mediate social evaluation in a way that can prove immensely powerful, inflecting the approval/disapproval of peers through the evaluative machinery of the platform. It implies a commensurability with those far beyond the natal context, with the handful of likes a photo on Instagram receives from a smattering of followers being newly comparable to the thousands someone else’s photo receives from hundreds of thousands of followers. The result is an apparent recovery of the quotidian, a potentially overwhelming torrent of representations concerning everyday life, driven by opaque and disavowed concerns to represent those lives in a way liable to win acclaim on the platforms used to circulate them. The representations seem more real, percolating upwards from the fabric of everyday life, while often being no more faithful to the reality of those lives than their broadcast and print media counterparts, and perhaps considerably less so than their quotidian precursors. The filtering at work here is both technological and cultural, with the latter all the more powerful for its tendency to be naturalised as an apparent feature of activity upon a given platform (Rettberg 2014: 23). The concept of possible selves can help us understand the influence of these changes. This is a psychological construct intended to help us understand how individuals envision their expected or hoped-for future. While often treated in an under-theorised way, it has been used by myself and others working in a realist mode to gain purchase on the imagined orientation that subjects have towards their future (Carrigan 2014; Stevenson and Clegg 2011). It implies capacities for imagination and memory, drawing on past experiences and present knowledge to construct future possibilities. These are capacities that are routinely affirmed in literature, as Strahan and Wilson (2006) observe in their discussion of Ebenezer Scrooge’s confrontation with his past, present and future selves in A Christmas Carol. In fact one could argue that personal narrative as such would be unrecognisable in the absence of these capacities. We can recognise the dramatic component in any fictional narrative, such as the dramatic manner with which ‘Scrooge is faced with his past, present, and possible future selves’, as Strahan and Wilson (2006: 2) describe it, while still appreciating the reality of the mechanisms at work. These mechanisms are central to what Alasdair MacIntyre (2013) describes as ‘the unity of a human life’: our actions and utterances become intelligible against the background of our continued existence, finding their place as episodes in a unified life, which assumes the shape it does through the integration of the episodes which
comprise it. Even those theoretical approaches which stress the possibility of endless self-creation and continuous self-reinvention implicitly recognise the existence of something underlying which is capable of changing in these ways (Craib 1998). Such activity unavoidably involves a relationship to the self, usually one in which the present subject redescribes their past self in a way liable to have future consequences (Bhaskar 1989: 171–173). As Stevenson and Clegg (2011: 19) describe the concept:20

Possible selves are future representations of the self including those that are desired and those that are not. They can be experienced singly or multiply, and may be highly elaborated or unelaborated. They may relate to those selves we desire to become or those we wish to avoid. Possible selves play both a cognitive and an affective role in motivation, influencing expectations by facilitating a belief that some selves are possible whereas others are not and, by functioning as incentives for future behaviour, providing clear goals to facilitate the achievement of a desired future self, or the avoidance of a negative one. More significantly the possible selves construct holds that individuals actively manage their actions in order to attain desirable selves and evade less desirable selves.

20 Oyserman and Markus (1990: 144) provide a useful counterpoint which has less of a sociological sensibility underlying it.

As representations of the self in possible future states, possible selves give form, specificity and direction to an individual’s goals, aspirations or fears. Our internal mental activities, which C.S. Peirce described as ‘musement’, resist characterisation in terms of definite functions. They are by their nature open-ended, independent vectors of possibility that exist within us and complicate the process through which the external world is interiorised in a way that can lead to our reproducing it through our actions (Carrigan 2014). One mechanism through which this occurs is the generation of possible selves: present representations of future possibilities for what we might do and who we might become, inviting our evaluation in a way that potentially guides our action in the field of possibilities available to us. However, this is an outcome rather than the activity itself. There are many internal activities we engage in, with Archer (2003, 2007, 2012) investigating ten of them in her empirical studies of internal conversation: planning, rehearsing, mulling-over, deciding, re-living, prioritising, imagining, clarifying, imaginary conversations and budgeting. The opaque character of inner life renders it difficult to link actions to outputs in reliable ways. Nonetheless, we can see in the abstract how each of these activities may be liable to generate possible selves in different ways, with their resonance being entirely dependent on the person in question. For example, budgeting might generate an image of a frugal self, empowering for one person (with implicit links to a financially autonomous self to come, as yet still out of sight) while dispiriting for another (for whom
there appears to be no end in sight for necessary financial self-restraint). In each case, these activities take place within a social context in ways we can assume are influenced by that context, suggesting the possibility of a schema through which we can draw out linkages between contextual changes and the specific mental activities of which the generic power of reflexivity consists. In some cases this influence might be uniform, such as the inherent difficulty of sustaining an internal dialogue if an external other is talking to you. In other cases, it might be much more particularistic, even to the extent of being unique to the agent and the situation in which they find themselves. In these terms, the adverbial influences I identified at the beginning of the chapter can be seen as operating across the full range of reflexive activities, being a matter of how they are conducted. When we situate possible selves in this way, it presents us with an obvious set of questions. Why do we imagine some possibilities rather than others? Why do some aspects of ourselves rather than others preoccupy us? Why do some possibilities stick with us while others fade away? Where does the content of these representations come from? How are they encoded and transmitted? How do people encounter them and how do social structures influence this process? Our inner world is populated with the symbolic resources we have taken in from the outer world, a heterogeneous array of elements that expand our imagination and provide fuel for our creativity (Archer 2003: 69). I suggest that possible selves are a key mechanism through which cultural intelligibilia (crafted narratives, documents of life, resources of the self) exercise an influence over how we become who we are (Archer 2007: 20). Each of these categories provides us with what we might term potential selves. Introducing this category helps us focus on the ‘raw materials’ through which these representations are constructed, as mental activities directly and indirectly draw upon a diverse array of cultural resources in contributing to the generation of possible selves. We therefore need to attend to the ‘signs, symbols and languages given to us through paperbacks, soap operas, chat shows, docudramas, film, video, self-help manuals, therapy workshops, music videos’ as ‘the resources from which we tell our stories’ (Plummer 2002: 137). Through doing so we open up an important interface between personal life and cultural change, as can be seen through historical shifts in the character and diffusion of cultural forms. The concept of potential selves helps illuminate the often opaque relationship between culture and subjectivity, opening out a crucial interface between the two and providing an instrument for its analysis (Gill 2009). It also extends the scope of the possible selves constructed, counteracting a tendency to treat individual representations as sui generis and identifying linkages between concerns that have predominantly been the domain of psychology and those of sociology and media. While possible selves refers to the properties and powers of human agents, specifically their first-person representations of potential futures produced imaginatively through any number of mental activities, potential selves refers to the properties and powers of cultural forms.
The claim I’m making here rests on Archer’s (1988) account of the objectivity of the cultural system (Archer and Elder-Vass 2012). If we take a hermeneutic rather than constructionist approach to cultural intelligibilia, recognising a relation between an interpreting subject and an interpreted item, it raises the question of the types of relation engendered by specific cultural forms. I’m suggesting that certain kinds of intelligibilia are amenable to being treated as potential selves by subjects,21 expressing their inherent character rather than properties a subject imputes to them, e.g. a sitcom revolving around the personal lives of 20-somethings is more amenable to being used in this way than a dictionary of etymology would be. Examples of this category may not be produced with the intention of serving as potential selves22 but their encoding in media (e.g. print, photography, film) enables them to circulate independently of their producers, allowing others to encounter and interpret them in ways that leave them functioning as ‘raw material’ for the generation of possible selves. It is this capacity to be interpreted as a representation of human possibility that grants them the status of potential selves. In this sense it’s a broad category, ranging from those which are obviously amenable to being used in this way (e.g. much Young Adult fiction) to those with a more opaque suitability depending on biographical contingencies (e.g. the rulebook for a Taekwondo association23). The category includes those cultural forms which have the capacity to be related to in this way, even if methodologically their identification might be limited to established genres which express this capacity through their explicit representation of human possibility (things to do, things to become) and/or poetic structures which invite appropriation of these representations by subjects.

21 This claim risks appearing methodologically lightweight when made in a theoretical article. However, I’m confident it could be empirically refined, including through secondary analysis of existing research. The guiding question would be a straightforward one: what cultural forms do subjects draw upon in making sense of who they are?
22 Though if we look behind market-orientated considerations like the relatability of characters, it becomes possible to discern the commercial incentives for producing potential selves, even if they would not be categorised as such by cultural agents or the firms they are working with.
23 This is a real example from the research presented in Carrigan (2014).

While there have always been technologies for self-narration, social media represents a mainstreaming of self-narration, encompassing both an expansion of the modalities through which such stories can be constructed and an enormous increase in the range of their possible circulation. What Couldry and Hepp (2016: loc 4049) describe as ‘the extended spatiotemporal reach of self-narratives’ should be taken seriously, though the role of platform architecture in shaping the realisation of that potential reach should not be forgotten. In an important sense, the technology of a diary was private by default, liable to be locked in a drawer as easily as shared with close friends (Couldry and Hepp 2016: loc 4065). There are certainly prominent exceptions, such as political
diarists; however, even in such cases the diary made public is usually filtered and edited to enhance public appeal, creating a different iteration of a text that was initially private. Van Dijck (2007: 6–7) offers the concept of ‘personal cultural memory’ to make sense of such private-by-default forms of self-narration, describing them as ‘provisional outcomes of confrontations between individual lives and culture at large’. The media used to record such memories inevitably shapes the choices people make about what to capture and how to capture it. The ‘mediated memories’ they facilitate are used by us ‘for creating and re-creating a sense of past, present, and future of ourselves in relation to others’ (van Dijck 2007: 21). With platformisation, we see a dramatic transformation in the potential range of such mediated memories, leaving us constructing our relationship to our past, present and future on a scale and with a degree of publicity that would have previously been unimaginable. It is in this sense that we can identify a dependency of the self upon platform infrastructures, with potentially radical implications for how we conceive of the socialisation process (Couldry and Hepp 2016: loc 4115). Platformisation has not eliminated technological barriers to accessing what Archer (1988) calls the Cultural System, as much as it has changed their character while also contributing to the exponential growth of that system. There is also a risk of overstating the extent of this shift. There are countless archives throughout the world that have not been digitalised and others that only exist digitally as an index. Much of the archive exists within closed systems, accessible only to select groups such as those within specific organisations. Those aspects that are accessible by default require internet access and basic technological proficiency. Even then, we can inquire into what form that ‘access’ takes: home broadband access across multiple devices is a different proposition from unstable and expensive mobile access or reliance upon public libraries. The nature of this access has important consequences for the biographical implications likely to flow from the digitalisation of the archive. For some, it becomes ubiquitous, a constantly available resource to draw upon as they make their way through the world. For others, it becomes a vector of disenfranchisement, as the assumption of widespread internet access creates difficulties when organisations in general and public services in particular pursue a digital-by-default strategy in their operations. Recognising these continued constraints is important because it ensures we remain aware that abundance exists virtually for us, as a theoretical horizon for our path-dependent activity within the existing media system (Couldry and Hepp 2016: loc 1569). Nonetheless, digital media has made cultural production newly accessible, relying on devices that are widely available and requiring little specialised knowledge, producing artefacts that can by their nature be reproduced in a potentially endless way without any increase in cost or decrease in quality (as opposed, say, to the risks entailed in passing a photographic album or self-published book around the entirety of one’s social circle). The mobility of phones and tablets, as well as the rise of locative social media, ties
representations to particular places in which everyday life is enacted, while the audio and visual capacities of phones and tablets allow it to be documented in rich multimedia. These representations benefit from the affordances of social media, usefully summarised by Boyd (2014) in terms of their persistence, visibility, searchability and spreadability. These facilities serve to ensure the potential range of their circulation, maximising the opportunities for others to find them while minimising the costs involved in doing so, even if this potential is rarely realised due to the enormous increase in the quantity of production that they also engender (while the factors described above mean that the quality is much less heterogeneous than commonly assumed). This brings about a transformation in the whole framework through which potential selves are encountered, rather than simply being a matter of encountering more potential selves through media (which could range from a conversation by letter with a distant acquaintance through to the possibilities represented in a Hollywood film).
The insertion of systemic inducements into the natal context

There is more representational activity taking place, but it is also primed by default for circulation and reception by others in a way that was never true of its analogue precursors, raising important questions about how this changes people’s understanding of their representational activity. In so far as people are orientated towards the inducements of platforms (likes, retweets, views, etc.), such that a generic concern for social self-worth comes to be mediated by the architecture of interaction, their self-representational activity is likely to become more reflexive. This can be seen most dramatically in the case of influencers, aspiring and otherwise, pursuing online celebrity in order to leverage it for financial gain (Abidin 2018). But these are merely the outliers of a more pervasive trend in which a corporate culture built upon self-branding and self-promotion influences wider social life by engineering these assumptions into its increasingly popular machinery of interaction (Marwick 2013). The implications of this appear as frequently in popular commentary as they do in scholarly research, even if the former tends to lack the nuance that the latter aims towards, e.g. claims about the narcissism of the selfie generation, the decreasing capacity to cope with human imperfection and the anxieties that the barrage of perfect images produces in those who compare themselves to them. Even if we find the moralistic tone and hasty generalisations of these accounts problematic, there is nonetheless a kernel of truth underlying them. If self-representations are increasingly driven by algorithmic imperatives towards maximising their circulation, perfecting their content and most effectively winning the approval of online audiences, then consequences for socialisation will inevitably flow from this. Craib’s (1994) warning about the importance of disappointment, the existential necessity of learning to live with frustrations and restrictions, comes to seem ever more relevant against this trend. The reflexive and self-referential character of self-representations
transforms the normative character of documents of life in a manner that seems unlikely to be developmentally positive, particularly for the generation who have never known any other form of peer feedback. We should avoid idealising past socio-cultural relations, with their capacity to produce shame and dismay in those unable to live up to their injunctions. Nonetheless, the documents of life against which young people increasingly measure themselves in a platformised world present them with normative standards that are liable to prove impossible to meet, corresponding with new forms of shame and sanction produced by these experienced failures.
The liberation from the geo-local

Only a limited array of potential selves circulate within this media system, filtered through the aforementioned constraints inherent in media organisations, as well as the commercial apparatus surrounding them. In contrast, the rich diversity of ‘documents of life’ tended to be restricted to local contexts, due to their reliance on media that did not easily scale (van Dijck 2007; Plummer 2001). The ramifications of those potential selves encountered were restricted in each case: by the inevitable characteristics of the existing networks within which ‘documents of life’ were circulating, as described in the previous section, as well as by the many filters operating within existing media organisations. If potential selves are primarily encountered through face-to-face interaction, it is liable to leave the individuals concerned embroiled within the dynamics of what Archer (2003, 2007) calls ‘communicative reflexivity’. Under these conditions, individuals rely on similarly situated others to complete and confirm their internal dialogues about what to do and who to be. This tends to generate consensus, not because of any inevitable uniformity amongst people who approach life in this way, as much as the ‘common sense’ about ‘people like us’ that emerges when substantial swathes of the possibilities a person confronts are likely to be discussed with others. The potential selves encountered might be challenging but this challenge is mediated through interpersonal interaction with others liable to share a common starting point. The individual grapples with implications for their possible selves in dialogue with others who are already prone to sharing the same ‘mental furniture’. In contrast, the potential selves encountered in a platformised world have no such commonality underpinning them, beyond the minimal social integration entailed by being users of the same social media platforms. As van Dijck (2007: 24) puts it, ‘we no longer need to derive our personal tastes or cultural preferences mainly from social circles close to us, because media have expanded the potential reservoir for cultural exchange to much larger, even global, proportions’. Not only is this likely to be a further blow to the possibility of sustained communicative reflexivity, it raises the question of the form that social integration can take under these circumstances. We face a disturbing possibility that normative consensus might come to take another form: individual atoms privately matching their behaviour in a collective enterprise of modulation driven by opaque algorithms serving corporate vested
interests. Pasquale (2015) draws attention to the comparison inherent in social media metrics, as well as the forms of mutual replication that this might lead to if left unchecked.
Conclusion

Conceptual distinctions matter if we want to understand the influence of digital technology on the socialisation process, as well as what this means for the kinds of people who emerge under changing conditions. Even though the language of essences is somewhat unfashionable within contemporary social thought, the philosophical issue underlying this disputed terminology is fundamentally this question of kinds. It was asked in a form that remains popular and influential by the radical sociologist C. Wright Mills (2000: 7), who highlighted this relationship in his classic The Sociological Imagination, asking ‘what varieties of men and women now prevail in this society and in this period?’. His interest was in how they are ‘selected and formed, liberated and repressed, made sensitive and blunted’ and what this meant for ‘what varieties are coming to prevail’. The obvious risk in dealing with emerging technologies is that we overestimate their social influence, perhaps even reproducing the marketing rhetoric of the commercial firms who have a vested interest in convincing others of their significance. This is why we need to exercise conceptual caution in our analysis of how kinds are ‘selected and formed, liberated and repressed, made sensitive and blunted’ and the role that technology plays in this. It is such care that those examples I discussed earlier in this chapter lacked, with their hasty pronouncements about new kinds of people emerging from technological shifts. This is one form of technologised essentialism: technology precedes essence. It is tempting to abandon essences in the name of empirical adequacy, as a means of ensuring we recognise the variable outgrowths of these technological changes across wildly divergent social contexts. However, to do so would be a mistake because it deprives us of the key means through which we can avoid such over-statements. It is only through a careful reclamation of the human being’s distinct properties and powers that we can begin to unpick their interaction with technology’s influence and how this unfolds within distinctive contexts. In this chapter I have used the example of reflexivity and socialisation in order to make this point. I’ve argued that the raw materials of socialisation and one’s own orientation to them are undergoing a profound change, but that the basic challenge of selecting from variety in order to cobble together a life that has sufficient shape to be liveable remains the same. The generations growing up within a platformised world, the younger millennials and the ‘zoomers’ who are coming after them, cannot be adequately understood as either digital natives or digital narcissists. They do, however, confront some unique existential challenges, which the economic, social and political ramifications of the crisis unfolding around us make it even more urgent that we understand. If we read these challenges through the novelty of
128
Mark Carrigan
the technological forms that they involve, intoxicated by the ‘shock of the new’, otherwise evident continuities are left newly obscure: negotiating relations with peers and family in forming a nascent a personal identity, coming to recognise the constraints/enablements of the natal context, appraising what one could be or do in the future etc. The corresponding risk is that we discount the causal influence of the technological, as if socio-technical innovations are mere instruments peripheral to the fundamental aspects of social life. I’ve sought to avoid these corresponding dangers and instead suggested an approach to analysing the influence of technological changes on emerging adults that avoids epochal over-statement while also recognising that significant shifts are underway. Young people aren’t becoming different types of person in a world where social platforms are ubiquitous but how they are becoming persons is undergoing change, with significant implications for their place within the world and what it means for them.
References

Abidin, C. (2018). Internet Celebrity: Understanding Fame Online. Bingley: Emerald Publishing Limited.
Al-Amoudi, I. and Morgan, J. (Eds.) (2019). Realist Responses to Post-Human Society: Ex Machina. London: Routledge.
Al-Amoudi, I. and Lazega, E. (Eds.) (2019). Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Archer, M. S. (1988). Culture and Agency: The Place of Culture in Social Theory. Cambridge: Cambridge University Press.
Archer, M. S. (1995). Realist Social Theory: The Morphogenetic Approach. Cambridge: Cambridge University Press.
Archer, M. S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M. S. (2003). Structure, Agency and the Internal Conversation. Cambridge: Cambridge University Press.
Archer, M. S. (2007). Making Our Way Through the World: Human Reflexivity and Social Mobility. Cambridge: Cambridge University Press.
Archer, M. S. (2012). The Reflexive Imperative in Late Modernity. Cambridge: Cambridge University Press.
Archer, M. S. and Elder-Vass, D. (2012). Cultural system or norm circles? An exchange. European Journal of Social Theory, 15 (1): 93–115.
Archer, M. S. (Ed.) (2015). Generative Mechanisms Transforming the Social Order. Dordrecht: Springer.
Arora, P. (2019). The Next Billion Users: Digital Life Beyond the West. Cambridge, MA: Harvard University Press.
Beer, D. (2016). Why is everyone talking about algorithms? Discover Society (40).
Bhaskar, R. (1989). Reclaiming Reality: A Critical Introduction to Contemporary Philosophy. London: Routledge.
Boyd, D. (2014). It's Complicated: The Social Lives of Networked Teens. New Haven: Yale University Press.
Braidotti, R. (2013). The Posthuman. Cambridge: Polity.
Braidotti, R. (2019). Posthuman Knowledge. Cambridge: Polity.
Caplan, R. and Boyd, D. (2018). Isomorphism through algorithms: institutional dependencies in the case of Facebook. Big Data & Society, 5 (1).
Carrigan, M. (2010). Realism, reflexivity, conflation, and individualism. Journal of Critical Realism, 9 (3): 384–396.
Carrigan, M. (2014). Becoming Who We Are: Personal Morphogenesis and Social Change (Doctoral dissertation, University of Warwick).
Carrigan, M. (2016). The fragile movements of late modernity. In M. S. Archer (Ed.), Morphogenesis and the Crisis of Normativity, pp. 191–215. Dordrecht: Springer.
Carrigan, M. (2017). Flourishing or fragmenting amidst variety: And the digitalization of the archive. In M. S. Archer (Ed.), Morphogenesis and Human Flourishing, pp. 163–183. Dordrecht: Springer.
Carrigan, M. (2018). The evisceration of the human under digital capitalism. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina. London: Routledge.
Carrigan, M. and Fatsis, L. (2021). The Public and Their Platforms: Public Sociology in an Era of Social Media. Bristol: Bristol University Press.
Carrigan, M. and Porpora, D. (Eds.) (2021). Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London: Routledge.
Castells, M. (2009). The Rise of the Network Society. London: Wiley.
Chadwick, A. (2017). The Hybrid Media System: Politics and Power. Oxford: Oxford University Press.
Couldry, N. (2012). Media, Society, World: Social Theory and Digital Media Practice. Cambridge: Polity.
Couldry, N. and Hepp, A. (2016). The Mediated Construction of Reality. London: Wiley.
Craib, I. (1994). The Importance of Disappointment. London: Psychology Press.
Craib, I. (1998). Experiencing Identity. London: Sage.
Durkheim, E. (2006 [1897]). On Suicide. London: Penguin.
Elder-Vass, D. (2007). Reconciling Archer and Bourdieu in an emergentist theory of action. Sociological Theory, 25 (4): 325–346.
Facer, K. (2011). Learning Futures: Education, Technology and Social Change. London: Taylor & Francis.
Gane, N. (2004). The Future of Social Theory. London: Continuum.
Gerlitz, C. and Helmond, A. (2013). The like economy: social buttons and the data-intensive web. New Media & Society, 15 (8): 1348–1365.
Gill, R. (2009). Breaking the silence: the hidden injuries of neo-liberal academia. Feminist Reflections, 21.
Gillespie, T. (2010). The politics of 'platforms'. New Media & Society, 12 (3): 347–364.
Gorski, P. S. (2016). Reflexive secularity: thoughts on the reflexive imperative in a secular age. In Morphogenesis and the Crisis of Normativity (pp. 49–68). New York: Springer.
Harris, M. J. (2014). The End of Absence: Reclaiming What We've Lost in a World of Constant Connection. New York: Penguin.
Hayles, N. K. (2008). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Housley, W., Procter, R., Edwards, A., Burnap, P., Williams, M., Sloan, L. and Greenhill, A. (2014). Big and broad social data and the sociological imagination: a collaborative response. Big Data & Society, 1 (2): 1–15.
Internet Live Stats (2020). Available online at www.internetlivestats.com [last accessed 2020].
Jenkins, H. (2008). Convergence Culture: Where Old and New Media Collide. New York: New York University Press.
Johnson, M. R., Carrigan, M. and Brock, T. (2019). The imperative to be seen: the moral economy of celebrity video game streaming on Twitch.tv. First Monday, 24 (8).
Kennedy, H., Poell, T. and van Dijck, J. (2015). Data and agency. Big Data & Society, 2 (2): 1–7.
Lanier, J. (2010). You Are Not a Gadget: A Manifesto. London: Vintage Books.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Lupton, D. (2016). The Quantified Self. Cambridge: Polity.
Maccarini, A. M. (2016). The normative texture of morphogenic society: tensions, challenges, and strategies. In M. S. Archer (Ed.), Morphogenesis and the Crisis of Normativity, pp. 87–109. Dordrecht: Springer.
Maccarini, A. M. (2019a). Trans-human (life-)time: emergent biographies and the 'big change' in personal reflexivity. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina, pp. 138–164. London: Routledge.
Maccarini, A. M. (2019b). Deep Change and Emergent Structures in Global Society. Dordrecht: Springer.
MacIntyre, A. (2013). After Virtue. London: A&C Black.
Margetts, H. (2017a). Democracy is dead: long live democracy! Open Democracy. Available at www.opendemocracy.net/en/author/helen-margetts (last accessed 19 May 2019).
Margetts, H. (2017b). Social media (and other platforms). Talk at Imagine 2027. Cambridge, November 2017.
Marwick, A. E. (2013). Status Update: Celebrity, Publicity, and Branding in the Social Media Age. New Haven, CT: Yale University Press.
Miller, D. (2011). Tales from Facebook. Cambridge: Polity.
Mills, M. (2007). Individualization and the life course: towards a theoretical model and empirical evidence. In C. Howard (Ed.), Contested Individualization: Debates About Contemporary Personhood, pp. 61–79. Dordrecht: Springer.
Mouzelis, N. (2008). Habitus and reflexivity: restructuring Bourdieu's theory of practice. Sociological Research Online, 12 (6): 123–128.
Mutch, A. (2013). Sociomateriality: taking the wrong turning? Information and Organization, 23 (1): 28–40.
Oyserman, D. and Markus, H. R. (1990). Possible selves and delinquency. Journal of Personality and Social Psychology, 59 (1).
Soojung-Kim Pang, A. (2013). The Distraction Addiction. New York: Little, Brown & Co.
Pasquale, F. (2015). The algorithmic self. The Hedgehog Review, 17 (1): 30–46.
Plummer, K. (2001). Documents of Life 2: An Invitation to a Critical Humanism. London: Sage.
Plummer, K. (2002). Telling Sexual Stories: Power, Change and Social Worlds. London: Routledge.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9 (5): 1–6.
Rainie, H. and Wellman, B. (2012). Networked: The New Social Operating System. Cambridge, MA: MIT Press.
Rettberg, J. W. (2014). Seeing Ourselves Through Technology: How We Use Selfies, Blogs and Wearable Devices to See and Shape Ourselves. Dordrecht: Springer.
Roose, K. (2020). Rabbit hole. The New York Times. Available online at www.nytimes.com/column/rabbit-hole [last accessed 2020].
Sayer, A. (2011). Why Things Matter to People: Social Science, Values and Ethical Life. Cambridge: Cambridge University Press.
Srnicek, N. (2017). Platform Capitalism. Cambridge: Polity.
Statista (2020). Smartphone usage in the United Kingdom (UK) 2012–2019, by age. Available online at www.statista.com/statistics/300402/smartphone-usage-in-the-uk-by-age/ [last accessed 2020].
Stiegler, B. (1998). Technics and Time: The Fault of Epimetheus (Vol. 1). Stanford: Stanford University Press.
Stevenson, J. and Clegg, S. (2011). Possible selves: students orientating themselves towards the future through extracurricular activity. British Educational Research Journal, 37 (2): 231–246.
Strahan, E. J. and Wilson, A. E. (2006). Temporal comparisons, identity, and motivation: the relation between past, present, and possible future selves. In Possible Selves: Theory, Research and Applications (pp. 1–15). New York: Nova Science.
Taplin, J. (2017). Move Fast and Break Things: How Facebook, Google, and Amazon Have Cornered Culture and What it Means for All of Us. New York: Pan Macmillan.
Thompson, J. B. (1995). The Media and Modernity: A Social Theory of the Media. Cambridge: Polity.
Turner, G. (2010). Ordinary People and the Media: The Demotic Turn. London: Sage.
Van Dijck, J. (2007). Mediated Memories in the Digital Age. Stanford, CA: Stanford University Press.
Van Dijck, J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press.
Van Dijck, J., Poell, T. and De Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.
Wajcman, J. (2015). Pressed for Time: The Acceleration of Life in Digital Capitalism. Chicago: University of Chicago Press.
Weller, M. (2011). The Digital Scholar: How Technology is Transforming Scholarly Practice. London: Bloomsbury.
Williams, J. (2018). Stand Out of Our Light: Freedom and Resistance in the Attention Economy. Cambridge: Cambridge University Press.
Wright Mills, C. W. (2000). The Sociological Imagination. Oxford: Oxford University Press.
Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. New York: Alfred A. Knopf.
Zimbardo, P. and Coulombe, N. D. (2015). Man (Dis)connected: How Technology Has Sabotaged What it Means to Be Male. New York: Random House.
7
On macro-politics of knowledge for collective learning in the age of AI-boosted Big Relational Tech Emmanuel Lazega and Jaime Montes-Lihn
Introduction1

In an impressive article, a New York Times journalist, Monica Potts (2019), writes about her hometown in rural Arkansas, in which 70% of the electorate voted for Trump in 2016. One of the topics in the article is a fight over the future of the local community library. The people who did not frequent the library argued that they did not want to pay taxes for it because the community did not really need one any longer. One of her interviewees argues that "after all, if you have internet, you can get whatever you want in a day". This of course overlooks the social function and reality of community libraries. In addition to delivering major cultural services and events for which they are systematically organized, libraries and librarians are vital for social integration. They offer opportunities to create relationships and densify social networks, as well as neighborhood social services (for example, help with homework for children, inclusion of isolated members of the public, etc.). In the minds of these interviewees who no longer care about such institutions, new technologies and communications also lower the very minimal level of social solidarity that they are willing to tolerate. Such debates about closing or changing libraries occur in many places.2 For sociologists, these controversies echo twentieth-century political philosophers and sociologists who have long recognized that the state and associated public institutions, including educational and cultural institutions, experience legitimation crises and deficits (Arendt 1951; Habermas 1976). Moreover, with digital technologies, societies are undergoing a historical breakdown of their systems of cultural authority (Archer, 2016), as they did for example with the invention of the printing press – which was followed by all manner of popular uprising, chaos, and ferment. A new system tries to operate without gatekeepers to culture and official knowledge, such as teachers, librarians, journalists, traditional university professors, and professionals of all stripes.
1 We are grateful to the editors of this volume for comments and suggestions that considerably improved our initial contribution.
2 See for example in France: https://bbf.enssib.fr/focus/le-monde-d-apres-repenser-la-bibliotheque-10-07-2020
DOI: 10.4324/9780429351563-7
Increasingly, such debates assume that much of culture and education is based on the general trajectory of technology development that societies experience today, in particular big data accumulation and artificial intelligence (AI) methods of statistical analysis for such data. This chapter argues that, although it does not account for all the amazing chaos spreading at the moment, massive capture and analysis of data plays a major role in this chaotic context. In order to show this, we recontextualize the debate using what we call micropolitics of knowledge, i.e. framing contexts in situations of uncertainty in which we collectively learn "enough" to make what we consider to be informed decisions, including regulatory ones (Lazega, 1992). In these micropolitics of knowledge, knowledge claims have at least two dimensions: relational (exclusive or not) and realistic (with or without reality checks). Contemporary societies are neo-liberal and organizational societies dominated by giant technocratic bureaucracies, both powerful public authorities and highly profitable and entrenched private firms. In this context, we argue that the fact that big tech companies control an enormous amount of information is not just data capture. If these companies and the output of their awesome algorithms make someone in the Midwest believe that they no longer need libraries and that Big Tech can "make America great again", then this control goes well beyond data accumulation. We argue that it is equivalent to reshaping appropriateness judgments, knowledge claims and collective learning in society (Lazega, 2020a, 2020b). How can this be the case? In this context of attacks on cultural authority, we argue that traditional epistemic authority is being weakened through depersonalization and polemical knowledge claims without reality checks. The cooptation of epistemic authority takes place as a privatized digitalization of knowledge: a changing of the guard of epistemic authority is taking place during contemporary transitions, a change that is driven by AI-boosted technology applied to Big Relational Tech (BRT) databases.3 The two processes, the weakening of personalized and traditional epistemic authority and the privatized, digitalized takeover and consolidation, take place at the same time and coexist. We separate them only analytically. In fact, we hypothesize that this transition might rely on the same personnel, i.e. actors and expertise: BRT will keep the professors but switch their allegiances, as in the Invasion of the Body Snatchers. Disentangling the ways in which the two processes are separate, then combined, is pertinent to discussions about this era of truthless falseness full of fake news in which nobody can find their way.
3 We call BRT the global (currently for example US and Chinese) private companies and public administrations concentrating big data on billions of individuals' relational profiles, groups, and organizational affiliations, production, performances, and economic and socio-demographic information (Lazega, 2020c).
The combination of these two processes may reflect a new scale of power for strong organizations influencing society by defining the framework in which individuals elaborate and interpret information, i.e. the emergence in society of a new master regime of epistemic control. In such discussions, it is hard to decide if there is no control or total control,4 an actual decentralized system or a system with a new overarching center. The implications of this view are that, in democracies, increasing epistemic control by strong actors reshaping society requires more regulation and new institutions. This regulation must take into account the learning processes at the heart of which we find these meso/macro-politics of knowledge. This contextualization is important in a period when education needs to adjust to a world where people are required to think about a meaningful life without work, because of the way society and its technology are changing. A vast majority of people now live in a society where work is not available to them. In this situation, the very purpose of education would become one of giving somebody a sense of life, autonomy, interest, and citizenship in the new situation. All of those would be included in the definition of instruction. We argue here that, when the stakes are so high for culture and education, tracing the control of appropriateness judgments and collective learning in micropolitics of knowledge is important. Our limited purpose here is to show this by exploring the lessons from sociological analyses of micropolitics of knowledge in human advice networks. We argue that this exploration provides a better understanding of how AI-boosted BRT reshapes collective learning by systematically disqualifying reality checks and promoting polemical knowledge claims and massive polarization and radicalization of society. These implications at the macro-level need to be regulated accordingly.
Powerful actors

From the structuralist branch of symbolic interactionism (Fine and Kleinman, 1983), collective learning as a social process can be based on different conceptions of "knowing well", i.e. of what actors can rely upon when facing uncertainties and contextualizing practical decisions by communicating about them.
4 Generalized use of AI percolating massive data analyses will indeed have the same reorganizing effect on society and the economy as the invention of printing did in the Middle Ages. In an organizational society, systematic use of AI represents an additional technocratic risk of invisible and pervasive social control. In China today, some cities have developed real-time systems of surveillance that identify individuals crossing red lights, who then receive the fine on their phone. Beyond minor offences, this technology is used to routinize decisions that would not previously have been considered routine. Decisions that would have required inquiry, critical review, deliberation, or political collective decision making are systematically quantified and modelled statistically in real time, in what becomes a form of technocratic bureaucratization of thought upstream of control and routinization of behavior.
These different conceptions can be decomposed as varying "appropriateness judgments", i.e. framings taking place in suspended moments of symbolic interaction and characterized analytically in at least three operations. As represented in this theory, these operations are responses brought by the actors to "generic" questions that constitute the social premises of contextualized action that is deemed appropriate. Sharing such premises is the basis for "co-orientation", defined as a stabilization of epistemic interdependencies, a key moment in the coordination of collective action. First, for a given or proposed action (for example appropriation, production, selection of exchange partner), which is the epistemic reference group (or "epistemic community", in the sense of the group sharing the same appropriateness judgment) that has priority for the actor? "Appropriate" knowledge is close to "satisfactory" knowledge with respect to the pragmatic requirements of action, but also with respect to social control – which always comes before efficiency. What, therefore, is this instance of social control from which actors await epistemic approval? If action is the result of socialization and of the individual dispositions that it creates within the collective, it is also the result of influences and social sanctions. One may thus ask which collective identifications, in a hierarchy of allegiances, actors themselves recognize as the sources of their own actions (Stryker, 1980), expecting from them validation or approval for the choice of a rule and subsequent actions. For symbolic interactionism, actors negotiate their identity with boundary work that ranks several possible reference subgroups, derived from horizontal role differentiation, and thus several instances of social control. This ranking constitutes a first epistemic premise of action. A second social premise lies in normative choices (in polynormative contexts), a choice that the sociological tradition often considers key to adding a social dimension to human rationality (Reynaud, 1989). This choice of a rule among competing rules allows for legitimization of action on behalf of the prioritized reference group. To which rule, cultural norm or precarious value (Selznick, 1957) does one refer in a situation of normative ambiguity or polynormativity, i.e. when several rules could be culturally recognized as legitimate within an organization? From the point of view of the previously recognized instances of social control, actors acquire new inputs for their reflexivity and become able to problematize their own actions, for example to anticipate or prevent their induced consequences and their possible delegitimization. It is this choice of a rule that establishes the legitimacy of an act and the manner in which actors invoke social control in the orientation of their actions or exchanges. Legitimacy can be produced through deliberation, critical debate, and justification; but it nevertheless remains highly dependent on authority relationships and a prior distribution of access to authority arguments (hierarchical or expert) in organizations. A third social premise lies in the choice of a representative for the priority reference group, a form of social status and ex ante leadership recognized by members. Who states and interprets the rule in a context of normative ambiguity, of "precarious" values (Selznick, 1957)? The question matters because leaders need the support of members of the collective for the main objectives and priorities that
they define. In the relationship between a leader and followers, there always remains a certain ambiguity. Since s/he seeks the support of the members of the organization for the objectives that s/he defines, s/he finds it easier and more comfortable to confuse it with support of his/her person (Bourricaud, 1964). Such personalization creates an articulation between norms and social structure. This step corresponds to a personalization of the authority to which one is accountable for one’s actions, from which one seeks social approval. At the same time, it concerns the way in which actors recognize forms of social status and localize social control in the structure of their collective. The influence of norms on action is indeed mediated by this articulation between norm and social structure. In an organized social setting, where authority arguments are carefully allocated, the choice of a representative also indicates, ipso facto, the choice of the authority argument having the last word, to which actors yield as a last resort. It follows that social rationality is inseparable from authority relationships, particularly in organized collective action. The contextualization of action cannot avoid encountering the power and authority structure as established in the organized group. At a high level of generality, this form of contextualization of behavior instantiates one possible combination of reflexive action produced in/with structure and culture (Archer, 2014). In particular, rooting appropriateness judgments in rules via socialization and culture does not reduce the complexity and importance of culture (White, 2008). Combined with the notion of “cultural holes” (Breiger, 2010; Pachucki and Breiger, 2010; Schultz and Breiger, 2010), appropriateness judgments create room for consensual rules and regulatory activity as stemming from “weak culture”. Indeed Breiger (2010) and Schultz and Breiger (2010: 1) propose that the tie that binds an actor to a cultural taste, for example, “might be strong (purposive, intensive in time or commitment, fostered by a tightly integrated community bounded by social symbols and representations) or weak (banal, non-instrumental, non-demanding, non-exclusive)”. They find that weak culture can be “strong” in several different respects, for example “by bridging across otherwise disconnected social groups, or by bonding actors to a wider collectivity than is possible on the basis of strong-culture commitments”. They report research findings indicating that weak culture, which requires no strong commitment from actors, tends to span preferences and does not need strong approval. Despite being weak, “weak culture has a strong and significant impact on shaping attitudes about … values” (Schultz and Breiger, 2010: 21). Their reasoning is that, with its capacity to help create heterophilous ties, weak culture regenerates structure by bridging across diverse social milieux. In our view this process can help actors in recreating a hierarchy of identities and allegiances and in bringing together competing reference groups. Contextualization of action in such appropriateness judgments leaves traces in communication behavior (Berger and Luckmann, 1966) and points to different ways of “knowing well” and types of “reality claims” needed to coordinate collective agency. For this microsociology of knowledge, actors
incorporate such appropriateness judgments into messages in a "pragmatic" way to make such reality claims. The appropriateness of the action is socially (culturally, structurally) negotiated by interlocutors in the act of elaborating the informative value of the message. Reality claims are based upon the negotiation of the appropriateness of the messages that carry them. This negotiation links boundary work to forms of endogenous knowledge. These dimensions of appropriateness judgments indicate how knowledge claims present themselves as legitimized or authorized in controversies, for example in regulatory conflicts (Lazega, 1992). Taking into account the second and third dimensions, we can identify in this negotiation two analytical steps: firstly, the choice of a norm with respect to acceptable assertions, either by substantive reality checks or by symbolic and procedural alignments; and secondly, the choice of whether all the members of the social space or only a subset of ex ante leaders (or a coalition within the social space) are allowed to participate as voices of their epistemic community and endorse the claim. Both dimensions can be used to cross-classify these knowledge claims, suggesting a two-by-two typology of knowledge claims that shows how actors in controversies protect their normative choices, or policy arguments and narratives, against delegitimization. Thus, the study of the main steps of an appropriateness judgment can identify four types of homogeneous and discriminant knowledge claims. Each of these types is a form of "endogenous knowledge" (knowledge produced from within the social space in which the controversy takes place). Repertoires of modes of collective learning and co-orientation can be derived from a typology of knowledge claims as performed by members of organized social settings. For example, such a repertoire can be based on two dimensions: firstly, the kind of identity criterion used to identify the actors as the source of their action, which can be inclusive or exclusive; and secondly, whether or not the knowledge claim refers to a possible reality check for the members of the setting, i.e. whether or not it is open to possible pragmatic verification of the claim. Four types of knowledge claims emerge from the use of these dimensions in a two-by-two table (Table 7.1): realist, expert, polemical, and initiated knowledge claims.

Table 7.1 Typology of knowledge claims derived from the characteristics of appropriateness judgments

                        Reality check
Boundary work      Present       Absent
Inclusive          Realist       Polemic
Exclusive          Expert        Initiated

A first type of claim, called realistic, assumes a substantive legitimation based on reality checks and an open endorsement ignoring potential divisions within the social space, where all actors are welcome to participate in defining the situation. Such claims present themselves as challengeable by all.
A second type, the expert claim, assumes a substantive legitimation and a closed endorsement. Like the preceding type, it naturalizes members' knowledge; however, it does not involve all the members of the social space in the definition of the situation. In that sense, although such a claim draws legitimacy from referring to a common reality, it does not consider this reality as accessible to everyone. Statements are appropriate because they are endorsed, for example, by recognized competence and expertise. Technocratic assertions are emblematic of such claims, which are not considered to be everyone's business or responsibility. A third type, the polemical claim, assumes a procedural legitimation/alignment and an open endorsement. Unlike the preceding types, this claim does not emphasize the existence of a common reality. Everyone is allowed to assert their statements, provided that they are on my side of the controversy. Such claims undercut challenges and delegitimation by compartmentalizing and polarizing the social space. A polemical claim can eventually assert anything, provided that the actor shows allegiance to the "right" pole. Polemical claims undermine (Varman and Al-Amoudi, 2016; Al-Amoudi and Lazega, 2019) specific options in collective action and mobilize away from reality checks. A last type, the initiated claim, assumes a procedural legitimation and a closed, often private endorsement. Like the preceding type, it reflects appropriateness judgments that ritualize the assertion of a claim. In addition, it limits its own exposure to epistemic challenges by attributing the privilege of endorsement only to some members, who claim to be different and share an exclusive identity, while all others face a "mystery" (their eyes having not been opened, their minds unconverted to a revelation). The different types show how knowledge is asserted and how its claims carry traces indicating how its producers manage relationships with one another. Claims may be successful and convincing in one context and not in another, because managing relationships is indirectly an important dimension of knowledge construction, and ultimately of decision making and behavior. The question of the "success" of such claims becomes a structural one, based on the creation and re-creation of epistemic communities, i.e. on boundary work shrinking or expanding local micropolitical coalitions. By definition, a micropolitical and symbolic interactionist approach to appropriateness judgments focuses on actors' ways of managing relationships and exercising social control on epistemic claims, and deals with common or conflicting affiliations in epistemic communities. This analytical decomposition of appropriateness judgments facilitates the introduction of structural "forms" in a Simmelian sense, or relational infrastructures such as social niches and social status (Lazega, 2003, 2020), into the contextualization of action by actors striving to share some social co-orientation. It is the observation of social networks, in particular advice networks, that helps here to capture this microsocial dimension of learning and co-orientation processes.
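Read operationally, the cross-classification in Table 7.1 amounts to a simple lookup from the two dimensions of an appropriateness judgment to a type of claim. The following is a minimal sketch (in Python, with hypothetical labels; nothing here comes from the study itself) of how an analyst coding observed claims might express it:

```python
# Table 7.1 as a lookup: (boundary work, reality check) -> type of knowledge claim.
KNOWLEDGE_CLAIMS = {
    ("inclusive", "present"): "realist",
    ("inclusive", "absent"): "polemical",
    ("exclusive", "present"): "expert",
    ("exclusive", "absent"): "initiated",
}

def classify_claim(boundary_work: str, reality_check: str) -> str:
    """Classify a knowledge claim from its boundary work (inclusive/exclusive)
    and the presence or absence of a reality check."""
    return KNOWLEDGE_CLAIMS[(boundary_work, reality_check)]

# An open endorsement without any reality check is a polemical claim.
print(classify_claim("inclusive", "absent"))  # -> polemical
```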
Co-orientation, epistemic alignments, and the relationship between knowledge and authority become more easily observable in such advice networks, because seeking advice from someone means attributing a form of social status (Blau, 1964; Krackhardt, 1990) and epistemic authority (Lazega, 1992, 2020d) to this advisor.5 We use the fact that advice relationships in organized settings open a small window for the observation of appropriateness judgments to explore changes potentially introduced by AI in local collective learning and regulation, using a case in point. Appropriateness judgments depend heavily on how actors manage their epistemic interdependencies. In particular, the reconstitution and analysis of advice networks is a first step in this direction. Advice networks are not the only networks through which this tension between authority, norms and identity can be observed. But their use to approach appropriateness judgments rests on their value as an indicator of epistemic interdependencies. Advice does more than transmit information. What is being transmitted in a pragmatic manner in an advice relationship is also a framework for the evaluation of this information, the elements necessary for the evaluation of its appropriateness.
Illustration: a study of knowledge claims in advice networks and collective learning among organic and biodynamic wine growers

Collective learning is an important social process in all social settings, including industries and production markets, facilitating for example co-orientation and collective action between competitors. In technologically intensive societies and economies, which value research and innovation exploiting and developing this technology, collective learning in the exchange of tacit knowledge and the sharing of experience (Polanyi, 1967) represents a crucial process. It has long been studied in management (Cohen and Levinthal, 1990; Nonaka and Takeuchi, 1995), and a rich literature reports research on the process of learning in strategic alliances. Organizations seeking quantitative and qualitative competitive advantages mutually monitor one another (White, 1981). Enterprises establish alliances because they hope to benefit, among other purposes, from the learning resources to which such links give access. An example of modeling collective learning among competing entrepreneurs based on information sharing and advice relationships is provided in Montes-Lihn's (2014, 2017) research on wine producers in Côte de Beaune (Burgundy, France) as they collectively and collegially make vital prevention decisions very quickly against mildew and oïdium.
5 This sociological tradition considers that seeking advice from someone confers a form of social status on the advice giver, even when advice is sought out but not followed. Indeed, advice seeking can be used to placate others or signal that one is more democratic and open to suggestions than one actually is (at least some bosses seek advice from subordinates, depending upon what they are seeking advice about). This does not mean that inference of status from the observation of patterns of advice seeking is the only way to learn about co-orientation and epistemic alignments.
In "biodynamic" agriculture, without synthetic chemicals, natural preventive treatments are considered the only way to manage grapevine diseases (mainly mildew and oïdium) that can ruin a year's crop in a day. Our framework helps to look at how wine producers manage the transition to organic and biodynamic farming by relying on networks of informal advice among competing peers. Farming based on ecological alternatives prohibits synthetic chemicals. This restriction represents a technical challenge and leads to the introduction of a new set of agronomic practices. The adoption of organic practices is seen by wine producers as a risky decision with strong economic and symbolic (prestige-related) consequences in case of failure. Identity logics matter to them: variables having an effect on advice exchanges among winegrowers include being a pioneer (certified domain before the rise of biodynamics); having stopped using synthetic phytosanitary products for more than ten years; and having a tie in the milieu that pre-exists the conversion. In that sense, the information that they may share to make appropriate technical decisions through advice networks is key. Montes-Lihn shows that advice sharing and discussion among over 69 wine producers in the Burgundy region in France are part of a larger socialization process that requires a strong relational and ideological investment, signaling a commitment to shared ecological values. Adoption of green practices is far more than a mere technical issue. Collective learning depends on existing relational infrastructures, and knowledge claims show that know-how is only shared with colleagues who are perceived to share the same worldview and "biodynamic" perspective on their trade, with derived normative choices. Wine growers' professional milieu can be illustrated by the pattern (based on blockmodeling) in Figure 7.1.
Figure 7.1 Pattern of advice exchanges among positions of members in the social milieu of "biodynamic" winegrowers of the Côte de Beaune. They participate in two parallel collective learning processes depending on the temporality – long or short term – of their technical decisions
Each node represents a set of wine growers with similar relational profiles in the advice network (approximate structural equivalence, in social network terminology): they share opportunities and constraints based on their position in the advice network. Each position from 1 to 6 reflects a specific role in the informal division of work emerging from the study of the advice network. Arrows represent dense advice ties across different positions. An arrow going from position A to position B means that members of position A intensively seek technical advice from members of position B. Circles represent wine growers' social niches. Each social niche includes between three and 12 wine growers with dense ties to each other. In addition to having a similar relational profile, they share in a privileged way different resources, such as advice or material support. They have also built strong friendship ties within the boundaries of their social niche: dense networks of different resources have been identified within each social niche. A first learning process takes place within the boundaries of social niches 1 and 5. It is characterized by the creation of new knowledge and experimentation with new techniques. Their members are at the cutting edge of biodynamic and organic farming. Initiated knowledge claims are mobilized: a ritualized legitimation and a closed endorsement are required to participate in this learning process. Long-term technical choices, such as not using synthetic chemical products for over a decade, are an important factor explaining the constitution of these boundaries. They define who may or may not participate in this learning process. The members of niches 1 and 5 also have a social status reflected by their centrality in multiple networks, including advice and friendship. However, members of these two social niches are not isolated from the rest of the milieu. A second learning process involves high-status wine growers and the whole professional milieu when short-term agricultural technical decisions need to be made under time pressure, when the annual crop is at risk. This is the case, for example, when it comes to picking the date to apply preventive treatments against mildew. This decision is key for the quality of the harvest. A bad decision jeopardizes a whole year of work. In these circumstances, statistical and ethnographic analyses show that wine growers in different positions from the whole professional milieu have access to multi-status pioneers and align their short-term technical choices – picking the date of preventive treatment – with those of multi-status pioneers. Expert claims are mobilized in this second learning process, where the entire milieu participates in its legitimation and a closed endorsement is given exclusively to multi-status pioneers. The figure shows that wine growers from all different positions have access to technical advice from multi-status winegrowers from niches 1 and 5. Members of niche 1 are called upon by members of niches 2, 3, 4, and 5. Members of niche 5 give advice to members of niches 1, 2, and 3. Even wine growers in position 6, who are less involved in advice exchanges, have direct access to technical advice from multi-status members of niche 5. They also align with them on short-term technical choices. In other words, the first learning process is based on homophily among wine producers within the boundaries of social niches. The social boundaries defining the profile of the participants in this specific learning process are
determined by common values. In the second learning process, knowledge is shared beyond the boundaries of social niches. It maintains a ratchet effect in ecological transition. This learning process is coupled with a socialization mechanism because experienced wine producers tend to initiate novices into the implicit social norms on which the professional milieu is founded. Ethnographic work shows that this socialization mechanism (learning and reminders of social norms that come attached) present in the second process is led by the experienced wine producers. Socialization aims to preserve the collective, coupling exclusive ways in which knowledge is shared and the values that have guided experienced producers’ own ecological transition. To use the vocabulary of micropolitics of knowledge, several learning processes take place simultaneously in the socio-professional environment of these organic wine growers. A local, endogenous division of epistemic work has emerged among Côte de Beaune wine growers in which each position promotes a different learning process, although the milieu is to some extent characterized by a dominant epistemic mode of knowledge claiming. The first of them corresponds to an initiated, or insider, type of learning: it is deployed in the most prestigious niches, where members work together jointly to advance on cutting-edge issues in organic or biodynamic agriculture, with the objective of going beyond the current level of knowledge. Members of these niches are clustered around pioneers who are the backbone of collective action in this milieu. The knowledge figuring in their exchanges is not accessible to all members of the community and the legitimacy of claims can only be challenged within these social niches. A second learning process involves those with epistemic legitimacy as “experts” within the community and those with “inferior” status. The aim of this process is twofold. On the one hand, it must ensure successful conversion, through the assistance that experts provide to novices to help them overcome the technical problems inherent in the early stages of organic farming. The epistemic claims mobilized are of the expert type because the mode of legitimization is substantive, while the whole collective is not considered competent. An additional objective of this second learning process is normative: its goal is also to reinforce the foundations and representations that cement the collective through the institutional work undertaken by the most experienced winegrowers. This social mechanism helps to ensure the ratchet effect of the conversion by relying doubly on the wine growers who have the longest experience of organic farming: they are first there to validate the new member’s decision concerning the transition from chemical to organic farming, then to accompany and help him/her overcome the day-to-day technical difficulties. But at the same time, they carry out this institutional work of secondary socialization. The “old” members thus participate in two learning processes: a state-of-the-art learning process, which takes place between peers, and in which they have an initiated legitimacy, and a learning process that aims to integrate the new members, for which their legitimacy is first and foremost that of experts.
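The positional analysis behind Figure 7.1 rests on grouping actors by the similarity of their relational profiles. As a purely illustrative sketch (a hypothetical mini-network with made-up names, not the study's actual data or code), approximate structural equivalence can be computed from an advice network's adjacency matrix:

```python
import networkx as nx
import numpy as np
from itertools import combinations

# A toy directed advice network: an edge (a, b) means "a seeks advice from b".
edges = [("g1", "g2"), ("g2", "g1"), ("g1", "g3"), ("g2", "g3"),
         ("g4", "g3"), ("g5", "g3"), ("g5", "g6"), ("g4", "g6")]
G = nx.DiGraph(edges)
nodes = sorted(G.nodes())

# Each actor's relational profile = its row (advice sought) and column
# (advice given) in the adjacency matrix.
A = nx.to_numpy_array(G, nodelist=nodes)
profile = {n: np.concatenate([A[i, :], A[:, i]]) for i, n in enumerate(nodes)}

# Small distances between profiles indicate approximately structurally
# equivalent actors, i.e. candidates for the same position in a blockmodel.
for a, b in combinations(nodes, 2):
    print(f"{a}-{b}: {np.linalg.norm(profile[a] - profile[b]):.2f}")
```

In actual blockmodeling, such a distance (or correlation) matrix would then be clustered into positions, and the density of ties between positions would yield the pattern of arrows in the figure.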
Articulating both learning processes depends on the temporality of these collective actions and decisions, especially when relational infrastructures are recursively endogenized by individuals in specific situations. Indeed, in order to make informed decisions, these wine producers rely alternatively, depending on the temporality of the technical decisions that they need to make, either on members of their social niche or on actors with much higher status. When they face an urgent, short-term, risky decision, they tend to rely on individuals with high status (experienced, multi-status pioneers, identified by their centrality in multiplex networks) across all social niches and the periphery. However, when they need to validate a non-urgent, often long-term or ordinary decision, they turn to peers in their social niche. Here social niches matter: wine producers seek advice about organic and biodynamic farming practices from colleagues and close neighbors sharing similar values. Thus, the relationship with the pioneers' complex is not exclusively technical, but coupled with a socialization mechanism for the new generation of organic wine farmers. These results explain in part how farmers learn collectively and locally when facing the ecological transition. One final observation is that the Côte de Beaune wine growers operate collegially (among peers) but as a collegial pocket in a much more institutionally bureaucratic context. Institutions such as the local Chamber of Commerce and multinational agro-companies, as well as the Institut National de la Recherche Agronomique (INRA) and regional academia, all participate in the regulatory debates and offer technocratic advice. In particular, INRA had official programs of help and advice to local producers, although most of these programs, at the time of the study, did not promote solutions free from synthetic phytosanitary chemicals and were perceived by biodynamic growers as critical and negative towards their business model and biodynamic commitment. Tensions between the former's expertise, trying to rein in the biodynamic wine growers, and the latter's local expert and initiated knowledge claims were high. For organic and biodynamic wine growers, being part of this collegial community of practice was also perceived as a way of getting back control over their work and as empowerment vis-à-vis technical advisors from agricultural supply companies, who used to sell them ready-to-use formulas and tell them how and when to apply these synthetic products, regardless of the observation and interpretation of the signs of nature. In this collaborative context they acquire a knowledge and an observational capacity underappreciated by technical advisors (Compagnone, 2004, 2018). They show that the existence of technology does not by itself force people to change the way in which they work. Max Weber's Economic History and his account of the birth of manufacturing come back to mind: control came before efficiency. Here again, in these collegial settings, the general Weberian insight still holds (Lazega, 2001, 2020). The way in which collective work is defined (as routine and bureaucratized vs. as innovative and collegial) co-evolves with its organization and regulation. When small niches of wine growers were tempted to align with INRA, the coalition that they were able
to mobilize was not strong enough, whether in size, in centrality, or in ideological representations and reframing narratives (biodynamics vs. "reasonable" production), to convince their colleagues to reinterpret or redefine their interests and norms when making vital prevention decisions collectively. Biodynamic wine growers had alliances with wealthy consumers supportive of their ecological norms. Perhaps this collegial resistance and configuration were comparatively easier in a high-end wine market (White, 1981), in which developments are tracked, for example, by a vocal, easily mobilized system of starred and media-savvy chefs representing the current trends in French cuisine (Éloire, 2010). Using our wine growers' case in point is not meant to make any general statement about world agriculture and food systems. As underlined by these judgments, BRT will transform AI work into "truth machines", themselves increasingly bureaucratic tools of government but politically unidimensional, losing their multi-faceted character and reality checks, pushing aside alternative biodynamic practices, eliding observable diversity, and blacklisting sociotechnical policy options such as those based on agroecology and biological synergies. Their models "are designed for prediction and prescription rather than for supporting public debate" (Dorin and Joly, 2020). Different models of world agriculture "can be constructed as a 'learning machine' that leaves room for a variety of scientific and stakeholder knowledge as well as public debate … highlighting 'the need for epistemic plurality and for engaging seriously in the production of models as learning machines'". The point is that this study offers a chance to see how this conceptual framework, bringing together appropriateness judgments, epistemic authority, and collective learning, is useful for thinking about the knowledge managed by AI. Practical knowledge and collective learning are produced by diversifying knowledge claims in appropriateness judgments related to concrete decisions.
AI and the digitalized navigation of the micropolitics of knowledge

It is already observable that, in the situation of collapse of epistemic and cultural authority presented in the introduction, new powerful actors develop and take advantage of AI to push for epistemic technologies based on generalized quantification and modeling using generic machine learning solutions. This is part of contemporary changes in production and work, which have long been routinized and automated, and are today digitized and robotized. Technological changes affect collective action capacity in addition to many other dimensions of society.6
6 Exploring the consequences of the digital turn requires renewed understanding of how multilevel relational infrastructures combine routine and innovative work, in particular collective routine and collectively innovative work as carried out by bureaucratic and collegial collective agency. The role of these infrastructures has been shown to be central in these developments (Lazega, 2020a), although not uniformly across different dynamic configuring fields. The issue of this role is therefore not only an academic or a managerial one, but should become a matter of public debate. The forms that this debate should take are necessarily conditioned by data on such multilevel relational infrastructures.
But innovating collectively away from routines is based, in part, on the existence of collegiality – as in the case in point above – and, in the context of organizational morphogenesis, on the ways in which collegiality and bureaucracy co-constitute each other in already strongly bureaucratized environments. In other words, does this powerful trend take control of appropriateness judgments, knowledge claims, and collective learning? If this trend reshapes our micropolitics of knowledge, how does this take place? In particular, does our approach to collective learning help us understand how contemporary data-driven AI provides opportunities for strong actors, including BRT, to reframe or orchestrate collective learning processes and thus to intervene in and reshape society? It is important to dispel a misunderstanding. There are two ways of using AI and machine learning, which depend very much on the purposes for which they are used. One way is a very generic one, where the same models are routinely applied to massive and heterogeneous datasets built on mergers of previous databases, with very general questions about issues showing extreme social complexity, and with algorithms that always give an updated answer. This contrasts with another way: asking specialized questions of reliable datasets, helping researchers work out a way of synthesizing, identifying, and creating focused solutions based on that kind of reliable data and data analysis. The second may have a very useful function and ability to solve problems in society. It can probabilistically find, much more effectively than human judgment, whether in fact somebody has a given illness. AI and machine learning are adding a level of technological efficiency to problem solving in society. The issue then is whether and how the first use of machine learning, driven by unreliable and proprietary data and broadly applied on a massive scale, will be used by powerful actors to force people to give up their appropriateness judgments and value judgments in favor of proprietary AI algorithms. Once this trend takes off, technocrats will "let AI decide", say, "who are the best performing teachers in the country",7 what proprietary statistical algorithm is used to identify them, how the models are specified and estimated, what thresholds are used, etc. AI as statistical intelligence is then exposed as problematic in many ways. The relational dimension of knowledge claims can be used to exclude people with different knowledge claims, often because polemical posturing can hide other forms of action that are made invisible by the absence of reality checks.
7 O'Neil (2016), in Weapons of Math Destruction, shows that such metrics have already been used in various places in America in order to evaluate teachers, with the most awful results. The best teachers, the most connected, who care most about education, are said to perform the worst on the metrics used, and are not being promoted or not getting paid as much, because all of those metrics are used. It is the nature of these statistical uses, rather than the nature of the mathematics, that is at stake.
Distinguishing between different kinds of use of AI will help define appropriate new conventions around it. As yet, however, no one knows the algorithms, which can be modified in discreet ways, who will be in charge of these algorithms, or what kind of accountability and compliance powerful actors are prepared to accept, if any. Recall what our case in point shows: when there is no pressure of time, growers share with people who have the same basic ideas as them: neighbors, friends, everyday relations embarked in the same conditions. As soon as there is time pressure and the stakes are high, they switch knowledge claims and turn to the pioneers for critical urgent solutions. In contemporary societies where local community institutions are threatened, and individuals are increasingly atomized, it becomes difficult to rely on neighbors or friends to undertake common projects in the medium or long term. It is rather the short term and the decisions taken under time pressure that become the norm. In this context, informal epistemic authorities become key drivers in reframing and evaluating information, this time not at a local and contextualized level but as "influencers" of a specific target audience on social media. Identifying such central "influencers" with epistemic status in real time is easy for BRT, based on applying the key player routines that are available in any social network analysis software. BRT and its AI will have exclusive knowledge about the division of epistemic work and people's epistemic status in any social milieu. This is where AI-based selection of knowledge claims becomes manipulable, as statistical analysis becomes fast, dynamic, and recursive. We will be drawn into collective learning processes and navigate them like drivers caught in a traffic jam who all use the same application to look for directions. Instructions come from this application's platform, which uses reports from drivers who are 50m, 500m, 2km, etc. ahead of each driver in the jam, whose relative geopositioning is updated and used for the selection of solutions in real time. These permanently updated statistical analyses compute average behaviors interpreted as norms and instructions that people should follow, in collective learning processes recursively updated in real time (Lazega, 2020c). One hypothesis about the likely implication of this generalized AI-boosted navigation of such processes online is that the orchestration of micropolitics of knowledge by BRT will facilitate polarization of epistemic social spaces by systematizing and generalizing polemical claims. Hegemonic platforms managing collective learning processes can use AI in ways that help actors maximize a form of epistemic utility that has no built-in reality checks. When time is not available for realistic inquiries, and experts are disqualified by setting them up against each other, two types of knowledge claims from our typology are left as induced forms of appropriateness judgment: polemical and initiated. Each brings its load of so-called fake news and conspiracy theories that polarize the public more or less antagonistically at the macrosocial level. Indeed, one of the easiest tools of AI statistical machinery is factor analysis, increasing contrasts between positions, beliefs, opinions, and behaviors.
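To make the point concrete: the kind of "key player" routine mentioned above reduces, in its simplest form, to centrality scores on a directed advice or following network. A minimal sketch with made-up users follows (illustrative only; real platforms work at vastly larger scale and with proprietary data):

```python
import networkx as nx

# Hypothetical ties: an edge (a, b) means "a seeks advice from (or follows) b",
# so high in-degree flags actors whose guidance is widely sought.
G = nx.DiGraph([("u1", "u3"), ("u2", "u3"), ("u4", "u3"),
                ("u5", "u3"), ("u4", "u5"), ("u1", "u2")])

in_deg = nx.in_degree_centrality(G)    # share of others turning to each actor
btw = nx.betweenness_centrality(G)     # brokerage between parts of the network

# Ranking by in-degree surfaces the "influencers" with epistemic status.
for n in sorted(G, key=in_deg.get, reverse=True):
    print(f"{n}: in-degree {in_deg[n]:.2f}, betweenness {btw[n]:.2f}")
```

Run recursively on live data, such routines are what allow a platform to track the division of epistemic work in any milieu in real time.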
Indeed, one of the easiest tools of AI statistical machinery is factor analysis, which increases contrasts between positions, beliefs, opinions, and behaviors. Once the dividing lines have been defined, polemical claims do not care about contents and changes in controversial positions "as long as you are on our side". Polemical claimants are not embarrassed by counterarguments, nor even by their own changes in moral values, falsifications, and intoxications. AI-based polemical knowledge claims update their own contents without much care for coherence and consistency, revising analyses and narratives in order to bring more people on board on the "right" side of the non-discussion. For example, regardless of Trump's failures, outright lies, daily contradictory improvisations, and racist, sexist, and anti-democratic actions, media criticism of Trump often reinforces Trump voters in feeling that he is on the right track, whatever "the right track" means.

Of course, once polemical claims and radicalization have been generalized, BRT can also offer arbitration services. For example, Twitter serves Trump's polarizing tweets that he posts himself. But since launching a policy on "misleading information" in May 2020, Twitter has clashed with Trump. When he described mail-in ballots as "substantially fraudulent", the platform told users to "get the facts" and linked to articles that proved otherwise. After Mr Trump threatened looters with death – "when the looting starts, the shooting starts" – Twitter said his tweet broke its rules against "glorifying violence". The American president can then argue that "social media platforms totally silence conservatives' voices." And so on. This does not mean that the media should stop criticizing Trump but that, minimally, the media should also find ways to strengthen other knowledge claims that take into account the dynamics of polemical knowledge claims. In particular, this is needed to stop the AI-boosted massive polarization of opinions and publics by BRT (or by manipulation of BRT), which should therefore be recognized and regulated at least like any other company belonging to the media industry.
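How factor analysis can sharpen such dividing lines is suggested in the minimal sketch below, run on synthetic opinion data; the data, the two "camps", and the single-factor model are our illustrative assumptions, not a claim about any platform's actual pipeline.

```python
# A minimal sketch of factor analysis increasing contrasts between camps,
# on hypothetical survey data (200 users, 10 opinion items).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
camp = rng.integers(0, 2, size=200)            # latent "side" of each user
baseline = camp[:, None] * 2.0 - 1.0           # +1 or -1 response tendency
X = baseline + rng.normal(scale=0.8, size=(200, 10))

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(X)                   # one factor = the dividing line

# Factor scores separate users into two opposed poles, one per camp
print(scores[camp == 0].mean(), scores[camp == 1].mean())
```

Once positions are projected onto such a single axis, targeting each pole separately becomes trivial.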
Resisting polemical claims, massive polarization, and radicalization with education
Although online exchanges have an influence on actors' appropriateness judgments, the nature of this influence remains to be further understood. Users evolve in heterogeneous and fragmented informational landscapes. In this landscape, filtering is complex, based on individuals' sociodemographic characteristics, horizons, and selections of sources, but also on secondary filtering performed by indirect sources, thus creating different and multilevel confinements (Stein et al., 2020). With respect to values and their ranking in terms of priorities, for example, because people tend to seek like-minded others in discussions of key sociopolitical issues – as in Twitter users' homophilous following based on ideological hashtags and their ad hoc publics (Xu, 2020) – the Twittersphere as a public arena keeps influencing actors' normative choices. It can, for example, reinforce the priority of a specific value by reducing users' exposure to contradiction and by locking them up in echo chambers (Crick, 2020). Such echo chambers are believed to facilitate myopic misinformation spreading and contribute to radicalizing the
political discourse. Quantifying their presence could help gauge the effects of polarization on the spread of information and identify the political leaning of users in the strongly connected component of the networks that are ideally suited for polemical claims (a minimal sketch of such quantification appears at the end of this section).

Another example concerns the influence of "bots" on information elaboration. Humans are no longer the only actors in online social networks. There are a variety of computer-based and algorithmic actors broadly known as "bots". Bots have become central to online phenomena such as social movements and open-source software, and are reshaping how we think of social actors in these situations. Bots are loudspeakers of actors who try more or less effectively to influence the conversations, thoughts, opinions, and ideas within these online social spaces by broadcasting these voices in ways that cut across boundaries (internal fault-lines of organized communities and external fault-lines across communities) to reinforce provocative polemical claims. As analyzed by Gardner and Butts (2020) in their study of the social influence of bot networks in Twitter discourse around the time of the 2016 US presidential election – bot accounts (but not of the bots' "own volition"!) masquerading as gun-owning housewives, young Black Lives Matter activists, Twitter adjuncts for obsolete local news companies, non-existent political organizations – such artificial agents engage in roles specific to their social space and, by performing these roles, strengthen legitimation tactics and the spread of fake news. Thus, bots do have an effect on people's appropriateness judgments, hence on knowledge claims and on the redefinition of assertions as truth or fake news, influencing social and political events. The increased and stealthy dominance of bots in online networks, as well as the advancement of their underlying technology, shows the need to further understand how their engagements change the overall dynamics of collective learning in online social networks (Lalor et al., 2020). The extent and conditions under which they are efficient in reinforcing cohesive but fragile pre-existing ties, or in brokering across structural/cultural holes cutting across socio-cultural boundaries – thus influencing actors' choices of identity criteria – remain to be examined. It also remains to be seen when they are simple loudspeakers of their masters' opinions – reinforcing the latter's status – and when they trigger rejection, for example when users with larger spreading capacity are able to escape their echo chambers by reaching individuals with more diverse leanings (Cota et al., 2020).

As suggested by the threat to libraries mentioned in the introduction, these trends represent a threat to educational institutions. Seeking information online is not learning; it is just retrieving ready-made information. But massive use of these techniques will have consequences, precisely because they drive people to switch knowledge claims, always in the same direction: to more polemical ones. Institutions of learning and their knowledge claims are thus in danger of being undermined in several ways. It is safer for algorithms to manipulate appropriateness judgment and information elaboration in ways that interfere with collective learning when the epistemic status of educational
professions (teachers, librarians, public documentalists, academics, etc.) is weakened and the hegemonic platform decides which knowledge will be capitalized upon and accessed by the public. Academics are next in line not just because AI and Big Data companies interfere with their knowledge claims, but because their own institutions consider that they are not innovative enough, that they slow down the innovation process, or that they are not fast enough in picking up innovations in their fields and teaching them. This disqualification can become a justification for interference in academic curricula and academic freedom. Such alliances between neoliberalism and AI-boosted BRT platforms remain to be studied, because they could lead to a privatization of the social sciences in a data-driven mode that is intimately connected to the valorization of computational social sciences, data sciences, and big data in the new university. The two worlds are tightly connected in ways that are known to academic insiders but remain to be identified by the public.

Upping these stakes to education could be part of rethinking it more broadly, so that citizens are stronger in making their appropriateness judgments; in holding their own in places where they can meet different people and argue over and challenge beliefs and epistemic authorities, based on reality checks carried out in the company of alters; in building new institutions; and more generally in navigating these new meso-/macro-politics of knowledge, whether in private or public lives. Keeping controversies public and accessible is as essential in complex organizational and class societies as it ever was, but more difficult with AI-boosted BRT. Educational systems therefore need to respond by developing what could be called "epistemic network literacy", so that citizens are able to switch knowledge claims, especially with reality checks, i.e. are able to associate knowledge with reality checks, relationships, and epistemic communities. Public schools and public libraries must be reinforced as institutions where diverse (future) citizens learn and practice fact checking with others. There will be no counterpower to AI-boosted BRT without the development of a critical mindset in the public space.
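The quantification of echo chambers mentioned earlier in this section could, under strong simplifying assumptions, look like the following sketch; the toy network, the "leaning" labels, and the choice of assortativity as a measure are our illustrative assumptions, not the cited authors' method.

```python
# A minimal sketch of quantifying an echo chamber on a hypothetical network:
# homophily by political leaning, and the strongly connected "core" where
# reciprocal exchange concentrates.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("a", "b"), ("b", "a"), ("b", "c"), ("c", "a"),
                  ("d", "e"), ("e", "d"), ("c", "d")])
nx.set_node_attributes(
    G, {"a": "L", "b": "L", "c": "L", "d": "R", "e": "R"}, "leaning")

# Largest strongly connected component of the network
core = G.subgraph(max(nx.strongly_connected_components(G), key=len))

# Assortativity near 1 means ties stay within one leaning (an echo chamber)
print(nx.attribute_assortativity_coefficient(G, "leaning"))
print(sorted(core.nodes))
```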
Resistance from independent "local" AI and AI regulation
Thus, in today's period of challenge to traditional cultural and epistemic authorities, some powerful public and/or private actors weaponize AI for their polemical claims, strengthening their political control over society through increased sophistication and efficiency in the management of digital communication media that do not want to acknowledge themselves as media. AI can help identify and target specific audiences and sub-populations with intoxicating "fake news" and "angry mood manipulations", as coined by journalists, police, and military institutions in charge of monitoring new kinds of conflicts, if not wars. The implication of this view is that, in democracies, increasing epistemic control by strong actors reshaping society requires more regulation that is aware of these micropolitics of knowledge.
Resistance to weaponized AI may depend on smaller collectives using AI algorithms in ways different from the large-scale data hegemons. At the level of the education system, not just that of advice networks, collective power can then be mustered by bringing together scattered local actions to protect teachers against uses of technology that are detrimental to education. But will local communities be able to use their own local (as distinct from generic and global) AI solutions? Efforts to institutionalize peer-to-peer (P2P), decentralized solutions using open-source software are often cited as exploratory terrain for new forms of collective responsibility and for innovative organizations/professions relying on different conceptions of efficiency and control. Such efforts provide one example of jointly local and global collective action that resists goal drift in contemporary digitalization. In the self-managed production of algorithms, a particular internet community of producers-as-users – who of course need expertise and help from the outside – tries to control the information its members receive, produces its own algorithms, and develops and uses its own platforms without depending on BRT. The future development, autonomy, and spread of such solutions suggest that resistance needs to be promoted at three levels simultaneously (micro-, meso-, and macro-), as in guild-like organizations. It is there, with capacities that are jointly local and global, that such collectives may be able to hold their own and compete with the big players that truly dominate. This is equivalent to saying that resistance to such epistemic and cultural breakdown requires regulation, including institutions that address the issue of regulating the regulators when regulation becomes increasingly anormative.

Many "simple" algorithmic decision-support tools have been developed that could be applied to all kinds of organizational procedures. Alongside the "simple" algorithms already present, so-called deep learning algorithms have been developed. As often explained, these tools, starting from a learning phase based on a large number of examples, are able to "learn" from the data with which their designers feed them and to provide an output, i.e. to find correlations in a large mass of data in order to perform classification tasks. Algorithmic tools are in constant development to mechanize the search for such correlations, even when AIs turn out to be biased decision aids. It is, on the one hand, the data integrated in the system and, on the other hand, the processing carried out by the machine on this data that must be queried. This means that, beyond the choice of data, data processing, responsibility, and algorithmic ethics as so many points of vigilance for regulatory work, it is also necessary to develop supervision functions for learning algorithms. Systematic testing of the algorithmic tools used requires traceability of the whole decision process in the software where AI is used (a minimal sketch of such traceability follows below). With this requirement of traceability, from a political and regulatory standpoint, the bar is high for public authorities caring for the public good.
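What such traceability of an algorithmic decision process could look like is sketched below; this is a minimal illustration under our own assumptions (the toy data, the logistic model, the logged fields), not any regulator's actual standard.

```python
# A minimal sketch of a traceable decision process around a "simple" classifier.
# Every prediction is logged with its input, model version, and a hash of the
# training data, so supervisors can reconstruct how each decision was produced.
import hashlib
import json
import logging

from sklearn.linear_model import LogisticRegression

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-audit")

# Hypothetical training data for a decision-support classification task
X_train = [[0.1, 1.0], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]]
y_train = [0, 1, 0, 1]

# Hash the training set so each decision can be traced to the data behind it
data_hash = hashlib.sha256(json.dumps(X_train).encode()).hexdigest()[:12]
model = LogisticRegression().fit(X_train, y_train)

def decide(features, model_version="v1"):
    """Return a decision and log everything needed to audit it later."""
    decision = int(model.predict([features])[0])
    log.info(json.dumps({
        "model_version": model_version,
        "training_data_hash": data_hash,
        "input": features,
        "decision": decision,
    }))
    return decision

decide([0.85, 0.15])
```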
In the context of platform capitalism, as in previous eras, the emergence of technology, as well as the regulation of technology in general, is driven by struggles for social control, as Weber showed about the nature of work and the emergence of manufacturers and factories (Weber, 1921; Kranzberg, 1986; Lazega, 2001; Hyman, 2018). Control before efficiency, again. Regulatory struggles around platform capitalism are precisely driven by the relational infrastructures of institutional entrepreneurship, combined with mobilization through AI-driven ideological, polemical knowledge claims and downstream polarization that frame interpretations of interests in controversies and coalition formation (Culpepper, 2008).

Although new powers need new checks and balances, it is difficult to regulate the AI activities of large hegemons. How these platforms operate, how they select their knowledge claims, how their algorithms – which are legally defined as private – work, is not public knowledge. Many algorithms are not even patented, in spite of a legally expensive IP regime that only few actors can afford to use to their advantage. The real know-how used to control innovation is not public and remains under the platforms' control. Private and secret algorithmic activity is hard to regulate, unless public-minded algo-police, algo-regulators, and algo-judges, working with computer scientists, find ways of monitoring it. Indeed, AI recalibrates its instruments and revises its analyses on a continuous basis, following the permanent data updates streaming in, without any requirement of coherence and relevance vis-à-vis previous analyses. Thus, it adjusts to shifts in the signals and data it collects and elaborates, but it makes itself increasingly unaccountable at the same time. The capacity for continuous updates in response to new massive data cannot yet be reverse-engineered so as to be understood and checked/verified, unless perhaps some AI algorithms are specifically designed to check on other AI algorithms in order to avoid the loss of control of these processes.

Thus, BRT, as an unprecedented power in terms of "intervention" in social development, does not yet have real counter-powers. For the time being, democratic societies have not created the institutions of regulation adapted to this new reality and balance of power. Perhaps this will be made possible by turning the creation, diffusion, and institutionalization of algorithms into a multi-stakeholder social process (Maccarini, this volume) in which the epistemic professions have an important role to play together with other institutional actors. The social sciences themselves may be in a position to provide regulatory institutions with expert "capacity subcontracting" on the micropolitics of knowledge in the social process of algorithm creation, diffusion, and institutionalization. Here there is a whole research agenda about institution building and the morphogenesis of these social processes in and through educational institutions (Archer, forthcoming). Now that relational data exists almost at the level of humanity, regulating its ownership, use, and control in epistemic and cultural authority becomes a matter of human rights, not just of the protection of privacy, as attempted by new laws. Regulators around the world are also moving to limit the power of the tech giants and regulate social media content,8 but their approach is
mostly based on antitrust laws that were created a century ago and that are imperfect for corralling internet firms. In addition, BRT itself is allowed by public authorities to design its own self-regulation in a familiar pattern (Lazega, 2016), even in a context where competition for algorithms can no longer in practice be stopped, and where algorithms can circulate and be reused for purposes for which they were not meant in the first place (Al-Amoudi, this volume). The powers behind these technological developments, such as the military and industry, will not make it easy to build these institutions. Military institutions and private companies, including BRT, self-regulate in the management of algorithms that are considered proprietary and almost impossible to open, as they are black-box internal operations, i.e. operations that individuals and regulators cannot follow. They reach a level of control of technology equivalent to control of infrastructure and of access to infrastructure as a public good. This kind of institutional capture is perhaps the functional equivalent of the total control discussed above, an extreme situation that current theories of regulation may not be able to address without a theory of the meso-/macro-politics of knowledge.

8 As shown by congressional hearings in the US (see www.nytimes.com/live/2020/07/29/technology/tech-ceos-hearing-testimony), BRT abuse their market dominance, for example in online advertising and in policy consulting on data management and analyses, with anti-competitive practices (predatory plots to take out competitors, buying start-ups to stifle them, unfairly using their data hoards to clone and kill off competitors) and are politically biased (muzzling viewpoints, facilitating misinformation, election profiteering, and interference in labor issues).
Conclusion
To conclude, a neostructural approach keeps in mind that transitions in the recent past meant mass deskilling by Taylorism and social Darwinism, as a condition of possibility of the subordination of the masses through the routinization of work. Comparing human advice networks with AI response routines underlines the importance of a trained capacity for relational contextualization of knowledge and action through boundary work, critical switchings in reflexivity (appropriateness judgments), suspended moments of symbolic interaction, and micropolitics of knowledge. Even when AI responses are personalized to speak to our uncertainties based on routine analyses of our perplexity logs, they will not mobilize the same capacities. Rooting the analysis of collective learning in combined relational infrastructures and normative (i.e. cultural) controversies shows the need to think in these terms of micropolitics of knowledge in order to understand the effects of AI technology, here AI-boosted BRT, on social life, in particular on the collective learning that will be necessary for future transitions. Among the implications of this approach, it is essential that the public space in which political controversies take place be preserved as public (not private), if regulation of AI is to be developed, and that new efforts to promote and spread an epistemic literacy that values reality checks be sustained. It is also necessary to remind the increasing number of applied mathematicians, physicists, computer scientists, and management scholars studying social processes that the latter cannot be accounted for purely mechanically.
Collective learning takes place when actors switch knowledge claims, challenging each other in the practical coordination of collective action where social cohesion comes attached to epistemic coherence; not by brute computational force undermining these switches and challenges, and operating in permanent immediacy without history or perspective. This changing of the epistemic authority guard, with its unidimensional, polemical, polarizing, and radicalizing epistemic authority constructed by AI-boosted BRT, does not offer collective learning, only a concentration of power and an epistemic entrenchment through repetitive boundary work that neutralizes collective action instead of making it durable. This ends up incapacitating society and its collective learning in public controversies without challenges and reality checks, whether lay or expert. Recall the lesson of the wine producers' study and its observation of knowledge claims: practical knowledge and collective learning are produced by diversifying knowledge claims in appropriateness judgments related to concrete decisions, not by creating divisive routines of unidimensional knowledge claims that can only lead to obsessive posturing against "the other side". The challenge, disqualification, and replacement of epistemic authorities by BRT takes place via a mechanism that neostructural micropolitics of knowledge can account for: changes in appropriateness judgments that are based on derealization, polarization, and unaccountability. Mechanical epistemic authorities combine all three. This is hardly surprising in societies where shrinking and self-segregated elites concentrate increasing amounts of power and wealth in similar ways (Lazega, 2015). Many think that this AI-driven recalibration will be life-changing for individuals and their ability to exercise self-control and switch knowledge claims, but also for societies in terms of inequalities, and for humanity in terms of a genetic engineering of human beings threatening a permanent divide between Über- and Untermenschen. As many commentators have asserted, regulation of such an epistemic regime and its epistemic authorities is therefore a true challenge for democracy.

Societies seem to face more than a change of epistemic authorities; this change seems to be part of a more general breakdown of cultural authority, that of the professions, including educational and knowledge-intensive professions such as that of the librarian mentioned in the introduction (Laurent and Pestre, 2015). Upping the stakes against forms of education that deny the relational and collegial dimension of collective learning in order to emphasize its interactional, hierarchical, and technocratic dimension requires new organizational thinking (Lazega, 2020a, 2020b). Indeed, developing this awareness of collective learning processes requires education again, including about the micropolitics of knowledge, the elaboration of appropriateness judgments, reality checks, and knowledge claims, combined with epistemic network literacy (which was taken away from citizens in 2015, undermining a phenomenal social development of network literacy9).
9 In 2015, BRT took away the API that allowed individuals using their services to visualize their networks, in particular ties among their contacts, and thus to become social network analysts just as they read and write.
This also requires the creation of institutions for the regulation of uses of AI, i.e. institutions that are able to identify generic techniques that apply across all fields, but also to monitor the construction of the datasets on which machine learning algorithms are trained, so as to track the decisions that are made for that purpose. Preventing the routinization of decisions that should not be routinized in the management of public affairs, especially not on the basis of highly selective and intoxicating machine learning, needs more switching between knowledge claims. Preventing this routinization also requires that human beings retain the possibility of turning these machines off or putting them on stand-by if needed; in other words, making sure that human beings have the possibility to politicize these deliberations and regain control over them. Otherwise, in contemporary technocratic bureaucratization, political discussions of norms of equality and justice will be further evaded in the permanent calculation and estimation of probabilities that pretend to substitute for these deliberations.

What is essential to being human in the contemporary new wave of digital bureaucratization? From the perspective of organizational morphogenesis, the answer is institutions of algorithmic regulation that are not designed by the BRT hegemons of the neo-liberal organizational society. Taking the micropolitics of knowledge into account when looking at the effect of AI on controversies leads to the conclusion that regulation of AI-boosted BRT is essential if societies are not to break down, and that such regulation does not belong to BRT itself. Among the implications of this approach, it is essential that the public space in which political controversies take place be preserved as public (not private), that regulation of AI be developed, and that new efforts to promote and spread an epistemic literacy that values reality checks be sustained. In these efforts, collective learning and education will be of vital importance in the coming transitions. For example, in a world where people are required to think about a meaningful life without work (unavailable because of the way society and its technology are changing), the very purpose of education would be to give somebody a sense of life, autonomy and interest, and a sense of citizenship. Beyond instrumentality, employability, and ruthless competition, these values would be fostered in the nature of education, as against a commodified system where all the intellectual property of the education system is owned by hegemons and everything comes top-down through imposed curricula.

The case of the wine producers, who learn collectively and locally in order to face the ecological transition, shows that there is hope in bringing reality checks back into the discussion. It illustrates that, at a local level, it is possible to decrease dependency on routinized external prescriptions in the context of a collegial collective learning process that enables individuals to reach solutions adapted to their situations by switching between different types of knowledge claims. Our approach to collective learning has implications for new thinking about education and about how AI tools could be used to come to terms with this new situation. Education as a subset of cultural change must include a sophisticated epistemic network literacy combined with an understanding of the micropolitics of knowledge. In that respect, much remains to be done.
References
Al-Amoudi, I. and Lazega, E. (eds.) (2019). Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Archer, M. S. (2013 [1979]). Social Origins of Educational Systems. London: Routledge.
Archer, M. S. (ed.) (2014). Late Modernity: Trajectories Towards Morphogenic Society. Berlin and Heidelberg: Springer.
Archer, M. S. (ed.) (2016). Anormative Regulation in the Morphogenic Society. Berlin and Heidelberg: Springer.
Archer, M. S. (forthcoming). In M. Carrigan, D. Porpora and C. Wight (eds.), Post-Human Futures, Vol. III. London: Routledge.
Arendt, H. (1973 [1951]). The Origins of Totalitarianism. Boston, MA: Houghton Mifflin Harcourt.
Berger, P. L. and Luckmann, T. (1966). The Social Construction of Reality: A Treatise in the Sociology of Knowledge. London: Penguin.
Blau, P. M. (1964). Exchange and Power in Social Life. New York: John Wiley.
Bourricaud, F. (1964). Sur deux mécanismes de personnalisation du pouvoir. In L. Hamon and A. Mabileau (eds.), La Personnalisation du pouvoir. Paris: Presses Universitaires de France.
Breiger, R. L. (2010). Dualities of culture and structure: seeing through cultural holes. In J. Fuhse and S. Mützel (eds.), Relationale Soziologie: Zur kulturellen Wende der Netzwerkforschung (pp. 37–47). Berlin: Springer Verlag.
Brock, T., Carrigan, M. and Scambler, G. (2016). Structure, Culture and Agency: Selected Papers of Margaret Archer. London: Taylor & Francis.
Carrigan, M. (2016). The fragile movements of late modernity. In Morphogenesis and the Crisis of Normativity (pp. 191–215). Cham: Springer.
Cohen, W. M. and Levinthal, D. A. (1990). Absorptive capacity: a new perspective on learning and innovation. Administrative Science Quarterly, 35 (1): 128–152.
Compagnone, C. (2004). Agriculture raisonnée et dynamique de changement en viticulture bourguignonne: connaissance et relations sociales. Recherches Sociologiques, 35 (3): 103–121.
Compagnone, C., Lamine, C. and Dupré, L. (2018). The production and circulation of agricultural knowledge as interrogated by agroecology: of old and new. Revue d'Anthropologie des Connaissances, 12 (2).
Cota, W., Ferreira, S., Pastor-Satorras, R. and Starnini, M. (2020). Quantifying echo chamber effects in information spreading over political communication networks. Paper presented at the INSNA Sunbelt Conference, July 2020.
Crick, T. (2020). Using group detection and computational text analysis to examine mechanisms of disinformation on Reddit. Paper presented at the INSNA Sunbelt Conference, July 2020.
Culpepper, P. D. (2008). The politics of common knowledge: ideas and institutional change in wage bargaining. International Organization, 62: 1–33.
Desrosières, A. (2016). Gouverner et Prouver. Paris: La Découverte.
Donati, P. and Archer, M. S. (2015). The Relational Subject. Cambridge: Cambridge University Press.
Dorin, B. and Joly, P. B. (2020). Modelling world agriculture as a learning machine? From mainstream models to Agribiom 1.0. Land Use Policy, 96, 103624.
Éloire, F. (2010). Une approche sociologique de la concurrence sur un marché: le cas de la restauration lilloise. Revue française de sociologie, 51 (3): 481–517.
Fine, G. A. and Kleinman, S. (1983). Network and meaning: an interactionist approach to structure. Symbolic Interaction, 6: 97–110.
Gardner, R. E. and Butts, C. (2020). Passing while bot: fake news network role analysis from Twitter data. Paper presented at the INSNA Sunbelt Conference, July 2020.
Glückler, J., Lazega, E. and Hammer, I. (2017). Knowledge and Networks. Basingstoke: Springer Nature.
Habermas, J. (1976). Legitimation Crisis. London: Heinemann.
Habermas, J., Honneth, A. and Joas, H. (1991). Communicative Action (Vol. 1). Cambridge, MA: MIT Press.
Hyman, R. (2018). It's not technology that's disrupting our jobs. Available at: www.nytimes.com/2018/08/18/opinion/technology/technology-gig-economy.html
Krackhardt, D. (1990). Assessing the political landscape: structure, cognition, and power in organizations. Administrative Science Quarterly, 35: 342–369.
Kranzberg, M. (1986). Technology and history: "Kranzberg's laws". Technology and Culture, 27: 544–560.
Lalor, J., Berente, N. and Safadi, H. (2020). Bots versus humans in online social networks: a study of Reddit communities. Paper presented at the INSNA Sunbelt Conference, July 2020.
Laurent, C. and Pestre, D. (2015). Régimes de connaissance et modèles de développement. In Colloque La Théorie de la Régulation à l'épreuve des crises (pp. 9–12). Paris: Université Paris-Diderot-Inalco.
Lazega, E. (1992). Micropolitics of Knowledge: Communication and Indirect Control in Workgroups. New York: Aldine de Gruyter.
Lazega, E. (2011). Pertinence et structure. Revue Suisse de Sociologie, 37: 127–149.
Lazega, E. (2014). Appropriateness and structure in organizations: secondary socialization through dynamics of advice networks and weak culture. In Brass, D. J., Labianca, G., Mehra, A., Halgin, D. S. and Borgatti, S. P. (eds.), Contemporary Perspectives on Organizational Social Networks: Research in the Sociology of Organizations (pp. 377–398). Bingley, UK: Emerald Group Publishing.
Lazega, E. (2015). Body captors and network profiles: a neo-structural note on digitalized social control and morphogenesis. In Archer, M. S. (ed.), Generative Mechanisms Transforming the Social Order (pp. 113–133). Dordrecht: Springer.
Lazega, E. (2018). Networks and institutionalization: a neo-structural approach. Connections, 37: 7–22.
Lazega, E. (2019). Swarm-teams with digital exoskeleton: on new military templates for the organizational society. In Al-Amoudi, I. and Lazega, E. (eds.), Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Lazega, E. (2020a). Bureaucracy, Collegiality and Social Change: Redefining Organizations with Multilevel Relational Infrastructures. Cheltenham: Edward Elgar Publishers.
Lazega, E. (2020b). Traçages et fusions: du danger d'enrichir les bases de données de réseaux sociaux. La Vie des Idées, https://laviedesidees.fr/Tracages-et-fusions.html
Lazega, E. (2020c). Embarked on social processes (the rivers) in dynamic and multilevel networks (the boats). Connections, 40: 60–76.
Lazega, E. (2020d). Perplexity logs: on the social consequences of seeking advice from an Artificial Intelligence. In Carrigan, M., Porpora, D. and Wight, C. (eds.), Post-Human Futures. London: Routledge.
Maccarini, A. M. (2019). Deep Change and Emergent Structures in Global Society. Berlin: Springer International Publishing.
Montes-Lihn, J. (2014). Apprentissage inter-organisationnel au sein des réseaux inter-individuels: le cas de la conversion de viticulteurs à l'agriculture biologique. Thèse de Doctorat, Université Paris Dauphine, sous la direction de E. Lazega et C. Compagnone. www.researchgate.net/publication/344872059_THESE_JAIME_MONTES_LIHN
Montes-Lihn, J. (2017). Collective learning and socialization during the ecological transition: the case of organic and biodynamic wine producers of Côte de Beaune. Política & Sociedade, 16 (35): 403–431.
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Pachucki, M. A. and Breiger, R. L. (2010). Cultural holes: beyond relationality in social networks and culture. Annual Review of Sociology, 36: 205–224.
Polanyi, M. (1967). The Tacit Dimension. Garden City, NY: Anchor.
Potts, M. (2019). In the land of self-defeat. New York Times, Oct. 4, 2019.
Reynaud, J.-D. (1989). Les Règles du jeu: L'Action collective et la régulation sociale. Paris: Armand Colin.
Reynolds, C. W. (1987). Flocks, herds and schools: a distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21: 25–34.
Schultz, J. and Breiger, R. L. (2010). The strength of weak culture. Poetics: Journal of Empirical Research on Culture, the Media, and the Arts, 38: 610–624.
Selznick, P. (1957). Leadership in Administration. Evanston, IL: Row, Peterson & Co.
Shibutani, T. (1978). The Derelicts of Company K: A Sociological Study of Demoralization. Berkeley: University of California Press.
Stein, J., Poiroux, J. and Roth, C. (2020). User confinement on Twitter: where structural and semantic communities intersect. Paper presented at the INSNA Sunbelt Conference, July 2020.
Stryker, S. (1980). Symbolic Interactionism: A Social Structural Version. London: Benjamin/Cummings.
Varman, R. and Al-Amoudi, I. (2016). Accumulation through derealization: how corporate violence remains unchecked. Human Relations, 69 (10): 1909–1935.
White, H. C. (1981). Where do markets come from? American Journal of Sociology, 87: 517–547.
White, H. C. (2008). Identity and Control: How Social Formations Emerge. Princeton, NJ: Princeton University Press.
8 Can AIs do politics?
Gazi Islam
Discussions of AI have returned, time and again, to the question of the uniqueness of human beings vis-à-vis AI (e.g. Cantwell Smith, 2019). Discussions of this uniqueness have taken varied forms, for instance, asking whether AI systems can match the extent or kinds of intelligence shown by humans, or whether they can demonstrate similar perceptual or computational skills (e.g. Insa-Cabrera et al. 2011). Beyond the cognitive, however, questions have arisen as to whether AI can develop or master the emotional faculties of humans, such as humor, love, or desire (e.g. Levy 2007). Related to the latter question is whether the social capacities of human beings, such as empathy, love, or the ability to care (cf. Thrun 2004; Turkle 2011), are amenable to machinic simulation or even authentic manifestation. Such discussions are central to broader questions of the personhood and ethics of AI, as discussed, for example, in the draft report of the EU Commission on Civil Law Rules on Robotics (Delvaux 2017).

Yet even discussions of the "capacity for empathy", for example, define the human in terms of individual capacities, whether cognitive or socio-emotional, that are imagined as features of human brains. Most comparisons of AI and humans ask if AI can match humans by focusing on these individual aspects of "human nature"1 (Rubin 2003; Sack 1997). By contrast, what if human nature involved aspects that are not limited to individual capacities, or that lean on those capacities while also forming analytically distinct systems? Language, for instance, is both a human cognitive capacity and an autonomous system of rules and practices; while language enrolls individual speakers and operates through their cognition, it would be a truncated view of language to reduce it to individual behaviors (see Graham 2010).
1 My initial use of scare quotes around the term "human nature" signals my agnosticism as to the existence or status of such a nature in a strong sense, and notes that the debate about AI–human relations depends to some extent on the conception of the human that is adopted. The discussion below of the political nature of being human is meant in an exploratory spirit, and not to assert an adherence to the "zoon politikon" conception as definitive of an essential human nature. This approach clarified, for ease of reading, I subsequently refrain from using scare quotes around this term.
DOI: 10.4324/9780429351563-8
Similarly, social practices and systems that seem to emanate from human habitats, such as culture, myth, and religion, are both rooted in and move beyond individual cognition and emotion.

In this chapter, I consider one such description of human nature that incorporates yet eludes individual capacities: the concept of the "zoon politikon" or political animal, which is central to Aristotle's concept of the human (e.g. Tuncel 2012). Given that the definition of the human as a political animal has a long pedigree, reaching back to classical roots, it is surprising that recent discussions have not taken up the uniqueness of the political aspects of humans versus robots. Particularly in an era where algorithms, computerized systems, and "bots" are deeply embedded in the political (Orr 2002; Zuboff 2019), questions about the political nature of AI are highly salient to understanding how such systems complement, enable, or alternatively threaten or even substitute for human nature. The current chapter addresses this issue, stated broadly as the question "Can AIs do politics?" Stating the question in this somewhat surprising form draws attention to the fact that our considerations of the "humanness" of AI have left out a large part of human nature, one that, considered from one angle, seems odd to imagine a mechanical system performing. Yet on the other hand, our politics are already deeply penetrated by mechanical elements, and not only is it possible to imagine, but many scholars are predicting, a deep interpenetration of AI systems with political decision making (Andreou et al. 2005). This sense of simultaneous incredulity and inevitability calls for an explanation of the ways in which AI can (and can't) do politics.

The rest of the chapter will unfold as follows. First, after briefly summarizing the notion of the "zoon politikon", I unpack several possible ways to think about the political that lead to different possibilities for the role of AI. Summarizing these as "system" versus "social" views of politics, I describe these broad characterizations as having engaged in a tug-of-war around the sphere of the political, with implications for how "mechanical" the political can be considered to be. Next, I broach whether and how the emergence of AI technology challenges the boundaries between these two conceptions of the political, arguing that rather than "doing" the political in one sense or the other, AI tends to blur the distinction between the two and leads to new configurations of the political sphere. Finally, on the basis of that discussion, I reframe my initial question somewhat, to ask how politics would be shaped by a world of interpenetrated humans and AI, outlining three broad possible scenarios and discussing their implications.
Discussing AI and politics
To begin, I should state as a caveat that my consideration of AI will not focus on the technical specificities of the latest AI technologies, in which I am not an expert. Many considerations of the social ramifications of AI have become entwined in the finer details of when a system is "truly" AI, as opposed to machine learning, robotics, algorithms, or other kinds of technology (Kok et al. 2009).
While I am sensitive to the fact that such differences can be important to some debates, for current purposes I am using a broad, layman's definition of AI as a mechanical system that, owing to its advanced processing configurations or capabilities, is able to produce novel or surprising results that are not easily predictable by its users or programmers. Further, several of the points to be made will apply to advanced information-processing machines more generally. Whether such technologies are ultimately AI or simply unpredictably complex technologies may have some philosophical consequences, but for the current chapter, I am more concerned with imagining what such systems could potentially do socially, as opposed to their actual technological features. Humbly ignorant on the latter point, I must limit my argument to an idea of AI that is schematic and stereotyped, but hopefully useful for making my broader point.
Humans as political animals
The Aristotelian statement that "man by nature is a political animal" (Aristotle, Pol. 1.1253a) goes on to characterize human beings as distinct from animals in their capacity for speech and thus their command of morality. Continuing, Aristotle argues that "the city-state is prior in nature to the household and to each of us individually". At least on the face of it, therefore, the idea of the political animal is linked both to the capacity for speech and moral judgment, and to the part-whole relationship of the individual to society. For, as Aristotle goes on to explain, the individual outside of the political bond ceases to function, as the name of a part removed from the whole loses its sense, and the self-sufficient human "must be either a lower animal or a god". In this sense, the political nature of the human is not merely a capacity in the sense of individual intelligence or skill but is something conferred by belonging to and participating in collectives. At the same time, because the political aspect is related to speech, the "rational animal" and "political animal" characterizations of humans are bound up with each other. This interweaving of the rational and the social capacities of humans makes it difficult to understand how an artificially intelligent system could assume such a form of being, because the capacities involved seem to point beyond a purely calculative capacity. Thus, it is questionable whether the ability to process data and, on that basis, optimize algorithms would allow a system to simulate or generate the kinds of properties implied in such a conception of the human, or to invent a unique version of them appropriate to its own form of socially intelligent being.
Conceptions of the political
The question of how to assess if and how an AI system could do politics may require exploring with more precision the ways in which "doing politics" can be manifested. Even the Aristotelian characterization mentions both common activities such as eating and speaking together, as well as being part of the
polis, a combination of micro-interactive activities with a more "structural" account of part-whole relations. The wide variety of social configurations that can be described as "political" makes this task difficult, and it would be futile to attempt an exhaustive or all-inclusive definition of the political. Yet, by giving a (non-exhaustive) characterization of a few of the relevant ways of understanding the concept, it may be possible to see, in broad outlines, the challenges with which an AI system would have to deal in order to do politics, and to understand where the limits of such a system could lie. One way to make such a characterization would begin from the conceptions of politics that are most obviously based on power and coercion, a view of politics as social control (cf. Lukes 2005/1974), and move progressively towards more participative conceptions, and even politics as collective resistance or rebellion against control. Intuitively, asking whether an AI system could effectively control social behaviors through technical means is very different from asking whether it could participate in a democratic debate or town hall meeting, or even engage in a protest, demonstration, or act of civil disobedience. So, providing an overview of putative conceptions of the political can help specify the possible parameters of such a query.

Political as rule through power
In this most unsubtle form of politics, interpreted as "rule", the political is defined as that which directs or governs (i.e. "govern" from kybernetes = steersman) the collective, mobilizing resources from a central agent, which could be a person, a party, or perhaps a technical system (cf. Wiener 1948). Because politics in this case is defined through its effects, the nature of the intentionality behind political effects is immaterial, and politics is tantamount to the exercise of power. The forms of power in such cases can vary across systems, from the coercive effects of the threat of punishment, to the direct use of police force, to the "architectural" power of structuring spaces and technologies to allow or disallow certain activities. In each of these cases, however, what is excluded from politics is the co-creation of meanings through discussion or argumentation, processes that are either considered part of a separate "cultural" sphere or, in the case of totalitarian systems, excluded entirely. What is left is a form of architecture of power that seems compatible with a purely mechanical view of politics, as will be developed below.

Political as administration/governance
Related to but distinct from "raw" power, politics as administration involves the steering or governing capacity backed up with what could be called the "rule of law", that is, an ideological layer through which power is enacted as legitimate; in such a system, politics may continue to operate with the tacit
possibility of coercive power while claiming legitimacy in the everyday interface between the ruler and the ruled (cf. Weber 1958). This form of politics has a hegemonic quality, in that it seems to form coalitions or alliances to coordinate collective action while retaining asymmetric access to coercion or resources within such coordination (Anderson 2017). The technocratic variants of such administrative systems may find in automated systems a strong support for legitimacy claims, aligned with purported values of objectivity or rational calculation that may be attributed to information systems. Thus, while information systems in the previous conception of politics may operate through the designed architecture of automated systems, in the administrative frame they may additionally work through systems of calculation, automated decision making, and analytical procedures, which may not be transparent to the human actors involved (or not) in their operation.

Political as collective praxis and will formation
In contrast to the above political conceptions, whose main features involve "effects", either direct or legitimacy-mediated, another conception of politics locates its nature in the processes of collective action and coordination (Cecez-Kekmanovic et al. 2002). Sometimes referred to as "collective will formation", this conception shares with the administrative view the idea that convergence around a given conception of social conventions and norms is a basis for political life. In this view, the zoon politikon is political because of its collective capacity to decide things together, rather than its ability to converge around top-down directives. Yet the notion of convergence remains central to politics, and the construction and maintenance of unity is a paramount political goal. While such a conception seems, because of its focus on the co-construction of society by its members, more democratic than the previous two conceptions, it is not for that matter more pluralistic; collective will formation shifts the locus of decision making onto distributed agents while retaining the demand that these agents collectively form a unified actor or people, whether as a nation, a community, or a tribe. Armed with decentralized information technology, inputs from such a distributed will-forming agency could boost its convention-forming processes in mechanically mediated ways.

Political as communicative action
Drawn from a tradition of communicative and discourse ethics (Apel 1980; Habermas 1990), communicative action begins with the ideal of collective will formation aiming at consensus, although consensus is conceived of as a "regulatory ideal" for actors, rather than a precondition for politics. While the difference between the two may seem slight, it is important, because while collective will formation evaluates politics by the extent to which it can secure consensus, communicative action is more "procedural" in that it evaluates the
communication process in terms of its orientation towards mutual comprehension and truth-seeking. In that sense, achieved consensus would be a felicitous outcome of such processes but not a criterion for success. Moving between a critical concern with the preservation of a democratic public sphere and a concern with truth-seeking drawn from American pragmatism (e.g. Dewey 1919), Habermas' concern is the maintenance of a "lifeworld" built around communicative activity, but threatened by an overarching "system" of techno-governance mechanisms whose consensus-imposing regulations do not respect the autonomy of communicative processes. Whether such a lifeworld, and its accompanying communicative processes, can build a public sphere in an era of digital information technology and increasingly computer-mediated communication is an area of intense scholarly interest (cf. Boros and Glass 2014).

Political as recognition order
Moving steadily along the continuum to more "lifeworld"-oriented processes, the culmination of such a view of politics as social and relational would see individuals as mutually constituted by their common interaction and recognition (Honneth 1995). While the communicative view of the public sphere imagines collectivities arising out of the communicative processes of truth-seeking and public reason, a recognition view would deepen and extend that view to the affect-laden interactional processes of mutual regard (Honneth 1995). Drawing upon a dialectical view of the self as constituted through mutual recognition, this micro-interactional view of collective activity scales up to social structures through the construction of "recognition orders", institutional structures in which recognition norms are fixed and social identities are concretized through repeated recognition practices (Honneth 1995). Drawing a distinction between the more reason-oriented discourses of communicative action and the more self-forming processes of recognition is important to the question of information technology because it raises the issue of the behavioral limits of the machine. The question of whether a machine, even in its manipulation of symbols, can "speak" in the full sense has been famously posed by Searle (1999). Yet as technology advances, it becomes increasingly difficult to argue that communicative processes such as argument formation, opinion adaptation, and problem-solving are entirely unreachable through computation. At the same time, it is much more difficult to argue that the processes of subject formation through mutual regard are achievable in conjunction with information technology, even as the more "cognitive" aspects of facial recognition, identification, and surveillance become increasingly central to the activities of the zoon politikon (Zuboff 2019).

Political as living with difference/dissensus
The above concerns with mechanical forms of control versus "lifeworld" processes all take place within the horizon of something called society, which
Aristotle would have identified with the city-state or polis and which provides the "whole" within which the parts take their meaning and form. However, it is also possible to imagine the political in terms of the co-existence of difference, of living together despite the lack of a unified common horizon. Recent theorizing in "radical" democracy, for example, has identified the notion of dissensus, rather than consensus-seeking, as the political practice par excellence (Rancière 2015). Concerned with the promotion of a vibrant public sphere and emphasizing pluralism, such views see politics not in the achievement but in the avoidance of totalizing projects that would close off political possibilities (e.g. Mouffe 2004). Affirming that the institutionalization of political life tends to silence alternative voices in the project of establishing consensus, radical democratic approaches attempt to maintain the openness of such spaces (Rhodes et al. 2020), favoring democratic institutions that do not merely refrain from undermining such spaces but positively seek to preserve their democratic quality. While this paradoxical system-maintenance through openness may seem difficult to imagine in a mechanistic format or logical program, it may have some relation to the self-correcting aspects that characterize advanced AI systems.

Political as protest
Finally, continuing with the logic of opposition, the notion of protest has been treated as political behavior par excellence. Taking the notion of dissensus as a starting point, the political in this conception is not what happens at the center of a unified system, but precisely what acts from the margins and takes the center as its object of action (Laclau 1996; Mouffe 2004). Although this could be considered an extension of the idea of radical democracy and dissensus, it is worth distinguishing protest specifically because of its emphasis on praxis, or action based on a critical consideration of the social order (cf. Foster and Wiebe 2010). To the extent that it is a form of praxis, protest requires both critical consciousness of and active intervention in collective reality. While the "power" framing of politics also focuses on intervention in the world, its emphasis on effects is somewhat different from the idea of practices, and it does not contain the key critical element of protest. In that sense, protest is more than just power-from-below; it combines the idea of concrete intervention with the manifestation of a lifeworld of meanings whose processes of self-reflexivity are grounded through struggle. This combination of consciousness and concrete struggle seems difficult, from the current moment, to imagine as exhibited by a mechanical system. But by taking the diverse conceptions of the political into account as a whole, it may be easier to understand what is at stake in such an assessment, and it is to this consideration that I turn next.
Three criteria for political activity
The exploration developed above around ways to conceive the "political", although frustratingly brief in the context of such a monumental issue, is
useful for specifying some preliminary concepts that can help in assessing the possible roles of AI in "doing politics". First, the conceptions differ in how much they define politics as a functional system – that is, on the basis of interactions between causes and effects leading to outcomes – versus defining politics as "social" – that is, composed of relational practices and communication involving systems of shared meaning that are built through interaction. This distinction draws very close to classical distinctions between "system" and "social" integration (Lockwood 1964) that have been deeply elaborated elsewhere (e.g. Archer 1996; Domingues 2000; Habermas 1981). Depending on how this distinction is conceived, it refers to the systemic and social aspects of collectives as either ontologically distinct, separable methodologically, or linked through temporal separation (cf. Archer 1996 for a comparison of these positions). Moreover, as this distinction was adapted by Habermas (1981), it maps onto a difference between functionalist "steering" mechanisms (based on power) and lived social experience (based on meaning). Seen from that angle, the distinction between the different visions of politics comes down to a distinction between politics considered "from the outside" as a mechanical system, and politics seen "from the inside", that is, within a horizon of social experience.

Yet the limits of this "system and lifeworld" view of politics are evident from the dissensus and protest variations, which seem oriented neither to system integration nor to the construction of common meanings, but to the sustained ability to maintain spaces of political openness in the face of totality or closure, whether that closure derives from functional systems or from collective meanings. In this sense, the three visions of politics underlying the above catalog of possibilities involve two forms of potential convergence – through functional systems and through shared meanings – as well as actions oriented toward divergence, on what could be called an "agonistic" principle as embodied in dissensus and protest activities. The manner in which these three forms of politics coexist and relate can be supposed to vary historically, as can their relations to each other as mutually supporting, complementary, or opposing. Yet, although this brief presentation does not allow a full argument to develop the point, we can suggest that to say that politics is being done involves assessment by these three criteria in some form, taking the following questions as guides: does the activity involved tend toward influencing a social system through mechanical or "steering" mechanisms (system)? Does the activity involved contribute to the development of meanings or practices that shape shared experience (lifeworld)? Does the activity involve influencing spaces for the ongoing proliferation of differences, or force openings in the system or lifeworld that can support ongoing change (agonism)?

How these criteria are weighted, or to what extent they are all equally necessary, has not been established in this argument. However, I am supposing that, regardless of their relative weight at any historical or conjunctural moment, some aspect of each is needed to call a phenomenon "political". If an activity shapes meaning but has absolutely no effect on the polity, I
propose that this is not political activity. Similarly, if it affects the polity in a way that is totally outside of the lived reality of the participants, neither would this be political. Finally – and perhaps more controversially – if an activity shapes the shared world of the polity but leaves absolutely no residue of ongoing possibility for future, continuing political activity, I do not consider this political either. Such activity would in fact be depoliticizing, or the end or closure of politics as such, in this view. From this set of criteria, it is possible to explore whether AI can “do politics” by examining the systemic, social, and agonistic possibilities of AI, to which I turn in the following section.
AI as a political actor

We are now in a position to ask whether it is possible for a mechanical system such as AI to act politically in the ways described above. As a first observation, many of the ways in which the “datafication” of social life has been discussed seem to throw into doubt some of the distinctions between system and lifeworld, as well as those between political closure and opening; indeed, information technology may support both of these aspects of datafication (e.g. Islam 2021). A closer look would seem to be warranted. First, as some have described it, the mediated public sphere that, according to Habermas, serves as the platform from which contemporary lifeworlds are built (Habermas 1992) takes radically different forms in the light of information technology, some of it using self-learning algorithms to sort and deploy data and even create media output (Greenfield 2017). While Habermas (1992) already noted that mass media systematically distorts the lifeworld, the concentrating forces of mass media are increasingly displaced by the fragmenting dynamics of social platforms, which replace conformity with individual targeting based on big data analysis (van Dijck 2014). In this sense, such media, rather than constituting a “colonization of the lifeworld” by the system (Habermas 1992), resemble much more a fragmentation or dissolution of the lifeworld. At the same time, some have argued that this very fragmentation is politically fertile in that it can create new possibilities for agonism and dissensus, showcasing and amplifying political voices that would previously have been shut out of mass media (Kostakis 2018). Yet, to the extent that the entry of such voices is not oriented toward the ongoing renewal and reconstruction of polities, but rather demonstrates the removal of all such possibilities for construction, it is difficult to interpret this fragmentation as dissensus or protest in the political sense described above. Mere fragmentation is not the same as political agonism. Given that AI systems involve ongoing self-improvement through feedback processes based on exposure to data inputs, when such systems are exposed to social information, it would be expected that they increasingly adapt
themselves to respond to that input, as well as to act upon the environment on that basis. Such adaptation and feedback appear similar to developing shared “meanings” in one sense, because the system is developing categories that are revisable in relation to the categories of other social actors. Further, the ability of AI to affect other social actors seems beyond doubt, since the “functional” aspect of politics is the most machine-like anyway. However, whether such similarities are merely the success of AI in producing a simulacrum of political activity, or whether this activity is political in a more thoroughgoing sense, requires further inquiry, specifically concerning the unique and complex relation between the social and agonistic aspects of political activity. Specifically, it is important to consider these two criteria together because of the paradoxical ways in which they seem both to reinforce and oppose each other. Convergence around meanings seems to foreclose ongoing agonism, while the latter seems to block cultural consensus. Yet what makes these meanings not simply categories or definitions, but parts of a lifeworld, is that they are categories for somebody, whose standpoint towards them is that of a first person. One consequence of this meaning-for is that each meaning is experienced as something that could have been or could be different, that is, as the positive moment of something whose negative moment is always implicit as a part of that experience. In that sense, the notion of difference or dissensus is always implicit in the construction of a shared lifeworld, and even when the two terms stand opposed within the ongoing activity of agreement and disagreement, their opposition creates the frame from within which each can take shape. Compare this kind of polity-building activity to the activity of an automated system, whose actions are to receive and process data and adapt on the basis of those data. In the most advanced versions of such systems, they would even be able to “re-write” themselves on the basis of such data. On that basis, the automated system could certainly have power-shaping effects on a polity. However, because the categories built from those data sets are the results of information processing rather than sense-making, there is no longer an internal relation between the data input in such a system and the possibility that such data could be “otherwise”. Put another way, because a computerized system only has the positive moment of data without the negative moment of difference, it cannot “imagine” in a way that grounds political creativity (cf. Castoriadis 1987/1975), a process that requires both a moment of positive institutionalization and a moment of potential negation of that institutionalized category. To summarize, AI and other information systems’ capacities to exhibit political activity may depend on which aspect of that activity is highlighted, and the extent to which that aspect can be supported by the functional, information-processing capabilities of information technology. This compatibility is most obvious in the power conception of politics, in which there is little reason to believe that a complex information processor could not make
strategic political decisions and initiate applications based on those decisions, much as artificial intelligence systems have excelled at strategic games like chess and Go. Concepts of politics around shared meanings and practices may seem achievable by computerized systems to the extent that these “cognize”, but insofar as information processing and meaning-making differ in fundamental ways, there remains a gap that is as difficult to close as it is to articulate. Nevertheless, AI and other information technology are fundamental supports for meaning-making, as the discussion of the mediatized public sphere made clear. If machines cannot themselves “think”, they can still be essential “to think with” (Islam 2015), and so the importance of AI technologies for this aspect of politics should not be underestimated. Similarly, although the idea of dissensus seems radically opposed to the problem-solving functionality of an AI system, the ability of AI to maintain a continuous learning algorithm may not be so far off the idea of radical democratic openness, and so in principle this aspect of politics does seem compatible with AI. However, as with lifeworld politics, AI might be more interesting to understand as a support for, and enmeshment in, political life rather than as an agent per se, and it would be premature to move from skepticism about AI as a political actor to discounting the role of AI as an increasingly central component of political action.

Changing the (political) subject

Given this distinction between doing politics and supporting political action, it may be pertinent to shift the question of “Can AI do politics?” to the somewhat broader question, “What would politics look like in a world interpenetrated by humans and AI?” This slight change in the subject of political activity can help to emphasize that in a world populated with AI and similar technologies, the question of who exactly is “living” political action from the point of view of a subjective lifeworld may be increasingly moot, as the relevance of that subjective standpoint changes with regard to how far it is a necessary or sufficient condition for the shaping of the social world. For instance, in a world of “boosting” algorithms on social media networks, a single actor or small group can have an outsized impact on shaping the lived experience of large groups by amplifying their messages through political bots, organized in networks, “liking” and reposting their communications (Woolley and Howard 2019). Conversely, a shared horizon of meanings may mean little in terms of political efficacy when those meanings have been centrally produced and distributed by powerful firms and targeted according to group membership to play to the subjective biases of group members. In such cases, experience, rather than being an autonomous source of political agency, appears more as a vehicle for ideology (cf. Jay 2005). AI and the capacity of big data analysis to target messages to individuals and groups are precursors to both of these phenomena, and regardless of the “intelligence” of AI in this context, such analyses allow processes of subject formation that would be impossible otherwise.
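The asymmetry such amplification creates can be made concrete with a deliberately crude sketch. The following toy calculation is an editorial illustration rather than anything drawn from Woolley and Howard (2019); every quantity in it (number of bots, followers per account, repost rates) is a hypothetical assumption chosen only to exhibit the mechanism.

```python
# Toy sketch of bot-driven amplification. All parameters are hypothetical;
# the point is only that reach scales with the amplifying network, not
# with the number of humans who originally held the view.
import random

random.seed(1)

FOLLOWERS = 50        # assumed average audience per account
BOTS = 200            # bots coordinated around a single actor
HUMAN_REPOST_P = 0.1  # assumed chance an exposed human reposts

# An ordinary user posting once reaches only their own followers.
ordinary_reach = FOLLOWERS

# The coordinated actor reaches their own followers, plus the audience
# of every bot repost, plus occasional human reposts that "anchor"
# the amplified message in real carriers.
coordinated_reach = FOLLOWERS
for _ in range(BOTS):
    coordinated_reach += FOLLOWERS          # bot repost
    if random.random() < HUMAN_REPOST_P:
        coordinated_reach += FOLLOWERS      # human repost of the bot

print(f"ordinary user reach:     {ordinary_reach}")
print(f"coordinated actor reach: {coordinated_reach}")
print(f"amplification factor:    {coordinated_reach / ordinary_reach:.0f}x")
```

On these invented numbers a single actor outweighs a couple of hundred ordinary users; the figures are arbitrary, but the structural point stands: lived political experience can be shaped by whoever commands the amplifying infrastructure.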
With this slightly shifted question, I conclude this discussion by examining some of the possibilities for how “doing politics” is transformed by the deep penetration of AI and similar information systems into social life. These possibilities involve a dystopian thesis of a unidirectional “colonization” of the political by mechanical systems; a division of labor in which the political sphere is partitioned into human and non-human components; and a hybridization thesis in which each of the constituent parts of that division – human and machine alike – is transformed in this process into something new altogether.
Doing politics with AI: three scenarios

The first possibility I would like to consider is in some ways both the simplest and the most dystopian, and follows from the Habermasian (1992) “colonization of the lifeworld” thesis. In this scenario, the functional–instrumental aspect of technological systems becomes increasingly embedded in the sphere of public reasoning and social communication, thus increasingly converting the social bases of politics into matters of technical governmentality and administration. In this scenario, AI technology increases the technical efficacy of administrative systems, removing the uncertainty of “human” communication and replacing it with technical solutions. These AI systems may be “black boxed”, meaning that the processes by which particular administrative decisions are made may be opaque to observers and even programmers, who are unable to trace the exact learning trajectory made by a particular system or algorithm. This untraceability, intrinsic to strong AI systems, effectively takes decision making out of the sphere of rational debate and communicative action; an optimized decision made by such a system may resist scrutiny and appear as a “take it or leave it” decision whose logic may be difficult or impossible to reconstruct. In the case of black-boxed AI systems with decision-making authority over ever-larger areas of social life, the risk is that of developing an entirely administered society in which politics in the “thick” sense of public reasoning and deliberation is replaced by public “management”, that is, the automated distribution of goods and services across populations. The power conception of politics becomes the dominant one, and the “human” in the traditional sense is displaced from the center of the governance model to a position that requires tending and perhaps care, but is no longer a locus of political agency. In a (slightly) less dystopian vein, the displacement noted above may indeed occur, but lead to the division of political tasks rather than the colonization of the political as such. Thus, the second possibility is the “division of labor” thesis. In this scenario, the political sphere is increasingly divided into activities requiring calculation and optimization, which become delegated to information-processing systems. What is left after the subtraction of the calculative are issues of “values”, political ends, and preferences, which remain human. Yet the scission between the calculative means and the value-oriented
ends, and their division into different centers of activity, changes the nature of both. Specifically, states of end value are increasingly disembodied and considered as abstract values, rather than as situated and contingent political states of a community, while means are increasingly instrumental and lose the common sense arising from their embedding in human networks of etiquette, practice, and interactional norms. The result is a world of computer-generated solutions to human preferences that themselves seem alien to contexts of actual practice, and whose ultimate legitimacy is difficult to establish once they are removed from the specific forms of life from which they originally took meaning. The division of labor thesis, by dividing the political sphere into technical and “human” sub-spheres, may give rise to what I refer to as “secondary politicality”, borrowing and modifying a term used by Ong (1982). Ong, in discussing the emergence of writing from oral cultures, notes how what the oral does and means changes in literate societies (cf. Dean 2016). Ritual, documentation, and social structuring communication – much of that which is “formal” in social life – become delegated to writing, leaving orality as an “informal” source of friendship, socialization, and communication outside of official circles. Thus, orality itself is transformed (and impoverished) by losing much of its functionality to writing technology. One possibility is that the delegation of increasing swaths of political agency to AI systems will leave human political activity in the derivative position of being a quaint, old-fashioned, and endearing vestige of politics, bereft of the social structuring role it previously had. In this secondary politicality, the nature of political action as such will be shifted by having such a large part of it absorbed into another action sphere. An alternative possibility for secondary politicality is that the freeing of human agency from the calculative exigencies of complex public administration could ground a deepened, more idealistic form of politics in which final ends are seen as more central to the political project. These two possibilities are not mutually exclusive, as the deepened and idealistic forms of human politics, divorced from real effects, may continue in ivory towers of political contemplation whilst a largely automated AI infrastructure continues unperturbed by their pontifications. The third possibility I consider is what I call the “hybridization thesis”, which involves a vision of political activity in which mechanical and human elements are entangled in hybridized “cyborg” practices (e.g. Haraway 1985; Wolfe 2010). In this scenario, rather than a division of action spheres into human and machine specialization, micro-practices of politics are achieved through a combination of human and mechanical elements, such that it is difficult to empirically distinguish the contributions of each actor. In fact, the entangled nature of such activities might mean that human and machinic aspects could be meaningless when taken separately, and only constitute action in their combined form. For instance, the deployment of automated “bots” in diffusing political messages tends to combine human and non-human
messaging sources, circulating messages between political “botnets” until they are picked up and boosted by human users, effectively turning “fake” messages into “real” messages as they become anchored in unsuspecting human carriers (cf. Woolley and Howard 2019). Similarly, AI systems made to work and interact with humans may lead to adapted human behavior as humans form attachments to, and learn to interact with, machines (Turkle 2011). Human–machine interactive systems, from surveillance and military systems to algorithmically powered social networks, change the infrastructural bases of human interaction and thus shape human ways of life (Greenfield 2017). In contrast to the colonization thesis, in this post-human scenario it is difficult to ascertain whether humans have been mechanized or machines have been humanized; new terminologies would likely emerge to replace these modern-era concepts.
Conclusion

I began by asking whether AI can do politics, in order to explore a dimension of the human that is often left out of AI-human comparisons: the political being of humans. The attempt to delineate the multiple facets of that concept, however, laid bare several competing conceptions of the political and its relation to the human. These differing concepts, further, held different prospects for the possibilities of AI. This observation established a starting point for exploring the plurality of possible AI-politics relations that could emerge in a world in which humans and machines interpenetrate in the political sphere. Which of these futures, if any, comes to pass is an open question, but what is certain is that human politics will be deeply affected by the emergence and shape of information technologies, and the shape it takes will depend to a large extent on the political imaginaries that are realized in the design and implementation of AI systems. Rather than thinking of these systems only as increasingly sophisticated information processors, it is necessary to consider them in their broader social and political roles, and to reflect on their design and scope with these roles in mind.
References

Anderson, P. (2017). The H-word: The peripeteia of hegemony. London: Verso.
Andreou, A.S., Mateou, N.H. and Zombanakis, G.A. (2005). Soft computing for crisis management and political decision making: The use of genetically evolved fuzzy cognitive maps. Soft Computing, 9(3): 194–210.
Apel, K.O. (1980). Towards a transformation of philosophy. London: Routledge & Kegan Paul.
Archer, M. (1996). Social integration and system integration: Developing the distinction. Sociology, 30(4): 679–699.
Aristotle. Politics, 1.1253a.
Boros, D. and Glass, J. (eds.) (2014). Re-imagining public space: The Frankfurt school in the 21st century. London: Palgrave Macmillan.
Cantwell Smith, B. (2019). The promise of artificial intelligence: Reckoning and judgment. Cambridge, MA: MIT Press.
Castoriadis, C. (1987/1975). The imaginary institution of society. Cambridge, MA: MIT Press.
Cecez-Kecmanovic, D., Janson, M. and Brown, A. (2002). The rationality framework for a critical study of information systems. Journal of Information Technology, 17: 215–227.
Dean, J. (2016). Faces as commons: The secondary visuality of communicative capitalism. Available at: onlineopen.org/download.php?id=538 (accessed 11 February 2019).
Delvaux, M. (2017). Draft report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN (accessed 27 Mar 2020).
Dewey, J. (1919). Reconstruction in philosophy. New York: H. Holt & Company.
Domingues, J.M. (2000). Social integration, system integration and collective subjectivity. Sociology, 34(2): 225–241.
Foster, W.M. and Wiebe, E. (2010). Praxis makes perfect: Recovering the ethical promise of critical management studies. Journal of Business Ethics, 94: 271–283.
Graham, G. (2010). Behaviorism. In E.N. Zalta (ed.), The Stanford encyclopedia of philosophy. Stanford, CA: Stanford University Press. Retrieved from http://plato.stanford.edu/archives/fall2010/entries/behaviorism.
Greenfield, A. (2017). Radical technologies: The design of everyday life. New York: Verso.
Habermas, J. (1981). Modernity versus postmodernity. New German Critique, 22: 3–14.
Habermas, J. (1990). Discourse ethics: Notes on a program of philosophical justification. In Habermas, J., Moral consciousness and communicative action, pp. 43–115. Cambridge, MA: MIT Press.
Habermas, J. (1992). The theory of communicative action. Vol. 2: Lifeworld and system: A critique of functionalist reason. Cambridge: Polity Press.
Haraway, D. (1985). Manifesto for cyborgs: Science, technology, and socialist feminism in the 1980s. Socialist Review, 80: 65–108.
Honneth, A. (1995). The struggle for recognition: The moral grammar of social conflicts. Cambridge: Polity Press.
Insa-Cabrera, J., España Cubillo, S., Dowe, D.L., Hernández-Lloreda, M.V. and Hernández Orallo, J. (2011). Comparing humans and AI agents. In Schmidhuber, J., Thórisson, K.R. and Looks, M. (eds.), Artificial general intelligence. AGI 2011. Lecture Notes in Computer Science, vol. 6830. Berlin and Heidelberg: Springer.
Islam, G. (2015). Extending organizational cognition: A conceptual exploration of mental extension in organizations. Human Relations, 68(3): 463–487.
Islam, G. (2021, in press). Business ethics and quantification: Towards an ethics of numbers. Journal of Business Ethics, onlinefirst.
Jay, M. (2005). Songs of experience: Modern American and European variations on a universal theme. Berkeley: University of California Press.
Kok, J.N., Boers, E.J., Kosters, W.A., van der Putten, P. and Poel, M. (2009). Artificial intelligence: Definition, trends, techniques, and cases. Artificial Intelligence, 1: 1–20.
Kostakis, V. (2018). In defense of digital commoning. Organization, 25(6): 812–818.
Laclau, E. (1996). Emancipation(s). London: Verso.
Levy, D. (2007). Love and sex with robots: The evolution of human-robot relationships. New York: HarperCollins.
Lockwood, D. (1964). Social integration and system integration. In Zollschan, G. and Hirsch, W. (eds.), Explorations in Social Change. London: Routledge.
Lukes, S. (2005/1974). Power: A radical view. London: Palgrave.
Mouffe, C. (2004). Pluralism, dissensus and democratic citizenship. In Inglis, F. (ed.), Education and the Good Society, pp. 42–53. London: Palgrave Macmillan.
Ong, W.J. (1982). Orality and literacy. New York: Routledge.
Orr, D. (2002). The nature of design: Ecology, culture, and human intention. New York: Oxford University Press.
Rancière, J. (2015). Dissensus: On politics and aesthetics. London: Continuum.
Rhodes, C., Munro, I., Thanem, T. and Pullen, A. (2020). Dissensus! Radical democracy and business ethics. Journal of Business Ethics, onlinefirst.
Rubin, C. (2003). Artificial intelligence and human nature. The New Atlantis, 1: 88–100.
Sack, W. (1997). Artificial human nature. Design Issues, 13: 55–64.
Searle, J. (1999). The Chinese room. In R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
Thrun, S. (2004). Toward a framework for human-robot interaction. Human-Computer Interaction, 19(1): 9–24.
Tuncel, A. (2012). The ontology of Zoon Politikon. Synthesis Philosophica, 27(2): 245–255.
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance and Society, 12(2): 197–208.
Weber, M. (1958). The three types of legitimate rule. Berkeley Publications in Society and Institutions, 4(1): 1–11.
Wiener, N. (1948). Cybernetics or control and communication in the animal and the machine. Cambridge, MA: MIT Press.
Wolfe, C. (2010). What is posthumanism? Minneapolis: University of Minnesota Press.
Woolley, S.C. and Howard, P.N. (2019). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford: Oxford University Press.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.
9 Inhuman enhancements? When human enhancements alienate from self, others, society, and nature

Ismael Al-Amoudi
When human enhancements impede human flourishing

Human enhancements (hereafter HEs) enhance, by definition, one or several human powers. All other things being equal, they constitute an enhancement of the human condition. The problem that interests us in the present paper is, however, that while HEs may enhance certain powers in their carrier, they may also surreptitiously set back other powers that are essential to being human. Whether any HE constitutes on the whole an improvement or a setback for human flourishing depends on the nature of the enhancement, on how it is used by its carrier, and on the enhancement’s social context of production and consumption. In principle, however, HEs may be said to be dehumanising when, all things considered, they set back human beings’ capacity to flourish. The conceptual move from human powers to flourishing to de/humanisation is not radically novel. It can be traced under various guises from Aristotle to Marx and more recently to realist philosophy (Archer 2000, Bhaskar 1998/1979, Collier 1999, Lawson 2014, see also MacIntyre 1999). I relied on it in the first volume of the Future of the Human book series when I discussed the dehumanising tendencies of neo-liberal managerial practice and theory. I mobilise it again here, though with special emphasis on people’s ability to flourish (or wither) through their interactions with themselves, with others, with society and with nature (c.f. Bhaskar 2008/1993). I pursue a two-fold purpose in discussing how HEs may generate setbacks for humans’ capacity to flourish. On one hand, I intend to offer a reasoned critique of the potential side-effects of HEs as they are currently being invented, produced and (destined to be) consumed. On the other hand, I intend this discussion to teach us something about human nature in an epoch described by many observers as post-human (Hayles 2008/1999). At any rate, my point is not to reject all forms of HEs en bloc. It is, rather, to provide a balanced appreciation of when and how devices and procedures intended to enhance humans in some ways may also harm them, in other ways, to the point of dehumanisation. My paper is structured in two parts. The first defines basic concepts and locates my argument within realist social theory, whereas the second mobilises
the conceptual bases laid down in the first part to discuss how HEs may alienate people from themselves, others, society, and nature.
Conceptual framework

Human essence: a broad, realist, and relational conception

The Future of the Human book series has investigated, from a broadly humanist and realist perspective, the significance of living in societies in which the boundaries of humanity are stretched, challenged, and transgressed more intensely than ever before. In the book series’ first volume, I took stock of the dehumanising tendencies of neo-liberal management, a vast ideological and organisational phenomenon that bears on many aspects of the early 21st-century lifeworld, including the production and consumption of HEs and AI machines. I argued that neo-liberal management bears dehumanising tendencies in three respects: it denies human flourishing by repressing the development of specifically human powers; it vilifies subalterns; and it automates social processes to the point of making humans gradually irrelevant. These three aspects or dimensions of neo-liberal dehumanisation are relevant for critically appreciating the foreseeable effects of HEs and AI. In previous volumes, I have examined with the help of John Latsis and Gazi Islam how HEs and AI could be dehumanising along the second and third dimensions of dehumanisation.[1] The present chapter’s observations and discussions complete the picture through a critical discussion of how HE may also be dehumanising along the first dimension, that is, through the denial of human flourishing and the repression of specifically human powers.

[1] I have explored the third dimension (automation of social processes) in the second volume of the book series and the second dimension (dehumanisation of subalterns) in the series’ third volume. In the second volume, John Latsis and I examined how reliance on AI algorithms creates unprecedented obstacles to moral discussion within human organisations (Al-Amoudi and Latsis 2019). In this respect, we examined how the production and consumption of artificial intelligence automates social processes of moral justification to the point of making humans irrelevant to decisions that affect their lives deeply. The third volume of the series was dedicated to exploring possible, and plausible, post-human futures. Gazi Islam and I decided to address a broad moral question that is not pressing at present but that may become so in the foreseeable future: how does the advent of HE technologies affect the moral obligations of unequally enhanced humans towards each other? In doing so, we explored and discussed how novel forms of subaltern vilification may emerge from HEs, especially in the context of neo-liberal societies (Al-Amoudi and Islam 2021).

As in my previous contributions, I rely on a somewhat vague concept of “human nature”. On one hand, I insist on using this expression and claim it is right to do so for the sake of theoretical clarification and tactical political struggle against potentially “dehumanising” tendencies of neo-liberalism and human augmentation (see esp. my chapter in Vol. 1, Al-Amoudi 2019). On the other, I do not venture a positive definition of human essence, let alone a list of necessary and/or sufficient characteristics. Indeed, Margaret Archer and Andrea Maccarini’s introduction to the present volume is quite convincing about the futility of attempting definitive definitions of human essence, whether they are based on creationism, speciesism, sortals, capabilities, or dignity (Archer and Maccarini, this volume). The attempt to list essential and exhaustive human characteristics seems to fail for two reasons. Firstly, because explicit lists of essential characteristics are open to counter-examples and inconvenient limiting cases (e.g. human hands with six fingers). Secondly, because the human attributes that matter axiologically may, in principle, be shared by other definitely non-human beings. Think for instance of Ali-the-robot in Archer’s chapters (Archer 2019; 2021) and of extra-terrestrials in Porpora’s (2019). In both authors’ papers, the discussion is not centred on the concept of a human but on that of a person. A person may be human or otherwise but is characterised by three essential features: a first-person perspective, an ability to identify their own concerns, and an ability to reflect and act upon the latter. Archer’s and Porpora’s papers have something to teach us about the human condition in the age of HE because they question how the latter bears on personality, that is, on a human feature that is widespread and valuable in human beings, without the need to decide whether all and only humans can be called persons. But within the Centre for Social Ontology’s writers’ collective, discussion of personality has not been the only way of interrogating the specific features of human beings. Others, such as Pierpaolo Donati and Andrea Maccarini, have explored another route that is different without being incompatible. They focus the discussion on human beings’ ability to engage in specific types of relations with others. Broadly inspired by Buber, their analysis puts human beings’ relationality at the centre of the picture. Doing so allows Maccarini to question the self-defeating post-human quest for perfect relationships (Maccarini 2021), while for Donati it inspires a characterisation of human essence as “an indefinite re-entry of its relational distinctions” (Donati, in this volume). In the present chapter, my discussion of HE and human nature displays similarities with the aforementioned CSO authors. Like the four of them, I examine what HE teaches us about human nature by discussing the former’s significance for human features that are widespread and valuable. My focal point, however, is different from both personhood (Archer 2019, 2021; Porpora 2019) and inter-subjective relationality (Donati 2021; Maccarini 2021). Instead, I discuss human beings’ ability to engage in (eudaimonic) relations with themselves, with others, with society, and with nature. In the next section, I explain how this distinct approach is justifiable from a critical realist perspective.

Realist assumptions

Acting in a social cube

My discussion of the potentially dehumanising effects of so-called HEs is organised following four dimensions, also known after Bhaskar (1993) as the
“social cube”. The “social cube” refers to the idea that human agency necessarily involves relations of re/production in relation to oneself, to others, to society, and to nature at large. The expression was coined by Bhaskar (1993). As he has it:

Four dialectically independent planes constitute social life, which together I will refer to as four-planar social being, or sometimes human nature. These four planes are (a) of material transactions with nature; (b) of inter-personal intra- or inter-action; (c) of social relations; and (d) of intra-subjectivity. (Bhaskar 1993: 153)

The interchangeability of “social being” and “human nature” in the above passage should not surprise readers familiar with relational theories of society and humanity. To be human is precisely to be involved in inescapable, though alienable and degradable, relations with oneself, others, society, and nature. Thus, it is appropriate to ask how HE is dehumanising and how it alienates humans along the social cube’s four dimensions. But my usage of the concept of social cube is heuristic rather than explanatory. As with most meta-theory, the social cube explains nothing on its own. It provides, however, an insightful intellectual template for organising investigations of social phenomena. If anything, it encourages researchers to consider social phenomena in their ontological complexity, and without the comfort of collapsing one dimension of being into the others (e.g. the individualist, collectivist, central–conflationist and idealist elisions criticised by Archer 1995, Bhaskar 1998/1979, and other critical realists). My discussion of HE in relation to each of the four planes of human social reality is not exhaustive. Moreover, while the social cube provides a useful analytical distinction between domains or dimensions of social reality, it is worth keeping in mind that social phenomena generally involve a combination of complex entities and processes that span across the four planes. Thus, I follow the social cube rather simplistically to organise my discussion of how HEs affect the ways human beings relate to the world. But I also keep in mind that the four planes of reality are seldom hermetically distinct from one another.

Psychic embodiment in a social cube

The ontological stance that informs my discussion of human de-enhancement considers the human psyche as both distinct from the human body and emergent from it. This ontological stance is also known as synchronically emergent powers materialism and is quite widely shared among (critical) realist authors. As Bhaskar and Norrie (1998) put it:

in the idea of “synchronic emergent powers materialism”, emergence involves the generation of new beings (entities, structures, totalities,
concepts) “out of pre-existing material from which they could have been neither induced nor deduced” (Bhaskar 1993, p. 49). (Bhaskar and Norrie 1998, p. 564, in-text reference modified)
Theorising the mind as an entity that is synchronically emergent from material bodies has implications both for my discussion of HEs and for the (realist) reading of Carl Jung on which I base some of my arguments (more on this below). Regarding HEs, if the mind is synchronically emergent from the body, then alterations to the latter can be expected to generate alterations to the former. It thus makes sense to interrogate how HEs of all kinds may also have (side) effects on the psyche of their bearers. Moreover, the theory of the mind’s synchronic emergence from the body is but a special case of the four-planar conception of social agency. The latter implies, indeed, that the mind is ontologically distinct from, though causally related to, material being, as well as to inter-subjective and role-based social relations. Thus, it is not merely HE tout court that interests us. Rather, we examine the psychic harms potentially stemming from HEs as they are produced and consumed in Late Modernity. But, as Collier (1999) astutely notes, the CR theory of synchronically emergent mental powers also provides a robust ontological foundation to psychoanalytical thinking, or at least to some realist approaches within psychoanalytical thought. Indeed, realist social theory has guided my reading of Jung and helped me integrate some of his insights into a coherent theoretical understanding. I present below, through a realist lens, a few ideas of Jung that influenced the present discussion of HE.

HEs as technologies: four characteristics

It goes without saying that HEs are technologies. But while all techniques and technological artefacts can be theorised as extensions of our human capabilities (C. Lawson 2010), the HEs that interest us in the present discussion are characterised by specific features that, taken together, distinguish them from other technologies and artefacts. I list below four characteristics that are distinctive of HEs as technological devices. Although there may be more, these four characteristics are helpful when discussing HEs in general. They are: mobility/portability; irreversibility; connection to the carrier’s body and mind; and embeddedness into complex systems of persons, artefacts, and institutions (c.f. Figure 9.1). HEs are portable or at least comprise a portable element that follows the (thereby) enhanced carrier. For instance, while a train does not constitute a HE, an electronic ticket associated with a chip injected into the passenger’s arm would count as one. Moreover, and relatedly, HEs are largely irreversible: they are either impossible to remove, or costly to remove in terms of money, risk, pain, etc. While a pair of glasses or a walking stick can be left at home or thrown away by their carrier, the same can’t be said about laser eye surgery
and about grafted prostheses. This feature is consequential as it indicates that strong path dependencies are to be expected in the domain of HE and that choices made today are likely to bear lasting consequences over time. If anything, this means that we can expect to witness the lock-in effects associated with path-dependent technological artefacts such as typewriters[2] and portable computer operating systems (see Al-Amoudi and Latsis 2014 for a realist discussion of lock-in effects). For this reason, too, it is vital to contemplate the potentially harmful effects of each novel HE before the latter becomes widely, and irreversibly, spread.

Figure 9.1 Characteristics of HEs:
– Mobility/portability
– Irreversibility
– Close connection to body, nervous system and mind
– Embeddedness into complex systems of persons, artefacts and institutions

[2] The QWERTY keyboard has become a paradigmatic example of path dependence. Although it was slower than alternative keyboards, QWERTY became the standard because it was too costly to retrain all proficient users on a different keyboard.
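The lock-in logic behind the QWERTY example can be sketched in a few lines of code. The simulation below is an illustrative toy rather than Al-Amoudi’s (or Arthur’s) own model: it simply assumes increasing returns to adoption, with a hypothetical squared-share choice rule standing in for real switching costs such as retraining, compatibility, and maintenance.

```python
# Toy sketch of technological lock-in under increasing returns to adoption.
# The squared-share rule below is a hypothetical assumption, not an
# empirical parameter: it makes the already-popular standard
# disproportionately attractive to each new adopter.
import random

random.seed(7)

def simulate(adopters=10_000):
    a, b = 1, 1  # two rival standards, on an equal footing at the start
    for _ in range(adopters):
        share_a = a / (a + b)
        # increasing returns: popularity attracts more than proportionally
        p_a = share_a**2 / (share_a**2 + (1 - share_a)**2)
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

for run in range(3):
    a, b = simulate()
    winner = "A" if a > b else "B"
    print(f"run {run}: standard {winner} locks in with "
          f"{max(a, b) / (a + b):.0%} of adopters")
```

Which standard wins differs from run to run, but each run ends locked in: once early chance choices tip the balance, the cost of reversal does the rest. Hence the warning above that choices about HEs made today are likely to bear lasting consequences.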
But HEs are not only portable and irreversible. They are also closely connected to our bodies, nervous systems, and minds. The above example of an intra-dermic chip holding an electronic ticket is a rather simple device that is literally inserted within our bodily envelope. But enhancing devices can be much more sophisticated and intimately connected to our bodies and minds. For instance, as I write in 2020, there already exist devices that allow paralysed persons to control a computer or a mobile phone through their minds. While we already know enough about how mental states can affect physical devices via neurological activity, we still know little, however, about how such physical devices can in turn affect our mental states. It is reasonable to expect, however, that the connection of physical devices to people’s central nervous systems is likely to be bidirectional rather than unidirectional. Hence, HEs are likely to generate new feedback loops from mind to device to mind again. Finally, HEs are embedded in complex systems of production, commercialisation, consumption, and maintenance. This characteristic is important for both ontological and political reasons. Ontologically, it means that it is an individualistic illusion to draw the boundaries of HE within the confines of the carrier’s body. While the latter may contain the device stricto sensu, the continuous operation of the device necessitates a network of other devices, persons, and institutions that extends far beyond the carrier’s body. Politically, it means that the freedom associated with the use of HE is double-edged. On one hand, the carrier’s freedom is enhanced to the extent that she becomes capable of actions that were otherwise impossible. But on the other hand, the carrier also becomes dependent on the whole network of persons, devices, and institutions that are needed for the enhancement’s operation and maintenance. Returning to our example of devices allowing handicapped persons to operate a computer, it is not clear what will happen the day the company operating the enhancement goes bankrupt, or when its marketing directors simply decide to stop providing maintenance in order to focus on newer devices that are presumably more profitable for the company.
When HEs become dehumanising

Keeping the four characteristics of HEs in mind, I now examine in turn how they can alienate their carriers from other people and from society, from themselves, and from nature.

Alienations from others and from society

HEs cannot be theorised independently of the social context within which they are designed, produced, and consumed. This context largely conditions which types of HEs get to be produced, which types of persons get to carry them, and how enhanced humans use their enhancements. As of 2020, HEs are being produced through ecosystems of largely private companies, including internet-based companies (GAFA, Baidu, Alibaba, Tencent, Xiaomi), but also pharmaceutical behemoths, industrial groups, venture capitalists, and many a scientist willing to spin off a start-up firm. As an enthusiastic observer puts it:

Transhumanism will soon emerge as the coolest, potentially most important industry in the world. Big business is rushing to hire engineers and scientists who can help usher in brand new health products to accommodate our changing biological selves. And, indeed, we are changing. From deafness being wiped out by cochlear implant technology, to stem cell rejuvenation of cancer-damaged organs, to enhanced designer babies created with genetics. This is no longer the future. This is here, today. (Istvan 2017)

Istvan’s account is interesting because of its banality. Along with myriad similar accounts, it indicates not only that HEs are being designed and produced, but also that they are being designed and produced in a particular
Late Modern context.[3] Following Archer (1995, 2014, see also Lockwood 1964), in Late Modernity most human societies feature a combination of low social integration (between agents) and high systems integration (between elements of the social structure). This configuration of “contingent complementarity” (Archer 1995) is propitious to globalised neo-liberal capitalism, in which people from vastly different cultures are nonetheless able and willing to collaborate via well-integrated institutions that operate independently of whether actors share a common purpose or understanding.

[3] I follow here the Centre for Social Ontology’s characterisation of Late Modernity in terms of social morphogenesis unbound (c.f. Archer 2014). This characterisation leads to dating Late Modernity from the neo-liberal deregulations initiated in the 1980s to the present period (inclusive).

The combination of high systems integration and low social integration is historically contingent and could have been otherwise. However, it bears heavily on how HEs are currently being designed and consumed, and on how they will affect carriers’ relations with other people and with social institutions. Most existing or imminent HEs we hear about seem destined to help (understandably anxious) individuals fit into societies characterised by competition for limited positions and resources. Thus, we seldom hear of enhancements increasing people’s ability to experience empathy for distant others, or to appreciate beauty under its various guises, or to find the courage to speak the truth when appropriate, or to enter deep meditative states conducive to religious sentiments. Conviviality, poetry, righteousness, and religiosity are not priorities as long as there is no profitable market for them. We do hear a lot, however, about enhancements that improve people’s ability to engage in economically productive behaviour. These include, for instance, nutrition that enhances concentration and/or memory; exoskeletons that can displace very heavy objects; electronic devices allowing people to work at a distance, etc. Even when enhancements are not destined directly for a professional usage, they are nonetheless marketed as products that enhance user performance in a context of consumerist leisure. Hence, the best-selling drug of the past 15 years purports to improve sexual performance, whereas cosmetic surgery and nutrition supplements help ever more people in shaping a body deemed aesthetically more desirable than the competition. Unless they are steered through personal reflexivity and collective action, the above tendencies seem on course to generate a perverse positive feedback loop whereby enhancements intensify both systems integration and competition between individuals. HEs as they are being designed and marketed increase systems integration because they help interconnected capitalist firms and state administrations operate more smoothly and more efficiently than ever. Indeed, enhanced workers are destined to be more productive, make fewer mistakes and communicate more data. While enhanced humans are likely to communicate more data through sensors and in-built communication devices, this doesn’t mean their relationships with
other human beings will be richer or more enriching (i.e. conducive to relational goods). Indeed, as Lazega (2019) has shown in the context of swarm-teams of soldiers, when individuals are subjected to a regime of enhanced visibility, their propensity to engage in forms of local solidarity and collegiality is impeded rather than enhanced (see also Lazega 2020). It is so for organisational reasons, because the data generated by enhanced humans is typically transmitted to computer servers, remote analysts, and technocrats rather than to close trusted colleagues who share a common lifeworld. But it is so for ontological reasons as well, because the nature of the information transmitted via enhancement devices is impoverished relative to what human beings can and do communicate when they engage in meaningful inter-personal relations. The communication of subjective mind states, subtle emotions, and other qualia requires social integration, whether in the form of a common cultural context or through inter-subjective relations threaded through familiarisation over time. Yet HE alone does not provide the conditions of social integration that enable rich inter-personal exchanges. In principle, I see no reason why HE should, on its own, impede social integration. However, as they are being designed in the early 21st century, HEs seem destined to reinforce Late Modernity’s desocialising tendencies. Not only do the economic and cultural contexts of HE impede social integration (see Al-Amoudi 2014; Al-Amoudi and Islam 2021), but HEs, as they are being produced and destined to be used, seem to reinforce the feedback loop of social malintegration. Thus, I have argued with Gazi Islam in a previous volume (Al-Amoudi and Islam 2021) that solidarity between enhanced and unenhanced humans is likely to be problematic in neo-liberal societies characterised by an ethos of individualism and competition. The problem stems in part from the fact that HE introduces unprecedented power differentials between individuals, but also from the absence of institutional and cultural safeguards against massive inequalities between human beings. The erosion of solidarity constitutes in itself an alienation of the human person vis-à-vis others. But there are also other forms of alienation that are likely to be intensified by the diffusion of HEs. For instance, Maccarini (2019) argues that reliance on robotics and social media incites people to seek “pure relationships” that are devoid of ambiguity and that follow one clearly decipherable symbolic code. In many regards, HEs make possible a world in which people make fewer mistakes, have engineered bodies and emotions, and are rarely obliged to have a conversation with a stranger. Even mundane activities such as buying groceries are increasingly digitised and standardised through devices such as augmented-reality glasses (and forthcoming retinas, Cronin and Scoble 2020) and remotely controlled drones. But this state of affairs also means losing countless opportunities to learn from one’s mistakes and to widen one’s sociological imagination by having a chat with a stranger. More problematically perhaps, it also means that the circle of people counting as “not strangers” is likely to shrink as people no longer need to know each other to be able to collaborate.
To recap, HEs seem destined in their forthcoming guise to intensify both systems integration and social malintegration. On one hand, they integrate people ever more into large impersonal systems of exchange and production. On the other hand, although they may help enhanced humans to collaborate and communicate efficiently, they may also spare them the need to share a common lifeworld and to understand each other.

Alienations from the embodied self

I have argued so far that, in Late Modernity, HE is potentially alienating in terms of solidarity and sociality. But current trends indicate that HE might also be alienating at the carnal level, in terms of our relations to our own and other people’s bodies. As with other forms of alienation, whether and how HEs alienate people from their biological bodies depends largely on the social and cultural context within which enhancements are produced and consumed. In the following paragraphs, I take brief note of the trans-human contempt for the human body and argue for a theory of the mind/body relation that reinstates the human body’s significance. Doing so clears the ground for a discussion of various ways in which HE may alienate us from bodily emotions and from full presence and intimacy with our loved ones.

The body as “meat”

Trans-human philosophy considers the human body as a historical given that is replete with limitations and is destined to be transcended by humans. Moreover, transcending the body is not a mere technical possibility but is also, from the perspective of trans-humanism, an ethical obligation. Thus, in a jolly adolescent letter to “Mother Nature”, trans-humanist guru Max More expresses the following grievances:

No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die – just as we’re beginning to attain wisdom. You were miserly in the extent to which you gave us awareness of our somatic, cognitive, and emotional processes. You held out on us by giving the sharpest senses to other animals. You made us functional only under narrow environmental conditions. You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves! What you have made us is glorious, yet deeply flawed. You seem to have lost interest in our further evolution some 100,000 years ago. Or perhaps you have been biding your time, waiting for us to take the next step ourselves. Either way, we have reached our childhood’s end. (More & Vita-More, 2013: 449)
In More’s letter, the human body is not necessarily evil or ugly, but neither is it particularly good or beautiful. And in any case, the body is not sacred, and the human subject is sharply distinguished from it. I believe that More’s attitude towards the human body is rather common in trans-humanist circles. Indeed, it is prefigured in starker terms in Gibson’s cyberpunk Matrix trilogy, which set the tone for the cyberpunk movement in the 1980s and has arguably marked trans-humanist imaginaries since then. In the latter, the human body is referred to as “meat”, perhaps because it is slightly disgusting in addition to being eminently modifiable and commodifiable. In cyberpunk, the world is ugly but transhuman transgression is cool.

Beyond neuro-reduction

The problem of the relation between a person and her body is not peculiar to contemporary, or even exclusive to Modern Western, civilisation. The biological body’s spiritual role and value have been problematic since antiquity. Besides the quarrel between Plato and Aristotle on the soul’s immortality, one can also remember the Gnostic practices of bodily contrition or the attempt by Thomas Aquinas to rehabilitate the worth of the human perishable body, and more broadly of the physical world, within Christian theology. I would like to suggest, however, that HEs seem on course to amplify one cultural mechanism of bodily alienation that is quite specific to Western Modernity: neuro-reduction, or the reduction of mental states to brain activity (for a philosophical critique, see Bhaskar 1998, pp. 97–101). Today, this cultural mechanism is particularly salient in the various, typically exaggerated, claims made in the name of neuroscience. The popularity of neuroscience, and of phrenology before it, attests if anything that mind/brain equivalence has been widely accepted by most Westerners since the 19th century. Hence C.G. Jung’s initial surprise during a dialogue with Ochwiay Biano, a Puebloan Native American he met in the United States around 1925:

“See,” Ochwiay Biano said, “how cruel the whites look. Their lips are thin, their noses sharp, their faces furrowed and distorted by folds. Their eyes have a staring expression; they are always seeking something. What are they seeking? The whites always want something; they are always uneasy and restless. We do not know what they want. We do not understand them. We think they are mad.”
I asked him why he thought the whites were all mad.
“They say they think with their heads,” he replied.
“Why of course. What do you think with?” I asked him in surprise.
“We think here,” he said, indicating his heart.
(Jung, Memories, Dreams, Reflections, pp. 247–253, cited in Sabini 2016)
Ochwiay Biano’s commentary invites us to reflect on what we mean by “thinking” and whether we are truly capable of thinking without our head, or our heart, or our guts … or without our mobile phone! This amusing thought, however, covers a number of serious questions that are relevant to understanding how HE can alienate us from our bodies. Is it possible to attribute thinking to the collaboration of several organs, including artefacts? And what is at risk when an artificial device replaces or augments organs made of flesh? The attribution of thinking to organs located outside the head should not surprise us. Ontologically, this theory is fully compatible with the synchronic emergent powers materialism discussed in the first section above. The individual mind can thus be theorised as a non-material entity that emerges from a dynamic, ontologically heterogeneous totality constituted of biological organs forming the human body, of material artefacts such as pen, paper, and mobile phones, and, one may also add, of immaterial entities such as pre-existing (cultural) symbols and social relations. But while critical realism offers my preferred metaphysical framework for theorising the mind outside the head, it is by no means the only voice arguing in that direction. Indeed, phrenology generated critiques in its day, just as neuroscience does today (e.g. Lindebaum et al. 2018 for a critical discussion of neuroscience and leadership training). More broadly, Hayles celebrates the cultural transition from liberal humanism to posthumanism by pointing to the limits of locating thought within the human body. As she has it:

embodiment makes clear that thought is a much broader cognitive function depending for its specificities on the embodied form enacting it. This realization, with all its exfoliating implications, is so broad in its effects and so deep in its consequences that it is transforming the liberal subject, regarded as the model of the human since the Enlightenment, into the posthuman. (Hayles 1999: xiv)

In spite of our theoretical differences, I would say with Biano, Hayles, and Jung – but against Modern Western common wisdom, phrenology and neuroethics – that the mind is not located exclusively inside the head. Moreover, critical realist ontology allows us to consider the mind as a real immaterial entity whose existence and operation depend continuously on various material (incl. biological), cultural, and social entities. The implication for the present discussion is that, to understand how HE may alienate carriers from their own bodies, we should look into how HEs transform the ontologically heterogeneous substratum from which our minds continuously emerge.

A painless body: emancipation or alienation?

In a recent best-selling book, Harari (2017) remarked that HEs come with the promise of emancipation from pain. This development is made likely both
because of the advent of artificial organs and because of pharmacological progress in the realm of painkillers. In many respects, a life without pain is rightly desirable: firstly, because pain is, by definition, unpleasant; and secondly, because pain also distracts the mind and inhibits our ability to think, feel, and connect to others to the full. Yet, at the risk of defending an unpopular thesis, I would also like to suggest that the potential eradication of pain might come at a price. My point is not that we should masochistically combat each and every attempt to remove or diminish pain, but rather that we should pursue such attempts while remaining conscious of what is being lost in the process.

The first nuancing consideration is that while artificial organs and powerful painkillers might eradicate pain, they may also eradicate pleasure. More generally, sensations are likely to be dimmed or distorted. Thus, it is unclear how a body made in part of plastic and silicon will feel to the person. Will the latter still be able to feel her own body? Will she still be able to feel that she is breathing? That she is alive?

Moreover, while it is a common biological trope that pain is useful because it provides feedback signals in situations of otherwise undetectable damage to the body, we might also add that pain acts as a powerful correcting device for the education of one’s behaviours and attitudes. A person feeling pain in her stomach might be inclined to adjust her diet, usually in a healthier fashion, and a person feeling back pain after a day’s work might adjust her posture so as to protect her muscles and skeleton. Thus, long before 21st-century neurologists, Aeschylus could sing:

Nothing speaks the truth
Nothing tells us how things really are
Nothing forces us to know
What we do not want to know
Except pain
And this is how the Gods declare their love
Truth comes with pain
(Aeschylus, Oresteia)

But pain, being a nearly universal human experience, also constitutes the basis for bonds of sympathy between fellow human beings. Ego and Alter may have different bodies. And when Alter is hurting, Ego may be free of pain. And yet, because Ego has already felt some form of pain, s/he can recognise the pain of Alter, at least to some extent and through analogy. It is unclear, however, how sympathetic bonds will develop between human beings who, by virtue of possessing highly differentiated artificial bodies, are likely to experience qualitatively different feelings of pain. While the question of the communication of pain raises philosophical problems in the case of two persons made of flesh and bone (cf. Archer’s discussion of pain and first-person authority in Archer 2000), these problems are likely to be exacerbated in the
case of persons equipped with different kinds of (partly artificial) bodies and attempting to communicate what they feel to one another.

Emotion alienation: my artificial heart makes me feel anger?

While distortions to bodily sensations constitute the most obvious form of alienation following HE, it would be misleading to ignore the possibility of HE also altering people’s ability to form emotions. Indeed, the distribution of mental activities throughout various bodily organs (and not only the brain) bears on how HE might affect our relationship to ourselves. If our hearts and livers contribute to the formation of emotions, it is unclear how artificial hearts and livers will transform our emotions and how we produce them. For instance, we already know that persons carrying a pacemaker regulate emotions differently from those without one. But while significant research and development efforts are being dedicated to producing pacemakers that regulate blood flows (and thus emotions) in connection with the patient’s context, there seems to be no discussion of the ethics and politics of such emotional engineering. Should carriers seek to recreate the same emotional responses they used to have before surgery? Or should they seek to improve their emotional responses relative to the status quo ante? And if so, according to which ideals? And within which limits?

My point here is twofold. On the one hand, we should acknowledge that HEs do bear on the formation of emotions. While heart pacemakers might be the most obvious non-brain devices to do so, we should not underestimate the transformations to emotional responses brought by artificial livers, kidneys, stomachs, eyes, and so on. The regulation of emotions is a highly complex, and poorly understood, process that potentially involves any organ connected to the nervous system. While it is arguably illusory to attempt to predict beforehand the exact transformations of a carrier’s emotionality, such transformations can be expected to happen, and it may be possible to control them to some extent.

On the other hand, my point is that the control of those emotional transformations that follow HE is not philosophically neutral. Rather, it raises crucial ethical and political questions. Some of these questions are as ancient as the quarrels between Stoics, who sought to harness emotions, and hedonists, who sought to express them to the full. What is perhaps novel, however, is the transition from emotional regulation based on training and education to emotional regulation based on the engineering and fine-tuning of artificial organs. To return to the example of the pacemaker: exercises and diets designed to regulate emotions – making people, for instance, less prone to anger or more capable of empathy – have existed since Greek and Chinese antiquity. But in these practices (or “technologies”) of the self (Foucault 1988) based on diet and exercise, the emotional subject is also the prime subject of her emotional transformation over time. In the context of artificial organs that bear on emotionality, the person is considered as an emotional object rather than an
emotional subject. Actually, for Foucault, these practices of self-transformation are privileged sites for the formation of subjectivity; they require time and effort, and happen in typically small communities. In the case of emotions being transformed by HE, however, emotional transformation happens as a consequence of the design or tuning of the enhancing device. The pacemaker is re-tuned slightly differently and, suddenly, the person becomes more or less prone to anger or anxiety, though in ways that are typically hard to predict precisely.4 While the relationship of the enhanced human to her emotions is undeniably transformed, it is difficult to say whether this transformation constitutes an alienation or an improved connection. At any rate, emotional transformation following the use of artificial organs shifts the first-person account from “I feel anger in my heart and liver” to “my artificial heart and liver make me feel anger”, thus pointing to a process of desubjectification.

4 See for instance “Pacemakers – for anxiety”: www.webmd.com/balance/features/pacemakers-for-anxiety#1 (accessed Nov. 2020).

Unconscious mind processes in partly artificial bodies

Taking a lead from synchronic emergent powers materialism, if there are such things as unconscious mind processes, then they must be emergent from a substratum that comprises, inter alia, bodily organs. It follows that transformations to bodily organs (and a fortiori their substitution with artificial organs or transgenic DNA modification) are likely to generate transformations in unconscious mind processes. One difficulty is that HEs are so novel that we still lack hindsight on how they affect our deep psyche and unconscious drives. We can, however, venture analogies with past situations in which the human body and mind faced stresses for which natural evolution had not prepared them. Rapid urbanisation offers, in this regard, a case in point. Its stresses on human bodies and psyches have been widely studied and theorised (Simmel 1976/1903). C.G. Jung’s reflections on the dangers of modern urban life are of particular interest here, as they document and theorise how bodily stresses generate unconscious drives and fantasies. As he has it:

We are suffering in our cities from a need of simple things. We would like to see the great railroad terminals deserted, the streets deserted, a great peace descend on us. These things are being expressed in thousands of dreams … and it is in our dreams that the body makes itself aware to our mind. The dream is in large part a warning of something to come. The dream is the body’s best expression, in the best possible symbol it can express, that something is going wrong. The dream calls our mind’s attention to the body’s instinctive feeling.
If man doesn’t pay attention to these symbolic warnings of his body he pays in other ways. A neurosis is merely the body’s taking control, regardless of the conscious mind. We have a splitting headache, we say, when a boring society forces us to quit it and we haven’t the courage to do so with full freedom. Our head actually aches. We leave.
(Jung 1931, ‘Americans must say no’, reproduced in Sabini 2016, pp. 150–151)

While it is arguably too early to take stock of the various ways in which HE will affect our unconscious drives, we can reasonably venture a few hypotheses:

1 Left unchecked, HE will likely cause undesirable pressures leading to imbalances in the human psyche, including at the level of unconscious drives.
2 Psychoanalytical methods of investigation are likely to provide insights into how specific HEs tend to generate specific unconscious drives. In particular, we might conjecture that novel unconscious drives are likely to be expressed through symbolically charged dreams and pathological behavioural symptoms.
3 Novel forms of anxiety, stemming from HE technologies, are likely to appear and to become quite common as HE devices diffuse through society. Readers can perhaps reason by analogy with the feelings they might have experienced whenever a technological object refused to work properly. Think, for instance, of how infuriated and/or abandoned one can consciously feel whenever the personal computer, or the mobile phone, does not switch on. Aren’t these feelings of anger and abandonment indicative of a continuous, if diffuse, feeling of dependence on technological devices? And could they also indicate usually repressed anxieties about the latter’s possible malfunctioning?

While some of the novel feelings generated by HE might be explainable through conventional categories and schemas of interpretation, others might require the invention of new categories and the discovery of new generative mechanisms. For instance, psychiatrists Gibson and Kuntz (2007) conducted an extensive study of how patients with implanted cardiac defibrillators (ICDs) manage anxiety. Their investigation, and its associated findings and recommendations, are organised according to already known psychological mechanisms. Those patients who report anxiety5 fear that the device will not function when needed, or they fear the pain involved in the discharge, or the social embarrassment, or the fear itself. They enter into spirals of withdrawal from social and physical activities, and thus suffer from increased loneliness.

5 In another study, the Danish TrygFonden foundation estimates that about 25% of carriers develop anxiety, among whom 15% develop depression (https://sciencenordic.com/anxiety-denmark-depression/pacemakers-make-heart-patients-anxious-and-depressed/1428010).
Gibson and Kuntz’s study does not contemplate, however, the possibility of deeper, unconscious mechanisms. For instance, they focus on patients’ reported fears but do not examine whether other aspects of their psychic lives are affected: the formation of mental images, of taken-for-granted beliefs, and of concerns. Nor do they investigate mechanisms more complex, or more subtle, than the fear of pain. While the present realist (emergentist) theoretical discussion lacks the resources to point at specific candidate explanatory mechanisms in the specific case of pacemakers, it nonetheless encourages us to investigate how the implantation of an artificial device might bring complex bodily changes that are, directly and in themselves, responsible for psychic alterations. For instance, are artificially set levels of blood pressure likely to affect mental states, and how? Is the pacemaker likely to interact with biological systems (e.g. hormonal) that bear in turn on psychic life? Is the device likely to affect the carrier’s image of herself? Is the device, and its associated maintenance, likely to develop narcissistic self-centredness in the carrier? Is the device conducive to behaviour, thoughts, or outbursts neither patient nor clinician can explain? Is the vital dependence on an external device generating in the carrier’s mind complex (largely unconscious) feelings that mix love and hate? And so on.

Many of the tentative considerations in the above passage are inspired by first-person testimonials of ICD carriers.6 Indeed, a Google Video search on the terms “pacemaker patient story” returned over 160,000 videos, most of which consisted of patients telling the story of their ICD and of the issues they faced. Unfortunately, and perhaps unsurprisingly, the concerns expressed by patients, and the way they made sense of them, differed significantly from the expert accounts provided by surgeons and psychologists.

6 See for instance “My first year of living with a pacemaker / EMOTIONAL” by Sara Naser, available at www.youtube.com/watch?reload=9&v=gOs_YT0RuL8 (Nov 2020).

In sum, critical realist philosophy leaves room for the (reasonable) expectation that HE will affect deep, unconscious, mental processes in various complex ways. These effects are likely to fall beyond the restrictive scopes of surgery and of approaches to psychiatry based on standardised questionnaires and disease classifications. Approaches based on analytical psychology, which start from carriers’ accounts and assume complex mind-body interactions, seem more promising.

More generally, HEs affect people’s relations to their bodies in multiple, often complex ways. The latter are hard to grasp if we remain within the confines of a conception of human nature that dismisses the human body as “meat” instead of acknowledging the complex role it plays as the material substrate for synchronically emergent powers. My proposed approach, rooted in critical realist meta-theory, hints at potential mind/body alienations brought about by HE. These include alienations from sensation, from
emotion, and from unconscious drives that formed over millennia of evolution, including the pre-human. But the question of possible alienations from the body also prompts more general questions about alienations from nature. To these we now quickly turn.

Alienations from nature

From a strictly naturalist perspective, relations to our embodied selves, to known others and to society are also relations with nature, for the latter includes the former. But relations to nature also encompass relations to non-human entities such as animals and plants, seas, forests, mountains, and oceans. The artificial is thus part of the natural, and (critically naturalist, see Bhaskar 1998/1979) concepts of nature sublate artificiality and (anti-naturalist understandings of) nature. With this in mind, what can we say about HE and wo/man’s relations to nature?

Because HEs are never produced, consumed, and maintained in a cultural void, they can be expected to reflect and re/produce the rapport to nature prevalent in the systems of production, consumption, and maintenance in which they are embedded. Reproduction is not a curse, and creative production is always a possibility; but whenever the latter happens, it does so gradually and over time. It is therefore reasonable to expect, in the near future, HE devices that will reflect and reinforce the relation to nature presupposed by the systems of HE production, consumption, and maintenance.

Moreover, the ideology of the communities within which HEs are produced carries the deficiencies identified by philosophical critiques of technology (Heidegger 1954, Lawson 2017). Thus, following Heidegger, we can remark that the production, consumption, and maintenance of HEs treat natural entities as resources ready for optimisation and control. In this regard, Late Modernity is not very different from Modernity. We might add, with Clive Lawson, that the Modern ideology of technology is isolationist, in the sense that it is based on an atomistic natural and social ontology. Not only are natural phenomena considered as being, in principle, understandable in isolation from each other, but so are people. Thus, any attempt to reflect holistically on nature, or on people and societies, is dismissed as unscientific or, at best, proto-scientific. But by assuming that nature is a resource that belongs to people, the ideology of HE forgets the naturalistic truth that it is not nature that belongs to people but people who belong to nature. Thus, the ideological contradictions identified by philosophical critiques of technology also generate (discursive, performative) effects on the situations they are deemed to describe. Ultimately, they result in severed or alienating material relations for carriers of HE (see Porpora 1993 on material relations).

The evils of technological alienation from nature come under various guises: environmental destruction, totalitarian control of living beings (incl. humans) qua populations, desacralisation
of human life, commodification of nature by business corporations, and hyper-individualistic human communities. But what can we say about the specific alienations from nature brought by HE?

We can remark, first and foremost, that a carrier of HE is almost definitely severed from wild nature. This is so because, as noted in section 1.3 above, HEs are inseparable from their carriers’ bodies. Hence, wherever the HE carrier goes, so goes her HE. This is problematic because of the remarkable regenerative powers of wild nature for the psyche (Bly 1990, Jung 1963, Pinkola Estes 1996). Indeed, Jung (1963) and authors influenced by his thinking, such as Bly (1990) and Pinkola Estes (1996), identified neurosis as the central ill of Modern, urban life and prescribed flights into the wilderness as its best cure, with reportedly good levels of success. Arguably, pure wilderness is to some extent a Modern fantasy, in the sense that, besides the floor of the oceans, very few areas of the planet remain undiscovered, and the latter are generally inaccessible to most. But Modern wo/men could at least count on escapes into wilderness as defined by Aldo Leopold in 1925: “a two weeks’ pack trip” or “a wild roadless area where those who are so inclined may enjoy primitive modes of travel and subsistence, such as exploration trips by pack-train or canoe” (Leopold 1968/1949). Or even, more modestly, a hike in the countryside without a mobile phone. But even the latter option is no longer available to an enhanced human whose enhancing devices need constant connection to the internet. Whereas primitive wo/man has no concept of wilderness, for s/he is continuously immersed in it, Modern wo/man needs and craves occasional flights into the wilderness to mitigate the damaging effects of urban technological life. But enhanced wo/man is definitively alienated from wilderness.

Finally, while I lack the space and knowledge to undertake an extensive discussion of the novel psychic ailments that we might expect from HE carriers’ alienations from nature, I would nonetheless point to the following realistic possibilities. Firstly, while Jung remarked that in Modernity archetypes took the form of archetypical images that mirrored technological objects, we might expect HEs to generate their own sets of archetypical images. For instance, images of castration and impotence might shift from losing one’s teeth, to losing one’s keys, to losing control over one’s cybernetic eyes. Secondly, we may expect that some psychic transferences normally reserved for powerful figures might become directed towards the technicians and organisations in charge of the maintenance of HE devices. Thirdly, and in relation to the above section on unconscious mind processes in partly artificial bodies, we might expect entirely novel forms of mental ailments stemming from a further disconnect from the natural rhythms and practices for which recently upgraded wo/men had evolved over millennia.
Concluding remarks: human nature in the mirror of HE

Going back to where we started, the point of this paper is not to throw an indiscriminate anathema on HE. HEs offer many promising enhancements of human
powers and, in this specific regard, constitute morally worthy eudaimonic advancements. Empowered human subjects are, ceteris paribus, likely to be better equipped to engage in meaningful relations with themselves, with others, with society, and with nature. Moreover, a key assumption of my discussion of HE is that there is very little to say about their intrinsic worth abstracted from the social and cultural contexts of HE production, consumption, and maintenance. Most existing or imminent HEs we hear about seem destined to intensify the current trend towards systems integration at the expense of social integration. These include nutrition that enhances concentration and/or memory; exoskeletons that can displace very heavy objects; electronic devices that allow working at a distance; etc. The best-selling drug of the past 15 years purports to improve sexual performance, whereas cosmetic surgery and nutrition supplements help ever more people shape a body deemed aesthetically more desirable. Conversely, conviviality, poetry, righteousness, and religiosity are not the priority. But HEs as they are currently being produced and deployed seem likely to change their carriers’ relation to their own bodies, with potentially undesirable alterations in terms of sensitivity, emotionality, and deep unconscious drives. Moreover, the current technological ideology is likely to be perpetuated and even reinforced by HE in the foreseeable future, thus resulting in deeper alienations from nature.

While avoiding a checklist approach to human essence, I have mobilised a relational theory of the human as a being inescapably involved in relations with nature, society, others, and oneself. This stance is essentialist to the extent that it affirms that four-planar relations are essential to being human. It stands in stark contrast to the ontological presuppositions of much trans-human philosophy and practice, including conceptions of human nature held implicitly by some surgeons, psychiatrists, business investors, engineers, tech journalists, etc.

The category of in/correctness does not apply to meta-theory, for it only applies to substantive theories inscribed, explicitly or otherwise, within meta-theories. Yet meta-theoretical stances can and must be evaluated in terms of their ontological coherence, their epistemological insightfulness, and their axiological conduciveness to emancipation. More specifically, the meta-theoretical assumptions I have posited about human nature cannot be empirically discovered; nevertheless, they can be evaluated against alternative meta-theories of the human, and in terms of how much insight they generate for understanding, and acting upon, significant social phenomena such as human enhancements. But it is ultimately up to the community of readers to decide, in good faith and in light of the best evidence, whether a realist four-planar philosophical anthropology is coherent, insightful, and conducive to credible critique.

In no way is my four-planar approach in contradiction with the philosophical anthropologies defended and employed by some in the present book series
(The Future of the Human). I actually believe that every critical insight about HE that I could develop through a four-planar approach could be expressed within other perspectives. My approach has, nonetheless, a few practical advantages. Firstly, it forced me to stay focused on questions of relationality without dismissing any of the four dimensions I have been examining. Secondly, it allowed me to criticise HEs in terms of how they sever relations that are nonetheless essential to human flourishing. Thirdly, it hinted at possible areas for activism regarding HEs. The latter are neither good nor bad in themselves. However, they must be critiqued and resisted whenever they impede human flourishing by severing our relations to nature, society, others, and ourselves.
References

Al-Amoudi, I. (2014). Morphogenesis and normativity: problems the former creates for the latter. In: M. Archer (Ed.), Late Modernity: Social Morphogenesis. Cham: Springer.
Al-Amoudi, I. (2019). Management and dehumanisation in Late Modernity. In: I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina (pp. 182–194). London: Routledge.
Al-Amoudi, I. and Islam, G. (2021). Why should enhanced and unenhanced humans care for each other? In: M. Carrigan and D. Porpora (Eds.), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London: Routledge.
Al-Amoudi, I. and Latsis, J. (2014). The arbitrariness and normativity of social conventions. The British Journal of Sociology, 65(2), pp. 358–378.
Al-Amoudi, I. and Latsis, J. (2019). Anormative black boxes: artificial intelligence and health policy. In: I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Archer, M.S. (1995). Realist Social Theory: The Morphogenetic Approach. Cambridge: Cambridge University Press.
Archer, M.S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M.S. (Ed.). (2014). Late Modernity: Trajectories Towards Morphogenic Society. Berlin: Springer.
Archer, M.S. (2019). Considering AI personhood. In: I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix (pp. 28–47). London: Routledge.
Archer, M.S. (2021). Can humans and A.I. robots be friends? In: M. Carrigan and D. Porpora (Eds.), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London: Routledge.
Bhaskar, R. (1998/1979). The Possibility of Naturalism: A Philosophical Critique of the Contemporary Human Sciences. London: Routledge.
Bhaskar, R. (2008/1993). Dialectic: The Pulse of Freedom. London: Routledge.
Bhaskar, R. and Norrie, A. (1998). Introduction: dialectic and critical realism. In: M. Archer, R. Bhaskar, A. Collier, T. Lawson and A. Norrie (Eds.), Critical Realism: Essential Readings. London: Routledge.
Bly, R. (1990). Iron John: A Book About Men. Boston: Addison-Wesley.
Collier, A. (1999). Being and Worth. London and New York: Routledge.
Cronin, I. and Scoble, R. (2020). The Infinite Retina. Birmingham: Packt Publishing.
Foucault, M. (1988). Technologies of the self. In: L.H. Martin, H. Gutman and P.H. Hutton (Eds.), Technologies of the Self. Amherst: University of Massachusetts Press.
Gibson, D.P. and Kuntz, K.K. (2007). Managing anxiety in patients with implanted cardiac defibrillators. Current Psychiatry, 6(9), pp. 17–28.
Harari, Y.N. (2017). Homo Deus: A Brief History of Tomorrow. London: Vintage.
Hayles, N.K. (2008/1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Heidegger, M. (1954). The question concerning technology. In: Technology and Values: Essential Readings, pp. 99–113.
Istvan, Z. (2017). Transhumanism is booming and big business is noticing. The Huffington Post, www.huffpost.com/entry/transhumanism-is-becoming_b_7807082.
Jung, C. (1963). Memories, Dreams, Reflections. New York: Pantheon Books.
Lawson, C. (2010). Technology and the extension of human capabilities. Journal for the Theory of Social Behaviour, 40(2), pp. 207–223.
Lawson, C. (2017). Technology and Isolation. Cambridge: Cambridge University Press.
Lawson, T. (2014). Critical ethical naturalism: an orientation to ethics. In: S. Pratten (Ed.), Social Ontology and Modern Economics (pp. 359–387). London and New York: Routledge.
Lazega, E. (2019). Swarm-teams with digital exoskeleton: on new military templates for the organizational society. In: I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Lazega, E. (2020). Bureaucracy, Collegiality and Social Change. Cheltenham: Edward Elgar Publishing.
Leopold, A. (1968/1949). A Sand County Almanac, and Sketches Here and There. Oxford: Oxford University Press.
Lindebaum, D., Al-Amoudi, I. and Brown, V.L. (2018). Does leadership development need to care about neuro-ethics? Academy of Management Learning and Education, 17(1), pp. 96–109.
MacIntyre, A. (1999). Dependent Rational Animals: Why Human Beings Need the Virtues. London: Duckworth.
Maccarini, A. (2019). Post-human sociality: morphing experience and emergent forms. In: I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix. London: Routledge.
Maccarini, A. (2021). Being human as an option: how to rescue personal ontology from trans-humanism, and (above all) why bother. In: M. Carrigan and D. Porpora (Eds.), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London: Routledge.
More, M. and Vita-More, N. (Eds.). (2013). The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Hoboken, NJ: John Wiley & Sons.
Pinkola Estes, C. (1996). Women Who Run With the Wolves: Myths and Stories of the Wild Woman Archetype. New York: Ballantine Books.
Porpora, D.V. (1993). Cultural rules and material relations. Sociological Theory, 11(2), pp. 212–229.
Porpora, D.V. (2019). Vulcans, Klingons, and humans: what does humanism encompass? In: I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina. London: Routledge.
Sabini, M. (2016). The Earth Has a Soul: C.G. Jung on Nature, Technology and Modern Life. Berkeley, CA: North Atlantic Books.
Simmel, G. (1976/1903). The Metropolis and Mental Life. New York, NY: Free Press.
10 The social meanings of perfection
Human self-understanding in a post-human society
Andrea M. Maccarini
Taking stock and looking ahead: conversations on the end of human exceptionalism

The final volume of a book series represents a chance to draw some conclusions about the route that has been taken, and perhaps to make new sense of the ongoing conversation within the group of authors involved. We have been dealing with many facets of a coming post-human world. With this expression I mean a world in which the range of entities considered on the same interactional and ontological level as humans tends to expand to include bioengineered humans, AIs, robots, and various hybrid forms of life – bio-technical, grown and made, wet and dry. The previous three volumes have touched upon a wide range of issues, which could be summarized as follows:
a The ontological uniqueness of humanity. What (if anything) is essential to being human, and is there anything unique about it? Furthermore, is there anything normative – i.e. endowed with intrinsic worth – about being human? These questions are of profound import to the self-understanding of our species. For example, if intelligence and rationality are the properties to which moral value must be attributed, regardless of the entity that “carries” such properties, humans might as well be regarded as place holders. Being a member of the human species and displaying certain properties could then appear as facts that are not linked by any necessary relation, but that are only contingently connected. That relation would be historical–empirical, not ontological. This thesis is of major relevance to most trans-humanists. If a subset of qualities indeed has moral significance, but those qualities only happen to appear in human subjects – being (allegedly) reproducible in other life forms – then humans are just place holders, and “we are confused when we ascribe intrinsic value to the place holder” (Savulescu 2009: 227). We are but carriers of a “spirit”, which was first instantiated by human minds, but can easily be detached from its original home base. This is the end of human exceptionalism.
b Social and legal issues arising from the potential pluralization of personhood. These include various problems about the governance of ontologically differentiated (post- and trans-human) individuals, groups, and communities. How could equality be guaranteed in an ontologically differentiated society? And what puzzles would it entail? To what extent should people be allowed morphological and reproductive freedom? And what must the legal status be of the more complex set of entities who would possibly be claiming to have climbed the ontological stairs?1
c The impacts of technologies on typically human experiences and forms of life. What would it mean to introduce non-human actors in the processes of deliberation within various institutions and organizational contexts, from health systems to commercial companies, to the military? Would they result in alienation and the deprivation of human powers and experience, or in a widening of awareness, rationality of decision-making, and ultimately better chances of success?

1 For an original and thought-provoking perspective on this topic see Teubner 2018.
These are the main issues around which volumes I to III revolved. Let me first present a quick summary of my own contribution to the collective enterprise in the previous three chapters (Maccarini 2019a; 2019b; 2021). Throughout the book series, I have not been directly engaging with philosophical anthropology and the philosophical foundations of inclusive vs. exclusive notions of human dignity, AI dignity, and the moral status of human enhancement. What I tried to do was to say something about the post-human thinking and practices that can be observed through the lens of human experience and existence, in its shape and meaning. This was meant to identify and articulate the most important issues concerning human goods and human dignity which could derive from a post-human cultural as well as practical turn. From a sociological perspective, in my previous three essays I have characterized post-humanism as a cultural syndrome – with corresponding structural conditionings – and tried to examine how it is produced by deep changes in human identity, in the ways people conceive of and experience existence in the dimension of time, in the social realm, and in their material (ontological) self-definition. Needless to say, such a genealogy of post-humanizing ideas and practices also entails powerful feedback loops. In other words, I have presented an interpretation of post- and trans-humanism as a socio-cultural fact, with its socio-cultural causes and possible consequences – focusing attention on human self-understanding and self-experience. From this vantage point, all further, complex phenomena appear mostly as emergent effects of the profound changes occurring on that level. To put it simply, the rationale of my previous chapters was to focus on what people want to do with human enhancement (HE) techniques, and with AIs, robots, etc., which involves providing an insight into the structural and
cultural conditions, but also into the personal plans, needs and desires that underlie those trends. The final aim was to develop an argument about the possible ways in which the interaction of those needs, plans, and desires with the relevant technologies over multiple morphogenetic cycles might transform human relationships and identities, ultimately changing the deepest human self-understanding.

Given this focus of analysis, a few coordinates could clarify my position about the main dilemmas we have been discussing. An important assumption underlying my whole argument has been a particular definition of personhood. I fundamentally accepted Archer’s proposed criteria, which include having a first-person perspective (FPP), possessing reflexive powers, and having the capacity to develop concerns.2 However, I added that human personhood must be further specified. In human beings, reflexivity, the FPP, and the disposition to develop concerns take on a particular shape through the specific, necessary-and-contingent relationships occurring between their constitutive ontological layers (embodied, psychic, social, symbolical). Thus, human personhood entails a special way of being-in-the-world, which includes (i) unique kinds of relationships with matter, time, the practical and the social realm, and (ii) particular relationships between the body, the psychic system, and the capacity for moral orientation. Such relationships are multidimensional, reflexivity being one key mechanism of mediation. Internal relationships between ontological layers – biological, psychic, social, and cultural – shape subjectivity as an emergent property.

2 In order to mark my distance from empiricism, let me specify that I take these criteria as “potentials”, and not in their extant and empirically observable presence. Moreover, I do not interpret the above criteria as an attempt to articulate an exhaustive list of properties that define what a human person is, but as a sort of phenomenological “threshold of personhood”. In other words, no entity which lacks (the inherent potential to develop) these powers can be described as a “person”.

As a consequence, a central thesis I developed throughout my chapters is that trans- and post-humanizing practices tend to become de-humanizing to the extent that they threaten to disrupt the integrity of this inner relational constitution of the human person. This claim has far-reaching consequences, which I have only begun to unravel. For one thing, it emphasizes the relevance of the human body, resisting the idea that it be no more than an obsolete platform supporting what really counts in “us”.

A further question might be whether or not I can be counted among the “robophobic” mentioned by Archer. The answer obviously depends on the meaning of “robophobic”. I think this notion is meant to indicate two main attitudes: fear that AIs can become dangerous for humans, taking over human societies, and belief that they cannot be, or become, persons. As to the former, rather than fear what robots or AIs could possibly do to humans, I am much more scared of what humans could do to themselves with the help of technology. It is the very human desire to reshape humankind in a post-human
fashion that I regard as a potential risk for what is worth protecting in humanity. As regards AI personhood, I admit to being skeptical (whilst not dismissive) about AIs achieving personhood. This is because I believe we hardly have any real evidence, so far, that they do. Most of the post-humanists’ discourse about emergent AI personhood is flawed by a quite reductionistic view of the human in the first place.3 However, this does not contrast with the idea that, in principle, non-human persons could exist, and I am ready to entertain this abstract possibility. Even so, as I said above, this would still not mean that other entities could resemble human persons – which pushes the need for distinction one step beyond. Such a discussion may turn out to be complicated in hard cases. While drawing a boundary may still seem relatively easy in the “pure” case of humans and robots, it would be much fuzzier when it comes to the many hybrid forms of life we might witness in the future.

3 References should be so numerous on this point that only an AI could really come up with anything approaching an exhaustive overview. To mention only a few examples, in the longstanding work by Nick Bostrom (2005), Steve Fuller (Fuller and Lipińska 2014), and Julian Savulescu (2009), and by most contemporary experts in neuroscience, a sheer naturalistic view of the human comes into full light.

In this final essay I would like to introduce a new theme, which can serve the purpose of a closing statement, albeit a clearly provisional one. I will discuss the link between post-humanizing practices and the “good life”, reflecting on what human goods may be achieved or become unattainable as a result. I believe this is a useful approach, and something social science can honestly do, without embracing any a priori ideological cause – be it bio-conservatism or post-humanism – and without blinding itself to what is really happening for the sake of an ill-conceived form of scientific neutrality. After all, in this last essay I am afraid that criticism will exceed enthusiasm, as was probably the case in the previous three. But in the end, it will be for readers to decide whether the argument I have unfolded throughout these volumes must be regarded as simply falling within the bio-conservative field – and be hailed or dismissed accordingly – or whether it articulates a more complex understanding of the whole matter, as the author would have liked to do.

Ideals of the good life and post-human self-understanding

There are many points from which an argument about the connection between post-humanization and ideas of human fulfillment, or the “good life”, could start. In the present chapter, I have chosen to consider human enhancement (HE) and other post-humanizing developments not as a mere search for the satisfaction of individual desires or as a response determined by some functional pressure, but as a moral imperative. The imperative in question would call for human improvement. Post-humanism also comes in this shape, and it is on this ground that it must ultimately be rejected or accepted. An instructive companion to begin this journey is philosopher Julian Savulescu, who is also a well-known spokesman for the transhumanist intellectual
movement. Back in October 2009, he gave an impressive talk in Sydney at the Festival of Dangerous Ideas. Its title read: “Unfit for life: genetically enhance humanity or face extinction”. This has been his leitmotiv ever since. His basic claim is that the root cause of most major threats to human survival as a species – from climate change to nuclear disasters, down to terrorism and epidemics – is purely and simply the human being in his/her current version.

More analytically, what are we unfit for? To begin with, Savulescu argues that human subjects are unfit for lasting love, i.e. for a life of monogamous relationships in the way proposed by many religious doctrines. Relational instability is growing in most complex societies, and this in turn causes a lot of trouble.4 Furthermore, human beings are low performers in terms of altruism, aid, and cooperation. They have evolved to restrict their empathy and altruistic orientation to a limited number of people, while global society would require extended, effective solidarity. In the case of global threats like climate change, we are prone to free riding, which will make these problems intractable. Human imperfection gets even worse if we consider the number of fanatics, psychopaths, and sociopaths in the world, and the increased availability of powerful means of mass destruction even to these dangerous examples of our species. Therefore, we should definitely be molded into a wiser and less aggressive shape.

4 This thesis is close to my own idea that human relationships are becoming increasingly burdensome, and that post-humanizing technologies may be the risk winners of relational problems (see Maccarini 2019b), although I draw different conclusions about the meaning of such a predicament, and about the consequences of the technological ways to escape it.

Thus, what Savulescu calls “the Bermuda triangle of extinction” comprises radical technological power, human nature, and liberal democracy. The latter, too, is unfit to meet the challenges, because it leaves too much to voluntary effort, while we can clearly not trust our fellow humans to display a sufficient level of it in most hard cases. In a nutshell, the world changes, while our biological and moral constitution – the latter being substantially based on the former – does not. Natural evolution runs at a slower pace than social complexity, so our coping capacities are doomed to failure in the new context. As a result of this, the human species needs enhancement as a moral imperative. The price of resisting this will be extinction.

One easy way to criticize his argument would be to say that the evidence he presents is conspicuously unbalanced. All big crises, like the present Covid-19 pandemic, really present us with many examples of heroism, altruism, and solidarity, together with some tragic failures, often on the side of political, economic, and cultural élites – which is actually less reassuring than just blaming sociopaths. However, he could respond that the problem lies in the enormous destructive power even a limited number of people could conquer and deploy. A more serious counterargument could be advanced, its main bullet point being that it is hardly understandable how such a supposedly evil race
as humans could ever manage to improve itself through technological enhancement. More precisely, if we are destined to put any means to evil ends, why and how should we magically develop the ability to act differently when, and only when, we deal with human enhancement? Why shouldn’t these techniques be listed together with the other dangerous ones – weapons of mass destruction like engineered viruses, nuclear weapons, etc. – instead of appearing on the bright side of the list, amongst the possible solutions? If humans are such crooked timber, why and how should they be able to overcome themselves and their flaws only when they deal with their own genetic code – that is, with the most exciting promise of power and glory ever conceivable? In his numerous writings, Savulescu never clarifies this.

With this said, the exciting part of Savulescu’s argument is that the human species may shortly become maladaptive, and that its improvement is the highest moral challenge we should face. The big question is what idea of human perfection or fulfillment may correspond to such a rising cultural constellation. The idea that a technological way to tackle human imperfection might be missing something is easy to entertain, but harder to articulate. Michael Sandel (2007) lays the ground for a critique, introducing two lines of argument.

(a) The former maintains that the real problem lies in finding ways to make sure that we deliberate as democratic societies about all the problems listed above. On the contrary, the approach advocating the use of biotech tools to “cure” inequality, violence, and all forms of injustice on the personal level – indeed, the idea that people should be considered to be unfit for the kind of society that’s been created – distracts us from changing the society itself. The point is that Savulescu and the trans-humanists regard society itself as the unchangeable, or at least as the unimprovable, product of human biology. That is, the human and the social are reduced to the biological. Everything is poured into the funnel of the human biological constitution, in a deterministic way, while such a view is not confirmed by a huge amount of empirical evidence.

(b) A second point is that science cannot really explain everything about human beings, and cannot discern the meaning of human life. Therefore, an important part of the problem lies in understanding how it is possible to take advantage of technologies without allowing them to define our purposes for us. In other words, the problem of meaning should not be left for scientism to muse about. Moreover, the reflexive work involved in such a quest for meaning has to do with respecting human limitations. The argument is subtle and does not point to passive acceptance of one’s limits. The idea is that human beings continually try to overcome their limitations, but this entails a struggle, which includes a certain negotiation. Sandel’s point is that it would be dangerous to escape this predicament, with its inherent tension, because this would end up changing the very meaning of human activity – in the various spheres of social life – and ultimately the way we think of ourselves.

These two arguments provide a set of insights, whose core consists in the necessity of making sense of the human experience beyond functionalist
reason. I will develop my own argument in a way that I think is consistent with this, the aim being to highlight the significance of embracing trans- and post-humanism for the ideas of the good life, and of human fulfillment, that are developing in the cultural system of advanced societies.

In a previous contribution where I dealt with the notion of the good life (Maccarini 2019c: Chapter 9), I argued that two guiding distinctions could help to map the contemporary cultural landscape of ideas of the good life: flourishing/enhancement and flourishing/calling. The distinction between flourishing and enhancing indicates a difference between two conceptions of the human good: flourishing means the accomplishment of one’s natural potential, i.e. to develop one’s full strength, to “bloom” according to the qualities inscribed in one’s nature. This does not dismiss the element of personal effort or autonomy of will, but involves growing in a specific direction, and implies some idea of limits. The other term, enhancement, conveys the idea of grafting “powers” onto a “platform” that has no principled “form” (i.e. no “nature”) and therefore can be “empowered” with no inherent “limit”. In this context, the idea of flourishing is meant to think of the human good within the limits of historical human reality. That is to say, it would be consistent with a realist concept of human (personal) ontology, and would set some principled limits to “morphological freedom”. A normative idea of “nature” still plays a role in this symbolic horizon, on the side of flourishing. The notion of enhancement, on the other hand, clearly rejects such an idea as ill-founded and misplaced.

The other guiding distinction is that between flourishing and calling. In this context, flourishing is taken to mean mundane success, that is, a successful life in purely immanent terms (health, professional success, wealth, etc.). Here, the concept of flourishing is meant to oppose any idea of a “mission” human beings are meant to accomplish – one which might mean putting one’s well-being in jeopardy, and which ultimately constitutes their good, even beyond human intuition and discursive penetration. This meaning appears, for example, in Charles Taylor (2012), who distinguishes flourishing from “Axial” notions of the human good. This does not necessarily reduce flourishing to sheer self-assertion, but it does exclude that a “good life” might consist in dedication to certain ideals or causes that transcend, and may even contradict, individual well-being.

Now, what do these distinctions allow us to see about the post-humanizing trend in our culture? Where do trans- and post-humanism stand in terms of these distinctions? Seen from another angle, we might argue that post-humanist cultures represent an instantiation of those guiding distinctions, which unfold and shed light upon some of their implications. My thesis is twofold. On the one hand, I claim that a post-human world would be characterized by a specific idea of perfection, of what constitutes a better life in all respects. On the other hand, I argue that such a post-human idea of perfection must be coupled with the desire, and with the technical capacity, to make the world (including human beings themselves) fully disposable – i.e. to make everything available, manageable, customized, and
reproducible with the requisite traits. A particular idea of perfection and the quest for disposability constitute the core of a post-human relation-to-the-world. The latter statement means to assume a specific approach, according to which the post-human is defined as a special form of relation-to-the-world, including humanity itself. Of course, such a relation does not float in a cultural vacuum, but is linked with some well-known trends of (not only) Western modernity. The notion that human beings should struggle to improve themselves obviously has a long history, with many twists and turns.5 However, the fact that there is some continuity does not diminish the qualitative change in the transition to a post-human historical formation. In other words, I believe that a certain idea of perfection and the push to disposability are inherent traits of modernity, but late modernity, and even more post-humanism, intensify such a tendency, representing its “end state” or its ultimate consequence.

5 Two very different ways to approach such a complex history can be found in Taylor (2007) and Sloterdijk (2013). A full-blown discussion of this theme should deal with the issues raised in these works.

So, what is the idea of (human) perfection associated with HE and similar practices? For the present purpose, it is useful to frame the answer in the shape of the difference between perfection as regulative ideal vs. perfection as optimization (King, Gerisch and Rosa 2019: 1–3 and passim). The former is a moral ideal, and it is unachievable by definition. It provides normative orientation and involves a struggle to overcome one’s limits, building one’s identity along a path of further self- and social integration, towards a form of personal and social life that is inherently worth living. The latter clearly involves an instrumental logic. Its rationale is to set out to improve oneself continuously, in terms of performance in various relevant domains. Such an optimization is something that can be reached through the right kind of activity, and through the aid of technology. As a consequence, what could be called a perfectionist imperative tends to become ubiquitous in social space and time, spreading in most spheres of social life – even those that would inherently resist instrumental treatment – and defining goals that can be transcended over and over again.6

6 The various contributions gathered in King, Gerisch and Rosa (2019) spell out the ways in which the logic of optimization impacts on various domains of social and personal life, from working environments to sports and leisure. A whole research agenda derives from this insight, which might fruitfully be brought into dialogue with a theory of reflexive socialization and the morphogenesis of the self (Archer 2000; 2003; Donati and Archer 2015).

Perfection as optimization indicates a cultural syndrome, which finds some correspondence with the structural features of global competition and the related pressure. Such forms of conditioning become embedded in a manifold set of “anthropotechnics” (to use Sloterdijk’s words), like role models, biographical schemes (Maccarini 2019a), ideal life plans, self-help guides, and technological tools for self-optimization. Human subjects may internalize or reflexively elaborate on these structural and
symbolical elements in different ways, resulting in different forms of modus vivendi and related life trajectories. But my point here is that the idea of perfection as optimization is in a relation of complementarity7 with HE and other post-humanizing techniques. I hesitate to conclude that such a relationship is in principle a necessary one, because optimization existed before post-humanism came into the picture. However, in the current situation – that is, in the current morphogenetic cycle of technically advanced societies – it is hardly questionable that (i) post-humanism couldn’t spread and establish itself among the population without an underpinning culture of optimization as a goal human beings should pursue, and, reciprocally, (ii) optimization finds a tremendously effective tool in the vast panoply of enhancement and automation techniques.

7 I am obviously referring to the institutional configurations, as they are conceived within a morphogenetic approach (Archer 1995).

This constellation would have far-reaching consequences. One connotation is that following a regulative ideal means spending one’s lifetime in ongoing cultivation of foundational relationships and in endless discovery of the landscapes encountered along the path of one’s calling. This involves a continuous engagement with otherness (hetero-reference). On the contrary, optimization has to do with the capacity to include every “virtue”, that is, to enact every potential within oneself (self-reference).8 Which culture will prevail in terms of the practices of identity building among younger generations, given this symbolic variety, is a matter for empirical study – as well as for educational commitment. But at the end of this escalating feedback loop, human imperfection could be solved by transforming, or even abolishing, the human as a source of error and interference with continuously self-optimizing processes (King, Gerisch and Rosa 2019: 57).

8 This in turn echoes the notion of the “bulimic self” I bring up in my previous work (e.g. Maccarini 2019c, Chapters 4 and 5). This thread emerges again below. Although I regard it as an important insight, it is impossible to follow up on this theme in the present essay.

Now, can post-humanization be linked with what I called disposability? In the present theoretical context, this concept indicates the capacity to master, control, modify, produce, reproduce, and use something. On the contrary, something is called indisposable if it cannot be either produced or appropriated, if it escapes our control and, to some extent, even our predictive capacities.9 More precisely, disposability involves:

1 Making things visible (getting to know them);
2 Making things accessible, physically within reach;
3 Making things controllable, being able to master them;
4 Making things usable, useful to our purposes.

9 For further clarifications on the couple disposable/indisposable see Maccarini 2021, particularly note 4.
Hartmut Rosa has an interesting argument about (in)disposability, which is relevant to my present thesis.10 In his view, a totally mastered world would be a dead world. Life turns into meaningful experience – and can thus be called a fulfilling life – at the boundary between the disposable and the indisposable. In other words, life's appeal consists of a continuous game of reciprocity, of the ongoing relational tension between the need to encounter the indisposable and the effort to make it disposable, which runs through all domains of life and experience like a red thread (Rosa 2018: 8–9). Rosa goes on to claim that such is the drama of the modern form of life. The effort to make the world disposable, to bring it increasingly within reach, thereby expanding our range of activity – i.e. human agential powers11 – coupled with the awareness that only the ongoing, parallel presence of the indisposable makes human experience meaningful, lies at the core of modern dilemmas, both at the macro and meso levels and in personal biographies. Therefore, the balance between these two attitudes shapes a particular kind of relation-to-the-world, which is ultimately constitutive of personal and social identity. Moreover, such a form of connectedness also impacts on the "world" itself – particularly once technology makes it ontologically vulnerable to human agential power. Rosa concludes that late modernity tends to overemphasize disposability, and thus ends up producing the catastrophe of that relational equilibrium. In this way, the world becomes indisposable again, but in hostile or tragic forms (epidemics and climate change being good examples).

My thesis can be linked with this argument, since the post-human trend represents the epitome of such a predicament. Through post-humanizing processes, the inherent problems and paradoxes of increasing disposability are radically intensified. The mutual relation between disposability and the post-human technological constellation seems to be even closer than was the case with optimization. Without the relevant technical know-how, disposability as a general principle would play a quite limited role, whilst technology needs the idea of disposability to be institutionalized in society's cultural system for its procedures to be held legitimate.

To sum up, there is a strong connection between the expansion of post- and trans-humanizing techniques and an idea of human fulfillment that revolves around self-optimization and disposability of the world, as well as of the human body. Living a good, fulfilling life is increasingly perceived as depending on the chances to improve oneself in terms of performance, and on the availability of resources to increase one's instrumental mastery of ever greater portions of the world – including one's inner nature. Therefore, recalling the first guiding distinction I laid out, post-humanism seems to be opposed to flourishing, as distinct from enhancement.

10 The four-item list above is taken from Rosa (2018: 21–23). In the present discussion I draw mainly on his work (2016; 2018), which is the most systematic treatment of the concept in question in recent social theory.
11 This trend resonates with Sandel's (2007) notion of hyperagency. Indeed, the couple disposability/indisposability could well match Sandel's distinction between mastery and gift.
The other conceptual couple I have chosen as a guiding map in this analysis of cultures of the good life involves flourishing and calling. To speak of a calling clearly evokes a transcendent dimension, be it natural or supernatural – someone or something who calls. This dimension has been conceptualized in various ways in contemporary social theory. Let me just offer two quick examples. For Hartmut Rosa (2016), the focus is on the concept of resonance, defined as a relational mode in which the world speaks to us "with its own voice", and which is essential for meaning to be attached to human experience, thereby avoiding alienation. The latter is conceived as a relational state in which reality "has nothing to tell us", and we feel lonely and disconnected. In Hans Joas's words, such an experience of something that lies beyond, and is not controllable by, individual agency can be defined as the experience of self-transcendence (Joas 2008). Such experiences entail meeting someone or something overwhelming – be it in a disruptive or an exhilarating sense – and this opens the boundaries of personal identity, producing deep transformations in the person involved.12 What these different conceptualizations have in common is that they both attempt an affirmative answer to the question of whether or not our societies still need some sense of calling, which in turn entails a sense of alterity and transcendence. The consequences for the idea of eudaimonia are obviously important. Here again, the symbolical matrix of contemporary societies – at least in what we used to call the West13 – drives us to the crossroads of two diverging cultures:

(a) One defines the act of transcending as a recognition of one's limits and of the irreducible transcendence of otherness. Relationships multiply and magnify difference, unity generates other differences, and differences call for relationships. The more ego approaches alter, the more s/he remains in his/her difference, and the more this prompts ongoing exploration. Transcendence involves reaching out to someone or something external, and changing oneself in the process – i.e. leaving previous states to acquire new ones over (life)time.

(b) An alternative sense of transcendence occurs when ego tries to include everything within him/herself – a bulging self whose depth must be continually nourished through the swallowing of other experiences and alterity. The individual has no principled limit, but thrives upon the continuous effort to live different states simultaneously, without letting go of any previous way of being. In this context, the self is continually enhanced in his/her "component powers". Archer's selective imperative (Archer 2000; 2003) is rejected, as is the idea that one's life must have a definite shape.

All these are not just lexical details. Such distinctions can serve as tools to interpret the various cultures of the human good which take center stage in the contemporary world, and to study the ways in which they become entangled in the post-human trend. Whilst culture (a) above follows a neo-humanistic track, in culture (b) the idea of a "good life", conceived as immanent self-enhancement, clearly opens the door to a brand new way of making sense of one's place in the world, in which anthropology is superseded by anthropotechnics (Sloterdijk 2013).

12 By the way, this definition should clarify that the phrase "self-transcendence" is not to be interpreted to mean that individuals transcend themselves by themselves, i.e. by a pure act of their will or through their inner agential forces alone. On the contrary, experiencing something or someone "out there", and entering into a deeply engaging relationship with "it", is the essence of the process. This goes for religious experiences, natural ecstasy, or the bliss and euphoria of human love.
13 Reference to the West may sound like an unbearable bias. In fact, such a limitation is not due to any old-fashioned evolutionary assumption about (late or post) modernization, but is more humbly caused by the limits of my own competence and by the overwhelming complexity of taking a comparative approach to the issues in question in a short essay.
If post- and trans-humanism clearly seem to dovetail with immanent self-enhancement – and therefore to be conducive to the loss of any perspective of calling – the link should be illustrated at length by reference to such social phenomena as the family and the couple, the experience and cultural construction of adulthood, life course trajectories, lifelong education, and more. One instructive example concerning the sense of self-transcendence, and its relation to the idea of a good life, could be developed by elaborating on a recent survey conducted by Opinium Research for Kaspersky.14 The questionnaire asked whether people would be keen to consider human augmentation to improve their bodily features. It seems that Italians (81%), and more generally southern Europeans, were the most inclined to accept, while Brits (33%) and the French scored lowest. It would be interesting to cross-reference these results with birthrates in the various countries, to see whether populations who are more willing to become cyborgs are also those with the lowest propensity to have children. On this basis, divergent paths to self-transcendence might be identified.15 In the end, going back to the main argument, HE and post-human techniques seem to stand in opposition to an idea of the good life as calling.

14 The survey was submitted in July 2020 to 14,500 adults in 16 countries, among them Austria, Belgium, France, the United Kingdom, Germany, Italy, Spain, Portugal, Switzerland, the Netherlands, and more.
15 Although superficial observation might provide some evidence confirming a connection between the two attitudes, a cautionary note, and a cold shower of realism, must come from considering the small differences between birthrates in the countries I have mentioned. Thus, the possible significance of those differences and connections must definitely not be overestimated. Nonetheless, this remains an interesting path for future, more in-depth research.

A provisional conclusion: relations to the world as a key research agenda

Having matched some crucial points in my cultural map of "good life" ideals with the emergent post-human trend, it is now time for some concluding remarks.
First, we might wonder whether the categories chosen for our analysis were really helpful in providing instructive interpretations of the post-human trend. If a characteristic trait of humanity lies in the creation of meaning, which is generated and regenerated through meaningful relationships with an Other who is not just a projection of the Ego, but a true interlocutor, then the distinctions we deployed do highlight some interesting points. Their relevance lies in identifying a non-alienating form of relation-to-the-world, in the literal sense of a relationship that allows humans to remain human and not become something else. Non-alienating, reciprocal relationships (i) are distinctly non-disposable, and (ii) respond to some ideal of perfection – not reducible to optimization – which also involves a certain acceptance of imperfection. This is because the relevance and good to be found in and through such relations, as well as in the Others involved, do not lie in technical perfection. It makes sense to conclude that, beyond notions of well-being or fulfillment as dependent on long lists of factors, the post-human challenge leads to the idea that it is a certain quality of our relations to the world that must be changed, if hopes of a good life are still to be cultivated. My argument has tried to specify at least a few conditions that would make such relations not alienating, but humanly fulfilling. The emergent needs for self-empowerment, cooperation, and new forms and possibilities to "exceed" the current predicament can be interpreted and institutionalized in different ways, whose guiding principles may be grasped through the distinctions flourishing/calling and flourishing/enhancement. The hopes of a "good life" depend on a balanced combination of these polarities, without "catastrophic" exits from their relational composition towards radical conceptions. In this sense, human fulfillment or the good life can be said to appear as the emergent effect of flourishing and calling – more precisely, of flourishing within, and oriented to, a given sense of calling. This entails the emergence of new sources of the self, whose decisive feature lies in the relation individuals entertain with transcendence and the related notion of limit.

A second consideration concerns the intrinsic ambivalence of all social and cultural facts. It seems that my analytical narrative so far has not done justice to such ambivalence, having mainly emphasized the contribution of the post-humanizing syndrome to alienation, loss of meaning, and the overexpansion of instrumental rationality. It is a realist assumption that truth does not come in one book – even less in one chapter. Still, should I be less unilateral? The damage inherent in optimization and disposability as life ideals has been sufficiently exposed.16 But am I not missing the potential for human good in HE and other technical miracles? Is the idea of HE as a moral imperative, from which we started, nothing but a naturalistic misunderstanding of the human good? Are HE and all other kinds of post-humanization of our whole life texture bound to lead to alienation, and not to the widening of the human cognitive and moral horizon? Social ambivalence should be taken seriously.

16 See again the analyses in King, Gerisch and Rosa (2019) for a critical perspective.
But do we have a way to strike a balance or to make an adequate assessment? This issue can only be addressed by passing from post-humanism and HE in general to analyzing specific technologies in specific areas of activity. This cannot be the goal of this chapter, but let me present a quick example, for purely illustrative purposes. Education is a good case in point. In the functionalist perspective of optimization and disposability, the whole process of human development should be monitored, controlled, and governed by quantitative standards and evidence-based policy. This aligns a whole set of phenomena: the so-called "governance by standards" fostered by international standard-setting institutions,17 the growing emphasis on measurement, innumerable programs for self-improvement, counselling and support, an increasing dependence on experts for every aspect of psycho-social development across the socialization process and the life course, and more. Indisposability would seem to shrink in the shadow of "old bad" educational stuff. Yet educational success remains largely indisposable, since the measurement of knowledge and competence is usually much more successful and generalizable than educational methods and techniques. Moreover, student anxiety, frustration, and depression increase, making educational outcomes even less predictable. Finally, the range of meanings education can reveal is apparently being narrowed.

In this predicament, it is still unclear what AIs, educational robots, platforms, and all education technologies could do, and what they couldn't. It seems certain that all forms of EdTech are already changing the very process of learning, through a learning style that is no longer text- and reading-based. Furthermore, the daily routine of interacting with non-human entities in the educational context is transforming educational relationships, which are the essence of education itself. Once again, their possible effects are most often studied with a negative thrust, or in the vein of educational alarm, but they are really profoundly ambivalent. And once again, one crucial point consists in understanding how these changes affect relations to oneself and to the world among all participants in educational processes. To pile up a few examples: is it good that AIs should teach foreign languages, and other languages in general, like coding and programming? What do we mean when we say educational robots can enter into relationships with students, helping them to develop various skills, such as teamwork and collaborative thinking? Would it be good to use EdTech to teach students how to develop gender-neutral solutions, interacting in a gender-neutral way? And would it really be feasible? Besides possible distraction, alienation, and the disruption of established educational settings as social environments, EdTech could help chronically ill students keep up with their studies and avoid the social suffering due to absence from school, since schools lack cohesive mechanisms to face this challenge.18 For example, telepresence robots can help children to take part in school life remotely. On a different account, immersive technology could allow unprecedented experiences, like being a different entity, becoming someone else, and thereby expanding one's social and emotional capabilities – e.g. increasing empathy, walking in someone else's shoes and feeling his/her trauma, reducing bias and prejudice, living in a world with completely different rules, and so forth. That said, quite a different thing would be to accept that students' and teachers' cognitive abilities should be artificially enhanced. More generally, a sharp distinction should remain between mediating technologies – as in Peyton's case – and those that aim at substituting human relations with hybrid ones, where AIs play the role of teachers, friends, classmates, carers, trainers, etc. In the latter case, post-humanization means changing the whole set and quality of relationships through and within which humanity is constituted. It thereby triggers a deep but subtle change, which risks being underestimated, because the relational constitution of the human may itself be less than obvious.

At the end of this journey, let me throw a quick glance at the big picture. We might well wonder whether or not our analyses revealed some persistent "essence" of being human. In other words, is any form of human essentialism still a viable culture in the present societal constellation? Any sweeping generalization would be awfully simplistic. But there are at least two big issues this book series has highlighted and thoroughly examined. First, it has exposed the poverty of those post-humanistic perspectives that regard dynamic, relational approaches to the study of humanity as requiring the dismissal of personal ontology – and, vice versa, of the equally wanting ontologies which downplay our constitutive relations with ourselves and with the world-as-it-is. In fact, our studies show how a critical, emancipatory view of the human condition in contemporary global society needs to develop along the lines of a certain relational ontology. Second, we have identified some crucial pressure points in contemporary democratic systems. In this respect, the study of trans- and post-human cultures, structures, and practices represents a reagent, which helps to shed light on deep, long-term social trends, e.g. those concerning decision-making processes and the very idea of representation.

On the deepest level, humanity must now realize that uprooting its grounding relationships and holding them in its hands leads into an unknown land. But much of this cannot be demonstrated through scientific experiments. This fate, too, is indisposable, in the sense that the long-term consequences of these cultural and structural developments are unpredictable. Humans are now faced with the enigma of their identity in a particularly intense way. Choosing to remain human and choosing to step beyond historical humanity both entail embracing risk. A social scientist can only highlight the possible aftermath. The decision about which risks are worth taking, and what strategies are acceptable, will be a major challenge to human cultures and civilizations. What seems likely is that mastery and gift, disposability and optimization vs. flourishing and calling, are going to be the stuff of which the quality of our personal and social forms of life will be made.

17 The OECD and PISA tests are here the quintessential example of such a mode of governance.
18 Let me pick a non-scholarly, reader-friendly example here. Back in 2015, the Washington Post reported that ten-year-old Peyton Walton, who at the time of the report had a rare form of cancer, would control a telepresence robot remotely to connect her to classes she couldn't attend. With the robot, Peyton joined the daily activities, talked to teachers, and navigated her classroom, while her face showed up in real time on the iPad screen at her school in Montgomery County, Maryland. "For Peyton, the two-way robot system gives her a greater sense of normalcy, a stronger connection to friends, more focus on the familiar rhythms of childhood that preceded her whirl of medical treatments. The experience is being studied by officials in Montgomery County, where the technology has become a pilot programme", said the report.
References

Archer, M.S. (1995). Realist Social Theory: The Morphogenetic Approach. Cambridge: Cambridge University Press.
Archer, M.S. (2000). Being Human: The Problem of Agency. Cambridge: Cambridge University Press.
Archer, M.S. (2003). Structure, Agency and the Internal Conversation. Cambridge: Cambridge University Press.
Archer, M.S. (2019). Considering AI personhood. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix, pp. 28–47. London and New York: Routledge.
Bostrom, N. (2005). In defence of posthuman dignity. Bioethics, 19(3): 202–214.
Donati, P. and Archer, M.S. (2015). The Relational Subject. Cambridge: Cambridge University Press.
Fuller, S. and Lipińska, V. (2014). The Proactionary Imperative: A Foundation for Transhumanism. New York: Palgrave Macmillan.
Joas, H. (2008). Do We Need Religion? On the Experience of Self-Transcendence. Boulder: Paradigm Publishers.
King, V., Gerisch, B. and Rosa, H. (2019). Lost in Perfection: Impacts of Optimisation on Culture and Psyche. London and New York: Routledge.
Maccarini, A. (2019a). Post-human (life-)time: emergent biographies and the "deep change" in personal reflexivity. In I. Al-Amoudi and J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina, pp. 138–164. London and New York: Routledge.
Maccarini, A. (2019b). Post-human sociality: morphing experience and emergent forms. In I. Al-Amoudi and E. Lazega (Eds.), Post-Human Institutions and Organizations: Confronting the Matrix, pp. 48–66. London and New York: Routledge.
Maccarini, A. (2019c). The contingency of human flourishing: good life after modernity. In A. Maccarini, Deep Change and Emergent Structures in Global Society: Explorations in Social Morphogenesis, pp. 253–282. Dordrecht: Springer.
Maccarini, A. (2021). Being human as an option: how to rescue personal ontology from trans-humanism, and (above all) why bother. In M. Carrigan and D. Porpora (Eds.), Post-Human Futures: Human Enhancement, Artificial Intelligence and Social Theory. London and New York: Routledge.
Rosa, H. (2016). Resonanz. Eine Soziologie der Weltbeziehung. Frankfurt: Suhrkamp.
Rosa, H. (2018). Unverfügbarkeit. Wien: Residenz Verlag.
Sandel, M. (2007). The Case Against Perfection. Cambridge, MA: Harvard University Press.
Savulescu, J. (2009). The human prejudice and the moral status of enhanced beings: what do we owe the gods? In J. Savulescu and N. Bostrom (Eds.), Human Enhancement, pp. 211–247. Oxford: Oxford University Press.
Sloterdijk, P. (2013). You Must Change Your Life: On Anthropotechnics. Cambridge: Polity Press.
Taylor, C. (2007). A Secular Age. Cambridge, MA: Harvard University Press.
Taylor, C. (2012). What was the Axial revolution? In R.N. Bellah and H. Joas (Eds.), The Axial Age and Its Consequences, pp. 30–46. Cambridge, MA: Harvard University Press.
Teubner, G. (2018). Digital personhood? The status of autonomous software agents in private law. Ancilla Iuris, 106: 107–149.