The Ethos of Digital Environments
While self-driving cars and autonomous weapon systems have received a great deal of attention in media and research, the general requirements of ethical life in today's digitalizing reality have not been made sufficiently visible and evaluable. This collection of articles from both distinguished and emerging authors working at the intersections of philosophy, literary theory, media, and technology does not intend to fix new moral rules. Instead, the volume explores the ethos of digital environments, asking how we can orient ourselves in them and inviting us to renewed moral reflection in the face of the dilemmas they entail. The authors show how contemporary digital technologies model our perception and narration, as well as our conceptions of truth, and investigate the ethical, moral, and juridical consequences of making public and societal infrastructures computational. They argue that we must make the structures of digital environments visible and learn to care for them.

Susanna Lindberg is Professor of Continental Philosophy at the University of Leiden, Netherlands. Hanna-Riikka Roine is an Academy of Finland Postdoctoral Researcher at Tampere University, Finland.
Perspectives on the Non-Human in Literature and Culture
Series Editor: Karen Raber, University of Mississippi, USA
Literary and cultural criticism has ventured into a brave new world in recent decades: posthumanism, ecocriticism, critical animal studies, the new materialisms, the new vitalism, and other related approaches have transformed the critical environment, reinvigorating our encounters with familiar texts and inviting us to take note of new or neglected ones. A vast array of non-human creatures, things, and forces are now emerging as important agents in their own right. Inspired by human concern for an ailing planet, ecocriticism has grappled with the question of how important works of art can be to the preservation of something we have traditionally called "nature." Yet literature's capacity to take us on unexpected journeys through the networks of affiliation and affinity we share with the earth on which we dwell—and without which we die—and to confront us with the drama of our common struggle to survive and thrive has not diminished in the face of what Lynn White Jr. called "our ecological crisis." From animals to androids, non-human creatures and objects populate critical analyses in increasingly complex ways, complicating our conception of the cosmos by dethroning the individual subject and dismantling the comfortable categories through which we have interpreted our existence. Until now, however, the elements that compose this wave of scholarship on non-human entities have had limited places to gather to be nurtured as a collective project. "Perspectives on the Non-Human in Literature and Culture" provides that local habitation. In this series, readers will find creatures of all descriptions, as well as every other form of biological life; they will also meet the non-biological, the microscopic, the ethereal, the intangible. It is our goal for the series to provide an encounter zone where all forms of human engagement with the non-human in all periods and national literatures can be explored, and where the discoveries that result can speak to one another, as well as to scholars and students.

Transhumanism and Posthumanism in Twenty-First Century Narrative
Edited by Sonia Baelo-Allué and Mónica Calvo-Pascual

The Ethos of Digital Environments: Technology, Literary Theory and Philosophy
Edited by Susanna Lindberg and Hanna-Riikka Roine

For more information about this series, please visit: www.routledge.com/Perspectives-on-the-Non-Human-in-Literature-and-Culture/book-series/PNHLC
The Ethos of Digital Environments: Technology, Literary Theory and Philosophy
Edited by Susanna Lindberg and Hanna-Riikka Roine
First published 2021 by Routledge, 52 Vanderbilt Avenue, New York, NY 10017, and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 Taylor & Francis

The right of Susanna Lindberg and Hanna-Riikka Roine to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Roine, Hanna-Riikka, editor. | Lindberg, Susanna, editor.
Title: The ethos of digital environments : technology, literary theory and philosophy / edited by Susanna Lindberg and Hanna-Riikka Roine.
Description: New York : Routledge, 2021. | Series: Perspectives on the non-human in literature and culture | Includes bibliographical references and index.
Identifiers: LCCN 2020049806 | ISBN 9780367643270 (hardback) | ISBN 9781003123996 (ebook)
Subjects: LCSH: Technology--Moral and ethical aspects. | Automation--Moral and ethical aspects. | Artificial intelligence--Moral and ethical aspects. | Digital media--Moral and ethical aspects. | Ethics, Modern--21st century.
Classification: LCC BJ59 .E878 2021 | DDC 174/.9302231--dc23
LC record available at https://lccn.loc.gov/2020049806

ISBN: 978-0-367-64327-0 (hbk)
ISBN: 978-0-367-64332-4 (pbk)
ISBN: 978-1-003-12399-6 (ebk)

Typeset in Sabon by Taylor & Francis Books
Contents

List of figures
List of contributors

Introduction: From Solving Mechanical Dilemmas to Taking Care of Digital Ecology
SUSANNA LINDBERG AND HANNA-RIIKKA ROINE

Should a Self-driving Car
EINO SANTANEN, TRANSLATED BY KASPER SALONEN

PART I: Digital Ecologies Today

1 Three Species Challenges: Toward a General Ecology of Cognitive Assemblages
N. KATHERINE HAYLES

PART II: The Ethos: Description and Formation

2 Viral Storytelling as Contemporary Narrative Didacticism: Deriving Universal Truths from Arbitrary Narratives of Personal Experience
MARIA MÄKELÄ

3 Authorship vs. Assemblage in Digital Media
HANNA-RIIKKA ROINE AND LAURA PIIPPO

4 The Logic of Selection and Poetics of Cultural Interfaces: A Literature of Full Automation?
MATTI KANGASKOSKI

5 Ghosts Beyond the Machine: "Schizoid Nondroids" and Fictions of Surveillance Capitalism
ESKO SUORANTA

PART III: The Ethos: Entanglement and Delegation

6 The Zombies of the Digital: What Justice Should We Wait For?
FRÉDÉRIC NEYRAT

7 Just Machines. On Algorithmic Ethos and Justice
SUSANNA LINDBERG

8 Automation: Between Factuality and Normativity
MARC-ANTOINE PENCOLÉ

9 How Agents Lost their Cognitive Capacities within the Computational Evolution of Market Competition
ANNA LONGO

10 Thinking about Google Search as #DigitalColonialism
JOSHUA ADAMS

PART IV: The Ethos: Thinking, Computing, and Ethics

11 The Light of Morality and the Light of the Machine
FRANÇOIS-DAVID SEBBAH, TRANSLATED BY AENGUS DALY

12 What Do We Call "Thinking" in the Age of Artificial Intelligence and Moral Machines?
ANNE ALOMBERT

13 Can a Machine Have a Soul?
DANIEL ROSS

14 The Chiasm: Thinking Things and Thinging Thoughts. Our Being with Technology
LARS BOTIN

Index
Figures

4.1 A screenshot by Matti Kangaskoski of an "opening window" of a bookshop's website
4.2 A screenshot by Matti Kangaskoski of Rupi Kaur's poem "responsibility" on her Instagram account
4.3 "Please rate your satisfaction with your experience today" at Food Republic. Photo by Matti Kangaskoski
4.4 "How clean are these toilets?" at Heathrow Airport. Photo by Matti Kangaskoski
4.5 "How was your immigration experience today?" at Heathrow Airport. Photo by Matti Kangaskoski
14.1 The lemniscates: constantly expanding and intensifying
Contributors

Joshua Adams, MA, writer and journalist; formerly Assistant Professor of Communications at Salem State University, USA.
Anne Alombert, PhD, teacher of philosophy, Université Catholique de Lille, France.
Lars Botin, PhD, Associate Professor of Planning, Aalborg University, Denmark.
N. Katherine Hayles, PhD, James B. Duke Professor of Literature Emerita, Duke University, USA; Distinguished Research Professor, University of California, Los Angeles, USA.
Matti Kangaskoski, PhD, University of Helsinki, Finland; scholar and poet.
Susanna Lindberg, PhD, Professor of Continental Philosophy at Leiden University, Netherlands.
Anna Longo, PhD, Program Director at the Collège International de Philosophie, Paris.
Maria Mäkelä, PhD, Senior Lecturer in Comparative Literature, Tampere University, Finland.
Frédéric Neyrat, PhD, Associate Professor in Comparative Literature and Mellon-Morgridge Professor of Planetary Humanities, University of Wisconsin-Madison, USA.
Marc-Antoine Pencolé, PhD candidate in Philosophy, Nanterre University, France.
Laura Piippo, PhD, University Teacher in Literature, University of Jyväskylä, Finland.
Hanna-Riikka Roine, PhD, Academy of Finland Postdoctoral Researcher, Tampere University, Finland.
Daniel Ross, PhD in Philosophy, commentator and translator of Bernard Stiegler's work, Australia.
Eino Santanen, writer, Helsinki, Finland.
François-David Sebbah, PhD, Professor of Philosophy, Nanterre University, France.
Esko Suoranta, PhD candidate in English Philology, University of Helsinki, Finland.
Introduction: From Solving Mechanical Dilemmas to Taking Care of Digital Ecology

Susanna Lindberg and Hanna-Riikka Roine
Traditionally, philosophers as well as specialists of religion and justice have thought that morality and ethics are prerogatives of human beings alone. Whether one advocates virtue ethics, hedonism, consequentialism, deontology, pragmatism, or any other of the modern textbook ethical theories, all of them rely on an autonomous, conscious, and responsible human subject. From this perspective, technical objects, including machines, appear as tools – as means of human moral action without inherent moral qualities. If someone is killed, the blame falls upon the person who pulled the trigger, not upon the gun.

It has been argued, however, that technologies are much more than tools, since they innervate the entire lifeworld we live in (Ellul 1964; Feenberg 1999). Modern industrial technologies, in particular, "shape us and our social and ecological world as much as we shape technology" (Sandler 2014, 2). Today we, both as individuals and as societies, are becoming increasingly entangled in environments produced by computation, which are rapidly developing into complex, self-learning, and self-evolving systems. If technical objects are regarded as systems or, rather, as constituting entire environments instead of being simple tools, technics itself has moral effects (Verbeek 2011). In other words, digitalization shapes not only our material environment but also our cognitive and social spaces. Does this mean that the ethical dimensions of our cognitive and social activity can be computed as well?

Interrogations of the morality of technical systems have arisen in the context of a range of systems, from self-driving cars to autonomous weapon systems, in fields of research such as robot ethics (e.g. Lin et al. 2017), and these discussions are being extended to cover all sorts of digital systems. However, it seems to us that the ethical role of technical objects, systems, and environments cannot be fully discerned without rethinking the very notion of ethics, even if this goes against the tradition.

The omnipresence of sensitive humanoid robots in science fiction narratives illustrates the ways in which the question of whether machines can be moral agents and even ethical and political subjects preys on our minds. A viewer of films and television series such as Blade Runner (1982), Star Trek: The Next Generation (1987–93), Battlestar Galactica (2004–9), Real Humans
(2012, Äkta Människor) or Westworld (2016–) might be inclined to grant androids moral rights. However, a viewer speculating on 2001: A Space Odyssey (1968), The Terminator (1984) or The Matrix (1999) may wonder whether it would be more prudent to prevent the apotheosis of artificial intelligence – or the so-called Technological Singularity (Kurzweil 1990) – before it is too late. It is important to understand, however, that despite what both fictional and nonfictional mainstream narratives tell us (see e.g. Cave & Dihal 2019), the contemporary machines prompting moral and ethical considerations are not human-faced robots.

The emblematic study of machine morality is the MIT Moral Machine experiment (http://moralmachine.mit.edu). The Moral Machine is a public online platform that presents a thought experiment to the user, postulating a self-driving vehicle that encounters a situation in which it is obliged to choose between two actions, both of which result in killing people. The platform then asks the user to respond: how should the vehicle choose? In other words, how should the vehicle be programmed to determine the lesser of two evils? Should the car sacrifice, say, its passenger or a pedestrian? Two law-abiding elderly persons or four jaywalking children? A poor man or a rich woman? And so on. On the platform, everybody can give their opinion on the value of different human lives.

Most people appear to find this experiment – which really is merely an online version of the old philosophical trolley problem – fundamentally unsatisfactory and even immoral. Eino Santanen's poem "Should a Self-Driving Car," published for the first time in this volume, investigates the cold conclusions of the experiment. The Moral Machine experiment is insufficient because, in reality, machines function in complex environments that cannot be reduced to a binary choice. It is fundamentally immoral because, by definition, one cannot calculate the greater or lesser value of different people, as human life is an invaluable end in itself. Indeed, science fiction author Isaac Asimov's famous "Three Laws of Robotics"1 hold a deeper wisdom than the MIT experiment: according to Asimov, a robot should never sacrifice a single human being. It can cause a human death by accident, but not by choice.
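What "programming the lesser of two evils" would amount to can be made concrete with a deliberately crude sketch. The code below is our illustration, not anything published by the MIT platform, and every numeric weight in it is an arbitrary assumption; that arbitrariness is the point, since the calculation runs perfectly well while performing exactly the quantification of human lives that the experiment's critics reject.

```python
# A deliberately crude, hypothetical sketch of "choosing the lesser evil":
# weight lives numerically and compare totals. Every weight is arbitrary --
# which is precisely the moral problem, not a bug in the code.

WEIGHTS = {"child": 1.2, "adult": 1.0, "elderly": 0.8}  # indefensible by design

def casualty_cost(group):
    """Sum the invented 'life values' of everyone killed under one option."""
    return sum(WEIGHTS[person] for person in group)

def choose(option_a, option_b):
    """Return the option with the lower total 'cost' of casualties."""
    return "A" if casualty_cost(option_a) <= casualty_cost(option_b) else "B"

# Two law-abiding elderly persons (option A) or four jaywalking children (option B)?
print(choose(["elderly", "elderly"], ["child"] * 4))  # -> "A": the car 'sacrifices' the elderly
```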
Be that as it may, self-driving cars are being developed and constructed, and they have already been involved in lethal accidents. The problem has therefore become a real one instead of being merely theoretical, one that legal scholars in particular are examining. If a self-driving car kills a person, who should get the blame? The car? Its constructor? The legislator who has given the car a license? Pedestrians who do not abide by the traffic rules? (For a comprehensive study of the juridical status of AI, see Kurki 2019, and for a rich survey of case studies, see Beever et al. 2019.)

Finally, what is even more important from the perspective of this volume is that the moral machines of today cannot be limited to objects recognizable as discrete actors and then evaluated by humans – such as self-driving cars. Instead, we must expand our perspective towards complex human-technical systems that deeply affect what we see and how we see it (cf. Latour 2002).
The most salient features of the contemporary technological environment are not objects like robots and self-driving cars, but the general digital environment of human activity. So-called artificial intelligence – which actually consists of machine learning systems – has also considerably boosted the digital sphere. Yuk Hui defines contemporary digitality lucidly: "By digital objects, I mean objects that take shape on the screen or hide in the back end of a computer program, composed of data and metadata, regulated by structures or schemas" (Hui 2016, 1). The digital environment is made of networks that spread beyond the horizon, as well as of their computational substructure, which their users cannot see and which most users do not even comprehend. Insofar as machine learning systems do not merely repeat predetermined programs but function on the basis of a recursivity that includes contingency, this impossibility of understanding everything that happens in a system becomes a structural feature of the machine (Hui 2019). For all these reasons, today's digital media function as the unthought condition of an increasing portion of contemporary life (see Hayles 2017).

Such a condition cannot be located in a singular object. It is the very opposite of the human-faced robot: the pervasive "environmentality" (Hörl 2017) of digital technologies is part of our daily life, as not only is digital media spreading into every other technology, but humans and computational agents also collaborate in networks that no individual subject or group directly controls or manipulates. Such collaboration operates at levels both "beyond" and "below" the scope of human awareness (see Galloway & Thacker 2007; Hansen 2015), thus obscuring the many effects it has on human ethics and justice. These effects are growing exponentially, and they urgently need to be conceptualized within contemporary, technologically developed societies, where life is entangled in digital environments even when we are not directly in contact with them through digital devices or interfaces.

The aim of this volume is to articulate the ethical and political stakes of contemporary digital reality. It goes almost without saying that these cannot be explained in terms of an ethical theory based on an autonomous, conscious, and responsible human subject – or its robotic double. Above all, this book aims at discovering and describing the ethical and political effects of contemporary digital reality. Only when these are brought forth is it possible to look for solutions. Let us first give a quick overview of the contemporary digital reality and some of its most important ethical and political worries and discontents, which loom behind the articles in this volume.
The Wide World of Computation and Its Discontents

The worldwide, albeit unequal, presence of digital technologies has become a given in the contemporary world. While these technologies are known almost everywhere, access to them has become a new human right in all but name. Through the rise of the internet and the world wide web, digital
technologies have become a major vehicle for participation in both global and local venues. Access Now, an international nonprofit group dedicated to an open and free internet, begins their Human Rights Principles for Connectivity and Development as follows: "Internet connectivity is essential for economic, social, cultural, political and civic participation in the digital age" (2016, 2). With falling prices, access to digital technologies – mobile phones, in particular – has gradually become more equal. Currently, it is estimated that about 5 billion of the 7.1 billion world population own mobile phones, and these phones have turned out to be an important factor of access to better living conditions for many people who cannot afford a computer.2 The increasingly common digitalized commercial services worldwide do not concern only the wealthy: for example, in many African countries and now even in India and Eastern Europe, the originally Kenyan mobile payment service M-Pesa, which functions through mobile phones, has provided access to banking services to those who were previously too poor to obtain a bank account.

Yet, as the recent "Contract for the Web" initiative started by the inventor of the world wide web, Tim Berners-Lee, puts it: "Half of the world's population still can't get online. For the other half, the web's benefits seem to come with far too many unacceptable risks: to our privacy, our democracy, our health and our security."3 In other words, access to high technology is not the only ethical problem we face today: its very structure and use present serious issues as well.

The architecture of the digital reality presents different kinds of risks. As Bruno Latour (2002) has argued, technical devices are not simply passive recipients of human intentions but rather active agents that change the landscape within which human choices are formulated and carried out. The design of the computational systems that are being implemented everywhere is far from neutral, as these systems determine what the world looks like today. The internet is an excellent example of a technical device invented for one purpose and then transformed into something entirely different: first designed to facilitate communication among scientists, it quickly morphed into the web, and it has now transformed everything from human sociality to entrepreneurship and marketing, as well as social services, public law, and administrative justice (Alston 2019; Tomlinson 2019). Furthermore, the infrastructure of the digital reality is designed by a handful of private enterprises that are mainly driven by their commercial interests, never mind the psychological, social, economic, and even political price that individuals and communities end up paying. As Shoshana Zuboff (2019) has shown, relying on numerous concrete examples, digitalized "surveillance" capitalism can cause all sorts of economic exploitation and exclusion, while Bernard Stiegler argues in his Disbelief and Discredit series (2011; 2012; 2014a) that digital capitalism can insidiously produce "general proletarianization" and stupidity.

The web was designed to bring people together and make knowledge freely available, but this noble goal has given way to worries about the privacy of
our data and security, as well as concerns over social alienation and growing divisions within societies. Personal data about us as individuals, and especially about us collectively, present enormously lucrative opportunities for various actors (Zuboff 2019). Great weight is put on the possibilities of "big data" for generating economic growth as well as for enhancing well-being through supporting medical research, at the risk of silencing arguments concerned with privacy and autonomy (see Snell 2019). While primarily generated for commercial marketing purposes, citizens' internet engagement has provided an exponentially expanding source of data for both political campaigns and foreign disinformation campaigns – as is evident from the Cambridge Analytica/Facebook scandal and its implications for democracy (see Downes 2018).

Calls for more ethical handling of personal data by businesses and organizations have become more prominent, and legislation such as the EU General Data Protection Regulation (GDPR) is establishing the legal boundaries for what can and cannot be done with personal data. Moreover, emerging forms of data activism are developing social imaginaries that promote new practices by employing data technology to fulfill the aims of social justice and political participation (Lehtiniemi & Ruckenstein 2019). For instance, MyData, a data activism initiative originating in Finland, aims to shape a more sustainable, citizen-centric data economy, contrasting with the dominant economic logic embodied by the US data giants and promising to combine the "industry need [for] data with digital human rights" (Poikola et al. 2015).

In his recent report to the United Nations General Assembly on extreme poverty and human rights, human rights lawyer Philip Alston expresses concerns similar to those of the MyData activists. He points out that "governments have certainly not regulated technology industry as if human rights were at stake" (2019, 13), and criticizes the fact that governments too readily leave their regulatory responsibilities to big tech – which is not even self-regulated through free market mechanisms, as it is a deeply anti-competitive sector. Even when motivated by the best of intentions, "those designing artificial intelligence systems in general, as well as those focused on [the] welfare state, are overwhelmingly white, male, well-off and from the global North" (ibid., 22). These are significant ethical concerns, focusing not only on the autonomy and participation of individual citizens in today's data-intensive information society, but also on the dominance of a few big technology companies and of a small, relatively homogeneous group of people making decisions that affect the lives of everyone.

Yet another well-known problem of the digital reality is the so-called black box effect: whether willed by the designers or not, it will become more and more prominent with the development of machine learning. Sometimes the functioning of specific algorithms may be opaque to the larger public for the simple reason that they are trade secrets, owned by huge technology companies responsible for the technological infrastructure that frames and conditions our lives (see Stiegler 2010b). However, even the very structure of the new forms of machine learning may be such that the algorithms develop in ways that are
unpredictable to their designers (Hui 2019). This is the fundamental meaning of the black box effect: machine learning algorithms that process huge amounts of data at great speed and, even more importantly, are capable of developing through feedback, are such that the human users see the input and the output but not what happens in between – in the "black box." As a result, it may be well-nigh impossible to know why and how an automatic decision-making system has arrived at a certain decision. As recent EU and UN reports have pointed out, it is risky to use such procedures in administration: if a machine's decisions cannot be accounted for or audited, this goes against the law and against the general sense of justice (Villani et al. 2018; Madiega 2019; Alston 2019).
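The structure of the problem can be sketched in a few lines of code. In the hypothetical example below (assuming Python with the scikit-learn library and purely synthetic data), the input and the output of a trained machine learning model are plainly visible, while the "in between" consists of thousands of numeric weights shaped by feedback from the training data, with no human-readable rationale attached to any of them.

```python
# A minimal sketch of the black box effect: input and output are visible,
# but the learned parameters in between do not read as decision rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000, random_state=0)
model.fit(X, y)  # the feedback phase: weights adjust themselves to the data

sample = X[:1]
print("input: ", sample[0][:5], "...")   # visible
print("output:", model.predict(sample))  # visible
# The "in between": thousands of real-valued weights with no stated rationale.
print("hidden:", sum(w.size for w in model.coefs_), "numeric weights")
```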
The infrastructure of digital technologies is opaque not only because it is complex: some programs actually produce content as well as influence our actions, unbeknownst to their human users. In addition to collaborating and sharing the web with other people, we coexist with nonhuman computational actors such as bots, whose key feature is not to match, but to exceed, human capacity. As Ed Finn (2017) has argued, this is more than a collaboration: it is a kind of co-identity, as we are adapting ourselves to become more knowable to algorithmic machines. Lately, more and more attention has been directed towards co-identity of this kind: for example, a recent study suggests that YouTube's algorithm-driven search and recommendation system appears to "have systematically diverted users to far-right and conspiracy channels in Brazil" (Fisher & Taub 2019), until the far right won the elections. Moving our attention away from, for example, individual human users writing – or moderating – hateful messages towards the larger logics of a system that itself generates, circulates, and intensifies such hatred is already suggested in contributions that attempt to rethink human–machine relations in internet-based environments (e.g. Ruckenstein & Turunen 2019; see also Chapter 3 by Roine & Piippo in this volume).

Finally, a question which has attracted less general attention, but which new literary and media studies can help us to understand, is the effect of new technologies on our very thinking. Not only are digital technologies spreading to areas such as commerce and governance, but our core cultural practices of reading, writing, conversation, and thinking are also fast becoming digital processes. In the larger field studying writing and storytelling on the web, the focus has been on how these environments transform our understanding of compelling narratives – or narratives that are worth telling – and on the consequent "life-tellings of the moment" (e.g. Georgakopoulou 2017; Page 2018). While these are, of course, topics worth analyzing, we need to go further to conceptualize the changing ethical dimension of writing and reading stories in digital media. Mark B.N. Hansen (2015) has, among others, urged us to recognize how "twenty-first century media" differs from previous forms of media, and to turn away from the equation of experience and content towards the examination of how relations are composed between technical circuits and human experience. This has serious consequences for the concept of authority and the ethical questions tied to it, traditionally understood in terms of an individual rather than of a collectivity formed by both human and nonhuman actions. Just as our objects of study adapt to us as we interpret them (cf. Finn 2017), this process of adaptation on the various platforms takes on a dynamic role in "co-authoring".

The feedback loop between our actions and the environments adapting to us (such as social media services and various forms of digital art) has been discussed from multiple perspectives. Researchers such as Lev Manovich (2013) have argued for looking beyond media surfaces to the layer of software: while in physical media adding new properties means modifying the medium's physical substance, in digital media new properties can always be easily added – or even new types of media invented – simply by changing existing software or writing new software. As a result, it is not enough for us to understand the creation of media as writing text or composing images; we must understand it as authoring new processes and designing structures of responsive behaviors (see Wardrip-Fruin 2009; Murray 2011). Lori Emerson's term readingwriting (2014) calls our attention to a similarly fundamental shift in the arts: due to our constant connection to networks, media poetics is fast becoming a practice of writing through the network. This network, in turn, tracks, indexes, and algorithmizes everything we enter into it, thus constantly reading our writing and writing our reading.

Our individual choices cannot necessarily be distinguished from digital environments such as the networks described by Emerson. Algorithms developed by training neural networks on large datasets play an important part in making aesthetic judgments and in keeping us engaged with content through various recommendation and filtering routines. Furthermore, as the algorithms of the cultural interfaces aim to predict what we desire, the feedback loop tightens in order to capture what the user wants to see, read, and hear before any conscious engagement even takes place. The speculative end point of this logic, as suggested by Matti Kangaskoski in Chapter 4 in this volume, is that the futile act of selection comes to represent individual will and freedom. At this stage of development, however, it is evident that the digital turn constantly affects media poetics and our sense of aesthetics, and that these effects need to be interrogated.

Thinking (and) Computing

If this is what the wide world of computation looks like today, how may we then conceptualize it from philosophical and aesthetic perspectives? How may we reformulate ethics, morality, and justice in the functions of digital technology? In order to describe the specific nature of machine ethics, one can start by asking what characterizes machine thinking in general and what differentiates it from human thinking, including moral reflection. For instance,
when a machine chooses between two apparently moral options, is its choice based on moral reflection, or does it merely calculate a rational outcome? When a machine engages in an action, does it act freely, or does it only follow the necessary path determined by its programming? With the concepts of classical Enlightenment philosophers such as Immanuel Kant, one should argue that because the machine does not really act and reflect freely, we cannot call its action moral in the proper sense of the word. A machine that merely follows its programming and calculates the most profitable outcome cannot be seen as responsible for its actions or guilty of crimes. However, as François-David Sebbah shows in Chapter 11 in this volume through a comparison of Martin Heidegger's theory of technics and Emmanuel Levinas's theory of ethics, the ethical and the technical are not two different regions of life but two perspectives that fundamentally intertwine – or, as Sebbah puts it, they are two "lights" shed on the same world.

In the contemporary digital reality, it is impossible to maintain a straightforward distinction between machine and human thought, because today's machines take part in the very process of reflection or replace it altogether, as Susanna Lindberg shows in Chapter 7 through the example of "sorting algorithms" (which select people for jobs or higher education), and as Anna Longo shows in Chapter 9 through the case of the computational algorithms used in high-frequency trading. In Chapter 8, Marc-Antoine Pencolé studies the delegation of decisions to automatic processes in the context of internet communities. These examples also show why the direct identification of machine and human thought can result in irresponsibility and even injustice. While the difference between computation and human thought has thus been displaced and has become less obvious, it is important to rethink the criteria of their difference, as shown in the contributions of Anne Alombert (Chapter 12) and Daniel Ross (Chapter 13). Only if we know how to conceptualize the specificity of the human in relation to new digital environments can we learn to take better care of these environments together.

As we have already suggested, in order to rethink ethics in the digital world it is important to abandon, or at least rethink, the current individualist theories of ethics and to conceptualize new ways of taking care of human beings and their nonhuman environment, as Stiegler (2010b) puts it. Furthermore, as N. Katherine Hayles notes in Chapter 1 in this volume, the foci of traditional ethical frameworks are not appropriate for contemporary, technologically developed societies: one obstacle is their focus on individuals rather than collectivities; another is the predominant role of "free will". On the one hand, are people really as free and rational as classical philosophers would expect them to be? Are their actions not conditioned by their technological context – as Hayles has argued already in How We Became Posthuman (1999) and Stiegler in his seminal work Technics and Time I–III (1998; 2008; 2010a)? On the other hand, if
machines do have an effect on moral reality, should this not be taken into account even if their effect is not intentional but based on unthought influence (Hayles 2017)? In our view, machines should be evaluated in moral and juridical terms as soon as they contribute to situations that are experienced in terms of justice and injustice. These include autonomous weapon systems and algorithms used in financial markets, education, and recruitment, as well as the particularly insidious everyday use of search engines that recycle harmful ideologies, as shown by Joshua Adams in his study of the colonial gaze prevailing in Google Search. As Hayles suggests, an ethics that concerns itself only with humans is simply not adequate for our present situation.

Along with Hayles's formulation of the cognitive assemblage, the recent revival of cybernetic (e.g. Erich Hörl, Yuk Hui), system-theoretical (Donella H. Meadows), and network approaches (Alexander Galloway and Eugene Thacker), as well as their development into new approaches such as general ecology (Hörl) and the hyperobject (Timothy Morton), suggest that we have begun to understand ourselves, our experiences, and our cognitive processes as being embedded in larger environments. Contemporary, digitalized reality does not appear as a uniform system that integrates humans and nature as simple passive resources, as in the somber dystopian hypotheses of Martin Heidegger, Jacques Ellul, and Theodor Adorno in the middle of the 20th century. Instead, it surrounds and carries us like an environment in which we live and in which we also take part actively, although not necessarily consciously.

Today, technology marks the entire environment in such a way that we live in a techno-ecology, as Hörl puts it in his introduction to General Ecology: The New Ecological Paradigm (2017), or in a techno-nature, as Susanna Lindberg (2020) puts it. In other words, our relation to nature as well as to ourselves is entirely mediated by technology, as shown for instance by Jean-Luc Nancy in After Fukushima (2015) or by Frédéric Neyrat in Biopolitique des catastrophes (2008) and La part inconstructible de la terre (2016). Therefore, if we do not take technology into account, we cannot understand and evaluate our situation. This does not mean that we can control nature by means of technology – on the contrary, the explosive growth of technology is an essential element in the emergence of technologically provoked natural phenomena such as the climate crisis or the sixth mass extinction, which are unwanted and unplanned consequences of the industrial revolution. We cannot control the human being, either, in the manner of a transhumanist dream where man is enhanced into an unprecedented creature invented by himself. Instead, technology, and digital technology in particular, constitutes our ecological niche in a way that needs to be made visible, evaluable, and maybe transformable.

The word "ecology" helps us understand how technology surrounds and supports us. Unlike ecology in the classical sense, modern techno-ecology does not constitute only our natural context, but also our social and
cognitive environment (cf. Guattari 2000). The digital environment does not surround us like a culture but rather like a kind of infraculture. While a culture consists of "spiritual" things like significations, meanings, and values, the digital infraculture is made of calculations and software that do not produce values but merely follow algorithmic orders. It does not think but performs nonconscious cognition, as Hayles (2017) puts it in Unthought: its operations make up the unthought that constitutes an independent level of intellectual-like operations between physical processes and conscious thinking. Because nonconscious cognition constitutes our techno-ecological environment, it affects our ethical and political situation in the various ways mentioned above.

Because of its effects on the domains of ethics and justice, technology is finally also a political matter. However, it is hardly the object of political evaluation and decision in the way it should be, as human rights lawyer Philip Alston (2019) has suggested. Bernard Stiegler, in particular, has noted the political character of contemporary technology, especially of digital technologies: today, the technological infrastructure constructed by the GAFAM (short for Google, Amazon, Facebook, Apple, and Microsoft) frames and conditions practically everyone's life. The world they construct is pleasant in many respects, but its users have neither chosen nor designed it. On the contrary, it shapes them, particularly the youth and the children who grow into it, as Stiegler argues in Taking Care of Youth and the Generations (2010b). Contemporary digital reality is primarily constructed in order to profit these enterprises, not to emancipate individuals and communities to do whatever these technologies are virtually capable of enabling us to do. Hence our reality is haunted by what Frédéric Neyrat calls the "zombies and the spectres of the digital" in his article in this volume. He argues that the very structure of digital reality oppresses certain areas of life, while others easily exploit the situation – and where there is exploitation, there will also be rebellion. The politics of the digital reality transcend most national politics, which is why it is also difficult to discuss them in traditional nation-centered political contexts. Stiegler and Neyrat show how the logic of capitalism largely runs these politics. However, they also ask whether these politics could be run otherwise. Could our contemporary techno-ecology be cared for so that it would serve individual and collective freedom and creation above all else?

Several contributions to this volume – Roine and Piippo (Chapter 3), Mäkelä (Chapter 2), Adams (Chapter 10), Kangaskoski (Chapter 4) – investigate different domains of the new digital everyday world consisting of tweets, Google searches, snapshots, literary interfaces, and the like, where new ways of identification and community-building are already at work. New political forms are doubtless being generated in these domains that are very unlike classical political institutions, for they are a curious combination of privacy and vast publicity. These digital communities are places where sense is made, communication takes place, violence is unleashed, and power is exerted. Today they often
appear to be unlawful, unruly spaces where traditional politics falls into disgrace. Could they also become spaces of caring for people and for the world?
Possibilities of Ethics in the Digital Infraculture

While questions related to self-driving cars, autonomous weapon systems, or algorithms based on neural networks have received a great deal of attention in the media, the general requirements of ethical life in the contemporary digital reality have not been made sufficiently visible and evaluable. The articles collected in this volume do not intend to fix new moral rules. Such rules would probably not be long-lived, as the digital reality changes so quickly. Instead, the articles point to the need to practice one's moral skills rather than to adopt definitive maxims. They invite us to maintain constant vigilance in the ever-changing environment, and to renew moral reflection in the face of unheard-of moral dilemmas. This is how the articles propose to help us orient ourselves within the new digital infraculture. They distinguish five areas to which one should pay attention when exploring new ethical and political situations.

Firstly, digital technologies shape perception, for they do not just function as mediators but also change the ways in which we see the world, transforming the spectrum of possibilities within which human intentions and choices are conceived (see Latour 2002; Verbeek 2011). Smartphones increasingly encourage us to relate to ourselves and others through images rather than words. These images are not the instants of naked reality they present themselves as, but are often carefully framed, filtered, and fabricated messages. Digital technologies are also used to communicate verbal messages, but they favor short slogans and "reactions" rather than long explications, and they generate new forms of narration. One can hardly place such momentary expressions of self within the framework of moral acts. Nonetheless, they contribute to the formation of a particular kind of ethical character that favors a constructed modular identity – we can call this a fabricated or an artificial self, and note its ludic qualities (see Frissen et al. 2015) – rather than the classical ethical virtues of authenticity and sincerity. In Chapter 2, Maria Mäkelä shows more exactly how the mechanisms of social media storytelling distil universal truths from arbitrary stories of personal experiences, building on strong moral positioning.

Secondly, digital technologies shape knowledge, not only because thinking reflects digital means of expression, but also because an increasing amount of content is being produced by digital means. Sciences and media have adapted to digital tools, and, as Hansen (2015) has argued, our focus has shifted away from past-directed recording platforms and storage toward a data-driven anticipation of the future. Furthermore, as Isabelle Stengers (2000) has shown, science has always reflected the instruments available: contemporary sciences and even the humanities lean heavily on the power of computation, computational modelling, and the treatment of big data. Today,
the distribution of scientific results has also undergone profound transformations. Printed text is, in principle, disappearing and open access publishing is becoming the rule, but at the same time, the problems of access, copyright, and validity are taking new forms. Information and entertainment media have found similar ways to adapt to new technologies. However, if content production and distribution have become easier, so have the production and distribution of degrading images, lies, fakes, and malevolent rumors. The realm of illusion has expanded as quickly as the realm of information, if not more quickly. Today, the problem is not really a lack of information, but the difficulty of judging its reliability. Search engines cannot tell the difference between truth and fake news. Moreover, both search engines and social media platforms are run by algorithms, which have the potential to create echo chambers and isolate users within so-called "filter bubbles". As a result, instead of discovering new ways of seeing things, users are repeatedly pushed back upon their old preferences, as algorithms try to predict what we want to see.
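How quickly such a loop tightens can be illustrated with a toy simulation. In the hypothetical sketch below (our invention; the "recommender" is nothing more than a frequency counter), a system that always serves the user's historically most-clicked topic, to a user who usually accepts what is served, collapses initially varied interests into a single dominant topic within a hundred steps.

```python
# A toy simulation of a preference feedback loop: the "recommender" serves
# whatever has been clicked most, the user usually accepts it, and the
# initial variety of interests collapses into a bubble.
import random
from collections import Counter

random.seed(0)
topics = ["politics", "sports", "science", "music"]
clicks = Counter({topic: 1 for topic in topics})  # equally varied interests at first

for _ in range(100):
    recommended = clicks.most_common(1)[0][0]  # predict from old preferences
    if random.random() < 0.9:                  # the user usually accepts the suggestion
        clicks[recommended] += 1
    else:                                      # rare spontaneous exploration
        clicks[random.choice(topics)] += 1

print(clicks)  # one topic now dominates, e.g. Counter({'politics': ..., ...})
```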
Thirdly, digital technologies shape social relations, not only because they transform how we show ourselves and see others, but also because they define the scope of a community in a new way. Today, our individual social networks neither simply comprise the group of people with whom we actually share our physical space nor the ethnic or national community to which we legally belong. Instead, they are much larger, virtually worldwide networks of people whom we are able to reach in a disembodied and ubiquitous manner. The links between these people can be fluid and still count for much. At the same time, the promise of the world wide web to connect and open the world is gradually turning out to be another illusion, as it is also misused for commercial and political surveillance and limited by digital walls, such as the ones being built around China, Russia, and other nations.

Fourthly, algorithms that affect the formation of a just society are being implemented. Well-known examples of the power of algorithms have been provided by, among others, Yuval Noah Harari (2016), Cathy O'Neil (2016), and Éric Sadin (2015). These demonstrate, for example, how algorithms can be used to determine whether someone gets a job, a bank loan, a good insurance policy, or advanced healthcare. In higher education, algorithms are used in the selection of students. Furthermore, in courts of law, algorithms are used to determine a fitting sanction, for example. Such algorithms can have a huge influence on our lives. As O'Neil has shown, while they are supposed to make evaluation processes more equitable, they can, on the contrary, reinforce racial and social biases rather than eliminate them. What is more, when the choice is made by an algorithm and not by a person, the criteria of choice become opaque, and nobody can be held responsible for a contestable choice. If algorithms determine our standing and chances in society, it is of paramount importance to verify that these algorithms are capable of treating individuals in a fair and just manner, as required in the EU and UN reports quoted above. This may require that instead of entrusting the design of these algorithms to a few specialists, we should open the black boxes of such algorithms and give them over to public debate.

Fifthly, algorithms do indeed run many functions of the public space. Infrastructures such as banking, traffic, and commerce increasingly rely on digital systems. Both in the global North and the global South, more and more states use digital systems in, for example, identity verification (including developing countries such as Kenya, or India with its Aadhaar program), eligibility assessments, the calculation of welfare benefits and payments, and risk scoring, as well as in communication between welfare authorities and beneficiaries. This is useful, since these systems are cheaper and quicker than human workers, and they make fewer mistakes. However, there are also reasons to be concerned about these systems. Firstly, the systems do what they are programmed to do – but what if they are not programmed well enough? Is the best way to transport goods, for example, calculated only with respect to cost and time, or also with respect to CO2 emissions? Secondly, once such powerful systems are put into operation, it is difficult to dismantle them: like automated stock exchanges, they tend to become an element of reality to which people adapt unquestioningly, instead of being an object of public debate in the way that laws are. Not only is the technological infrastructure thus automatized, but so is public space. This is apparent, for example, when news is generated by artificial intelligence, or when constant polling makes not only consumers but also citizens, politicians, and policies adapt to queries, rather than debating them and questioning the basis on which they have been created. Lastly, digital systems such as those described above can obviously be used for questionable purposes: probably the tightest state surveillance system in the world is China's social credit system, which uses facial recognition and big data analysis technology in order to regulate social behavior. When digital technologies, including machine learning processes, become the primary means of governance, it is indispensable to look for ways of controlling them democratically.4
The Structure of the Volume

This volume was preceded by the conference Moral Machines? Ethics and Politics of the Digital World, organized at the Helsinki Collegium for Advanced Studies at the University of Helsinki in March 2019. The conference was an interdisciplinary meeting that turned out to be extremely rich and stimulating, and the editors of this book wish to thank all of its participants once more. Although the interdisciplinary breadth of the conference was needed to bring all these questions to the fore, a good book needs to be more clearly focused, which is why this book concentrates on the analyses of the ethical, moral, and political consequences of digitalization presented in philosophical, literary, and media studies.

After this introduction, the book opens with Eino Santanen's poem "Should a Self-Driving Car," originally presented as part of the artistic
program of the conference. Part I, "Digital Ecologies Today," includes a contribution from one of the most prominent voices in the field, N. Katherine Hayles. Her article is followed by three further Parts, each highlighting a different facet of the main issues at hand, namely: contemporary algorithm-powered media environments and their (largely nonconscious) effects on human users; the delegation of moral decisions to machines and our entanglement with the hidden ethics of digital tools; and the abstract relations between machines and humans, morals and knowledge.

In her article, Hayles contextualizes not only her own work, but also the larger issues tackled by this volume, discussing an ethics that would involve technical actors and take into account the full complexities of human–technical systems. She argues for expanding the concept of species to include digital media as symbionts to humans, as well as for revisioning species as consisting of three categories that redefine the relation of humans to nonhumans and technical systems.

Originally, Hayles's article was to be followed by an article by Bernard Stiegler. Sadly, his untimely death in the summer of 2020 interrupted his work, and also prevented the publication of his article in this volume. Because several articles in this volume refer to Stiegler's original speech, the reader of this volume benefits from knowing that it provided a more somber outlook on modern technology. As in many of his recent works, he interpreted artificial intelligence as a continuation of the process of the depositing-and-deposing of affective, cognitive, and social functions in technical prostheses: he analyzed this process in the technical terms of the "exosomatization of the noesis itself." This is a double process in which, on the one hand, the digital world is more and more our familiar home but, on the other hand, it is traversed by tendencies towards disintegration and entropy. This happens especially when digital technologies, which are in principle extraordinary cognitive extensions, are used to produce "artificial stupidity." Against such tendencies, Stiegler called forth counterforces capable of creating new spaces of care; in other words, he urged us to create "negentropic localities" against the "entropic tendencies" of the contemporary world. His thoughts are further explained and developed in this volume by Anne Alombert (Chapter 12) and Daniel Ross (Chapter 13) in particular. Together with all the authors, the editors of this volume wish to salute the memory of Bernard Stiegler, who was one of the strongest and most original thinkers of the present technological situation, and whose work remains a model of philosophical profundity and social responsibility.

Part II, "The Ethos: Description and Formation," presents analyses of the contemporary media environment by means of literary studies that face the challenge of virality, algorithmic platforms, and digital interfaces. Chapter 2, by Maria Mäkelä, concentrates on the forms of viral storytelling that can be considered part of the general storytelling boom of the 21st century from a narrative-analytical perspective, approaching the mechanisms of social media as distilling universal truths from arbitrary stories of personal experiences. She argues that viral phenomena that are particularly narrative
in nature build on strong moral positioning, thus collectively producing narrative didacticism and necessitating the postulation of an emergent "narrative agency". In Chapter 3, Hanna-Riikka Roine and Laura Piippo continue to problematize the concept of authorship, which has gone hand in hand with an understanding of authoring as the work of distinct agents, failing to acknowledge the ways in which human agency is entangled with more-than-human actors within digital environments. Taking their cue from Hayles's concept of assemblage, they argue for an understanding of platforms as affective environments based on a kind of feedback loop: they are not only affected by our actions but, in turn, shape and guide our agency. Matti Kangaskoski's Chapter 4 examines the logic of cultural digital interfaces and how this logic itself influences literary poetics, with the case studies of Instagram poetry and the criteria of the Man Booker Prize from 2011 to 2018. This logic of selection tends to appear natural, which allows it to extend to the public sphere as well as to the academic and the artistic spheres. Kangaskoski then argues that insofar as the act of selection happens before a conscious will or desire has been formed, it almost unnoticeably takes its place as the affirmation of the algorithmic prediction – with significant implications for our reading. Chapter 5, by Esko Suoranta, then discusses a figure originally coined by Hayles to describe the center of the complexities of cybernetic systems: the schizoid android. Through an analysis of two novels, Dave Eggers's The Circle (2013) and Malka Older's Infomocracy (2016), Suoranta updates the figure into what he calls the schizoid nondroid, a speculative synthesis of humans and technology as well as of the information capitalist systems that profit from the collection and modification of behavioral data.

Part III of the volume, "The Ethos: Entanglement and Delegation," brings together scholars approaching the delegation of moral decisions to machines and our increasing entanglement with digital tools from both philosophical and more practical angles. Philosopher Frédéric Neyrat (Chapter 6) sheds light on what happens when abstractions, "immaterial" operations, are turned into material, concrete operations that machines can take charge of. This happens, for instance, when moral decisions are delegated to self-driving cars. He argues that the two-way exchange between the "virtual" and the "actual" is always incomplete and gives rise not only to the zombies of the digital, resisting the virtualization of the world, but also to the specters of the analog, the potentialities repressed by the actualization of virtual entities. Susanna Lindberg, in Chapter 7, engages with the ongoing development of machines assuming the role of dispensing justice and takes a critical look at the complicated algorithmic systems that have the function of "just machines." With the concrete examples of recruitment algorithms, especially those operating admissions to higher education, she lays out the philosophical grounds for assessing their flaws, which depend not (only) on bad conceptions, but also on the fact that just machines are inevitably also unjust machines – because they are just machines. In Chapter 8, Marc-Antoine Pencolé further focuses on the
delegation of a moral or ethical decision to an automaton. With the illustrative examples of Wikipedia and of diverse peer-to-peer file-exchange communities, he shifts the debate about the morality of “decision-making” machines towards a discussion of the intrinsic or contextual elements that make the different forms of delegation a successful effectuation of collective norms – or a sheer dispossession of our autonomy. In Chapter 9, Anna Longo shows how modern digital technologies have changed economic modeling in far-reaching ways. While classical economics made bets on the rationality of the agents, the predictive algorithms used in automated trading systems undermine the agents’ cognitive capacities and count on their ignorance. By actively increasing uncertainty, they also increase inequality in new ways that call for new political analyses. Hailing from the field of communication studies, Joshua Adams then argues in Chapter 10 that digital tools like search engines can reinforce current and historical inequalities. Through his analysis of Google Search results for the term “Ubuntu,” he shows how the search engine incentivizes a kind of colonial gaze in which prevailing ideas about the democratic potential of the internet blind users to how these tools privilege the values, beliefs, ideologies, and ontologies of the Western world. Finally, Part IV of the volume, “The Ethos: Thinking, Computing, and Ethics,” combines philosophical studies of the relations between machines and morals, machines and humans, computational machines and knowledge, and things and humans. In Chapter 11, François-David Sebbah argues that we should regard technics and ethics as two types of light, in the sense that they are two ways of “making appear”. He suggests that the two robust candidates for describing these two lights are Martin Heidegger’s and Emmanuel Lévinas’s descriptions of technics and ethics as modes of revelation, and, through the complicated relations between these two lights, shows how the question of the relation between machines and morals can be examined on this level of abstraction. Anne Alombert then turns to another relation between the realms of the abstract and the concrete in Chapter 12, questioning the notions of Artificial Intelligence and Technological Singularity in the light of Gilbert Simondon’s and Bernard Stiegler’s refusal of the abstract analogy between humans and machines. She then argues that, while there is no sense in comparing technical, mechanical, or computational operations to human thought, we need to focus on asking how human culture could take care of artificial, automated, and digital milieus so that these technologies can support a new collective intelligence. Daniel Ross, in Chapter 13, shows why so-called artificial intelligence is not at all intelligence in the sense of a noetic soul (following Aristotle). By further developing motifs from Martin Heidegger (1995), Jakob von Uexküll (2010), and especially Bernard Stiegler, he shows why the human – or, more precisely, “non-inhuman” – noetic soul must be distinguished both from the simply living sentient soul and from cybernetic operations that may be autopoietic but do not for that matter constitute a “soul”.
In Chapter 14, the final one of the volume, Lars Botin focuses on the boundaries between things and humans when it comes to thinking, exploring how things think and thoughts become things through the concept of thinging. He shows how things are basic to any kind of thinking and how things of any sort propel thinking and reflection, arguing for a view of thinking and action as morally and politically purposive.
Three Principles towards an Ethical Digital Reality
At the end of this Introduction, we present three principles that summarize the main goals of this book in taking us towards an ethical digital reality.
1 Technology, and digital technology in particular, constitutes our ecological niche in a way that needs to be made visible, evaluable, and perhaps transformable. At the same time, the general requirements of ethical life in this increasingly digitalizing niche must be made visible and evaluated. Technological systems, including machine learning systems (AI), cannot act ethically, but they condition ethical action by creating the environment in which it takes place.
2 In order to rethink ethics in the contemporary digital reality, we must abandon current individualist theories of ethics: the ethical and political stakes in this reality cannot be explained solely in terms of an ethical theory based on an autonomous, conscious, and responsible human subject – or its robotic double. Ethics belongs to beings who can be obliged, responsible, and guilty: it still makes sense to attribute such duties to humans only, but we should see how their ethical action is unconsciously formatted by algorithmic life and algorithmic governmentality, and how they can be tempted to discharge their ethical duties onto algorithmic systems.
3 Access to high technology is not the only ethical problem we face today; the very structure and use of this technology present issues. We must conceptualize new ways of taking care of human beings and the nonhuman environment. We must also find ways to discuss technological structures publicly and democratically, instead of just adapting to them as if they were simply neutral means of politics.
Notes
1 Asimov formulated his laws as early as 1942 in a short story, “Runaround,” that was later included in the collection I, Robot (1950). The laws go as follows: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
2 UNHCR reports that the most vulnerable populations, refugees, greatly profit from mobile phones not only for keeping in touch with their families but more
generally for safety and security (see Kaplan 2018). Another report by UNESCO shows that mobile phones can promote literacy (UNESCO 2014).
3 Contract for the Web (see https://contractfortheweb.org/) is a global plan of action to “make our online world safe and empowering for everyone”. Among its principles are goals for governments (such as “Ensure everyone can connect to the Internet”), for companies (such as “Respect and protect people’s privacy and personal data to build online trust”), and for citizens (such as “Be creators and collaborators on the Web”). Its supporters include foundations such as the World Wide Web Foundation and the Electronic Frontier Foundation, but also big tech companies such as Google and Facebook.
4 See the special issue of Multitudes 2010/1 (n° 40), “Du contrôle à la sousveillance.”
References
Access Now. 2016. “Human Rights Principles for Connectivity and Development.” Accessnow.org, October 2016. https://www.accessnow.org/cms/assets/uploads/2016/10/The-Human-Rights-Principles-for-Connectivity-and-Development.pdf.
Adorno, Theodor, and Max Horkheimer. 2002. Dialectic of Enlightenment. Stanford, CA: Stanford University Press.
Alston, Philip. 2019. “Report of the Special Rapporteur on Extreme Poverty and Human Rights.” United Nations General Assembly, distr. 11 October 2019 (A/74/493). https://undocs.org/a/74/493.
Beever, Jonathan, Rudy McDaniel, and Nancy A. Stanlick. 2019. Understanding Digital Ethics: Cases and Contexts. London: Routledge.
Cave, Stephen, and Kanta Dihal. 2019. “Hopes and Fears for Intelligent Machines in Fiction and Reality.” Nature Machine Intelligence 1: 74–78. https://doi.org/10.1038/s42256-019-0020-9.
Downes, Cathy. 2018. “Strategic Blind-Spots on Cyber Threats, Vectors and Campaigns.” The Cyber Defense Review 3, no. 1 (spring): 79–104.
Ellul, Jacques. 1964. The Technological Society, trans. John Wilkinson. New York: Knopf.
Emerson, Lori. 2014. Reading Writing Interfaces: From the Digital to the Bookbound. Minneapolis: University of Minnesota Press.
Feenberg, Andrew. 1999. Questioning Technology. London and New York: Routledge.
Finn, Ed. 2017. What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press.
Fisher, Max, and Amanda Taub. 2019. “How YouTube Radicalized Brazil.” New York Times, August 11, 2019. https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html.
Frissen, Valerie, Sybille Lammes, Michiel de Lange, Jos de Mul, and Joost Raessens. 2015. “Homo Ludens 2.0: Play, Media, and Identity.” In Playful Identities: The Ludification of Digital Media Culture, edited by Valerie Frissen et al., 9–50. Amsterdam: Amsterdam University Press.
Galloway, Alexander, and Eugene Thacker. 2007. The Exploit: A Theory of Networks. Minneapolis: University of Minnesota Press.
Georgakopoulou, Alexandra. 2017. “Narrative/Life of the Moment: From Telling a Story to Taking a Narrative Stance.” In Life and Narrative: The Risks and Responsibilities of Storying Experience, edited by Brian Schiff, A. Elizabeth McKim, and Sylvie Patron, 29–54. Oxford: Oxford University Press.
Guattari, Félix. 2000. The Three Ecologies, trans. Ian Pindar and Paul Sutton. London and New Brunswick, NJ: Athlone Press.
Hansen, Mark B.N. 2015. Feed-Forward: On the Future of Twenty-First Century Media. Chicago: University of Chicago Press.
Harari, Yuval Noah. 2016. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker.
Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: University of Chicago Press.
Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Heidegger, Martin. 1977. The Question Concerning Technology and Other Essays, trans. William Lovitt. New York and London: Garland Publishing.
Heidegger, Martin. 1995. The Fundamental Concepts of Metaphysics: World, Finitude, Solitude, trans. William McNeill and Nicholas Walker. Bloomington and Indianapolis: Indiana University Press.
Hui, Yuk. 2016. On the Existence of Digital Objects. Minneapolis and London: University of Minnesota Press.
Hui, Yuk. 2019. Recursivity and Contingency. London: Rowman and Littlefield.
Hörl, Erich. 2017. “Introduction to General Ecology: The Ecologization of Thinking,” trans. Nils F. Schott. In General Ecology: The New Ecological Paradigm, edited by Erich Hörl and James Burton, 1–74. London: Bloomsbury.
Kaplan, Ivy. 2018. “How Smartphones and Social Media Have Revolutionized Refugee Migration.” UNHCR Blogs, 26 October 2018. https://www.unhcr.org/blogs/smartphones-revolutionized-refugee-migration/.
Kurki, Visa. 2019. A Theory of Legal Personhood. Oxford: Oxford University Press.
Kurzweil, Raymond. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.
Latour, Bruno. 2002. “Morality and Technology: The End of the Means.” Theory, Culture and Society 19, no. 5–6: 247–260.
Lehtiniemi, Tuukka, and Minna Ruckenstein. 2019. “The Social Imaginaries of Data Activism.” Big Data & Society 6, no. 1: 1–12. https://doi.org/10.1177/2053951718821146.
Lin, Patrick, Keith Abney, and Ryan Jenkins. 2017. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford: Oxford University Press.
Lindberg, Susanna. 2020. Techniques en philosophie. Paris: Hermann.
Madiega, Tambiama. 2019. EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation. Briefing for European Parliamentary Research Service, PE 640.163, September 2019. https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI%282019%29640163_EN.pdf.
Manovich, Lev. 2013. Software Takes Command: Extending the Language of New Media. New York and London: Bloomsbury.
Meadows, Donella H. 2008. Thinking in Systems: A Primer, ed. Diana Wright. London: Earthscan.
Morton, Timothy. 2013. Hyperobjects: Philosophy and Ecology after the End of the World. Minneapolis and London: University of Minnesota Press.
Murray, Janet. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press.
Nancy, Jean-Luc. 2015. After Fukushima: The Equivalence of Catastrophes, trans. Charlotte Mandell. New York: Fordham University Press.
Neyrat, Frédéric. 2008. Biopolitique des catastrophes. Paris: Éditions Dehors.
Neyrat, Frédéric. 2016. La part inconstructible de la terre: Critique du géo-constructivisme. Paris: Seuil.
O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown Books.
Page, Ruth. 2018. Narratives Online: Shared Stories in Social Media. Cambridge: Cambridge University Press.
Poikola, Antti, Kai Kuikkaniemi, and Harri Honko. 2015. “MyData – A Nordic Model for Human-Centered Personal Data Management and Processing.” Ministry of Transport and Communications, Finland, 2015. http://urn.fi/URN:ISBN:978-952-243-455-5.
Ruckenstein, Minna, and Linda Lisa Maria Turunen. 2019. “Re-humanizing the Platform: Content Moderators and the Logic of Care.” New Media & Society: 1026–1042. https://doi.org/10.1177/1461444819875990.
Sadin, Éric. 2015. La Vie algorithmique: Critique de la raison numérique. Paris: L’Échappée.
Sandler, Ronald L. (ed.). 2014. Ethics and Emerging Technologies. London: Palgrave Macmillan.
Snell, Karoliina. 2019. “Health as the Moral Principle of Post-Genomic Society: Data-Driven Arguments against Privacy and Autonomy.” CQ: Cambridge Quarterly of Healthcare Ethics 28, no. 2: 201–214. https://doi.org/10.1017/s0963180119000057.
Stengers, Isabelle. 2000. The Invention of Modern Science, trans. D.W. Smith. Minneapolis: University of Minnesota Press.
Stiegler, Bernard. 1998. Technics and Time, 1: The Fault of Epimetheus, trans. George Collins and Richard Beardsworth. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2008. Technics and Time, 2: Disorientation, trans. Stephen Barker. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2010a. Technics and Time, 3: Cinematic Time and the Question of Malaise, trans. Stephen Barker. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2010b. Taking Care of Youth and the Generations, trans. Stephen Barker. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2011. The Decadence of Industrial Societies: Disbelief and Discredit, Volume 1, trans. Daniel Ross. Cambridge: Polity Press.
Stiegler, Bernard. 2012. Uncontrollable Societies of Disaffected Individuals: Disbelief and Discredit, Volume 2, trans. Daniel Ross. Cambridge: Polity Press.
Stiegler, Bernard. 2014a. The Lost Spirit of Capitalism: Disbelief and Discredit, Volume 3, trans. Daniel Ross. Cambridge: Polity Press.
Stiegler, Bernard. 2014b. Symbolic Misery, Book 1: The Hyperindustrial Epoch. Cambridge: Polity Press.
Stiegler, Bernard. 2015. Symbolic Misery, Book 2: The Catastrophe of the Sensible. Cambridge: Polity Press.
Stiegler, Bernard. 2016. Automatic Society, Volume 1: The Future of Work. Cambridge: Polity Press.
Tomlinson, Joe. 2019. Justice in the Digital State: Assessing the Next Revolution in Administrative Justice. Bristol and Chicago: Policy Press.
UNESCO. 2014. “UNESCO Study Shows Effectiveness of Mobile Phones in Promoting Reading and Literacy in Developing Countries.” Unesco.org, 23 April 2014. https://en.unesco.org/news/unesco-study-shows-effectiveness-mobile-phones-promoting-reading-and-literacy-developing-0.
von Uexküll, Jakob. 2010. A Foray into the Worlds of Animals and Humans, with A Theory of Meaning, trans. Joseph D. O’Neil. Minneapolis: University of Minnesota Press.
Verbeek, Peter-Paul. 2011. Moralizing Technology: Understanding and Designing the Morality of Things. Cambridge: Cambridge University Press.
Villani, Cédric. 2018. For a Meaningful Artificial Intelligence: Towards a French and European Strategy. A mission of the French Parliament, 8 September 2017 to 8 March 2018. https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf.
Wardrip-Fruin, Noah. 2009. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, MA, and London: MIT Press.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.
Should a Self-driving Car
Eino Santanen
Translated by Kasper Salonen
Should a self-driving car, having suddenly lost its brakes, continue driving straight and run over a law-flouting man, killing him, or change lanes and run over a law-abiding woman so that she dies. Ought a self-driving car, having suddenly lost its brakes, continue driving straight and run over three law-abiding men, one woman, and one female executive, killing them, or change lanes and run over two law-flouting male athletes, a girl, a boy, and a male executive so that they die.
Should a self-driving car, having abruptly lost its brakes, change lanes and drive into a concrete barrier so that its passengers, a small woman, two men, a boy, and a large man, die, or should it continue to drive straight and run over five law-abiding dogs and one cat, killing them. Would it be better for a self-driving car, having abruptly lost its brakes, to change lanes and run over a large law-abiding woman, a female executive, and a male athlete, killing them, or should it continue driving straight so that a large white bird dies just as it is taking wing.
Should a self-driving car, having abruptly lost its brakes, continue driving straight into an oncoming self-driving car passing an oncoming self-driving car so that the passengers of the colliding cars, a boy, a small male executive, a dog, two cats, and a small female athlete, die, or should it change lanes and drive into the self-driving car that was passed by a self-driving car so that it kills three cats, a girl, a dog, and a small male athlete in addition to its own passenger, a small female athlete. Should a self-driving car, having abruptly lost its brakes, change lanes and kill a large, selective, trash-devouring white bird that is devouring trash thrown onto the road from self-driving cars, or should it continue driving straight and kill a man who has stepped out of another self-driving car and is currently taking a selfie in front of a large white bird.
Part I
Digital Ecologies Today
1 Three Species Challenges
Toward a General Ecology of Cognitive Assemblages
N. Katherine Hayles
As many are beginning to realize, Planet Earth is in trouble.1 The STEM disciplines, supported by funding agencies, are organizing to identify and address a series of Grand Challenges, among them Global Climate, Hunger and Thirst, Pollution, Energy, and Health.2 The challenges are “global” not only in their reach and scope but also because their effects cannot be contained by geographical boundaries. If someone contracts bird flu in Beijing, chances are it will show up in Paris and New York; if Californians dump plastic into the ocean, it washes up on the shores of Japan and Easter Island. Moreover, the challenges involve cultural, sociopolitical, and ethical issues as well as scientific and technical problems, so the STEM disciplines alone will not be sufficient to solve them; input from the humanities and qualitative social sciences will be necessary as well. Effective action on these global issues requires large-scale consensus among different regions, nationalities, and ethnicities – yet the mechanisms to achieve such consensus are woefully lacking. Only a few come to mind, such as the Paris Climate Accords, the prohibitions against nuclear, chemical, and biological weapons, and constraints on altering the human genome. As humans, we desperately need a sense of solidarity and shared purpose that can help create these global mechanisms. Even to write such a sentence, however, risks bringing howls of protest from humanists and social scientists, because of the historical baggage of false universalisms that have been so effectively deconstructed over the past several decades. Hence the challenge this chapter addresses: is it possible to arrive at conceptual foundations for human solidarities that do not reinscribe oppressive ideologies and discriminatory practices? I will propose three such foundational concepts: species-in-common, species-in-biosymbiosis, and species-in-cybersymbiosis.
The Challenge of Species-in-Common
Immediately problems arise with the concept of species, because biologists have been unable to arrive at a rigorous definition of what constitutes one. All of the proposed criteria – morphology, reproductive success, genetics – have fallen short in some aspect or another. Consider, for example, the
widely used criteria that individuals count as the same species if they can mate and have fertile offspring (this leaves out mating between donkeys and horses, whose mule offspring are sterile). The problem here can be illustrated with ring species. Consider squirrels, for example: individuals in adjacent geographical regions can mate and have fertile offspring (New Yorkers with Pennsylvanians, Pennsylvanians with Missourians, Missourians with Coloradians, Coloradians with Californians). Insert enough geographical distance, however – say, matching Californians with New Yorkers – and mating is not successful. Problems like these notwithstanding, most biologists nevertheless share a general understanding of species and find it indispensable for their work. For the humanities, a more serious issue is speciesism, the ideology that humans are morally more important than other species and therefore entitled to exploit or dominate them. A founding document is the 1970 privately printed pamphlet Speciesism by Richard D. Ryder.3 Arguing against animal experimentation, it equated speciesism with racism: just as speciesism considers humans morally superior to other animals, so racism judges one ethnic group morally superior to others. Contemporary commentators on speciesism include Timothy Morton, who recently argued that speciesism is more fundamental than racism and that anyone who is a speciesist must perforce be a racist as well. Given that racism is one of the most virulent charges one humanist can level against another, such arguments virtually guarantee that if someone asks, “Who here is a speciesist?” there would be a thunderous absence of response. Why, then, do most biologists continue to find species a necessary concept, even with all of its problems? The answer seems obvious: different species have distinctively different capabilities and potentialities. The human species notably differs from others in its ability to predict the future and form intentional plans to address anticipated problems. Which brings us back to the Grand Challenges: only humans could have conceived of these as global concepts, and only humans can devise technological, cultural, and ethical solutions to them. Here it may be useful to invoke a term used by Donna Haraway (2016): human response-ability. Humans respond through an empathic bond with other humans and nonhumans, and because of our abilities to conceptualize and anticipate the future, we bear a special responsibility for working toward ensuring the welfare of others and the planet in general. That we so far have failed miserably in meeting this challenge does not negate our potential to do so. Indeed, writers such as Haraway, Bruno Latour (2018), Brian Holmes (2018) and many others are now urging us to embrace our response-abilities. For this kind of Grand Challenge, a reconceptualized notion of species may be helpful – not one that implies speciesism with its imperialistic heritages of exploitation and racism, but rather what I call species-in-common, a notion of human solidarities and purposes that can work to mitigate the damages we have so far wreaked upon our common and only home, the Earth.
So reconceptualized, species-in-common can serve as a bulwark against racism rather than a facilitator of it. For virtually all of human history, people have believed that their own group is fully human, while those in the next valley are somehow less or other than human. Indeed, when genocide raises its horrible head, one of the first (predictable) rhetorical moves is to equate the despised others with rats, vermin, cockroaches rather than with the human species (another indication of the close historical tie between racism and speciesism). Richard Rorty put the matter into useful perspective: Most people live in a world in which it would be just too risky – indeed, would often be insanely dangerous – to let one’s sense of moral community stretch beyond one’s family, clan, or tribe. Such people are morally offended by the suggestion that they should treat someone who is not kin as if he were a brother, or a nigger as if he were white, or a queer as if he was normal, or an infidel as if she were a believer. They are offended by the suggestion that they should treat people whom they do not think of as human as if they were human. (Rorty 1998, 125) He cautions that saying these benighted others should simply become more rational will not solve the problem (indeed this way of thinking is part of the problem). The necessary prerequisites, he suggests, are security (“conditions of life sufficiently risk-free to make one’s difference from others inessential to one’s self-respect, one’s sense of worth”) and what he calls “sympathy,” here denoted as empathy (ibid., 128). This pragmatic approach makes clear the relevance of the Grand Challenges, particularly Global Hunger and Thirst and Global Security, to the species-in-common concept. Solutions to each of these challenges reinforce and depend on the others. Species-in-common, with its focus on human solidarity, insists that every individual of the human species counts as human, but such a potently anti-racist vision can be effective only if everyday life for the world’s peoples includes enough of the necessities to ensure some measure of relief from danger, famine, drought and other catastrophic urgencies. In its reconceptualized form, species-in-common articulates a vision that has taken literally thousands of years of human history to achieve. Still in its infancy throughout most of the world, it calls for us to take response-ability for working toward the global conditions that will enable us to see the people in the next valley, living, feeling, cognizing people, as human like us. Moreover, the concept of species-in-common offers new clarity for media theory as well. This aspect is implicit in the move that John Durham Peters (2016) makes when he upends media theory by proposing that elemental processes such as clouds and ocean currents function as media interfaces through which communications are processed. One may be tempted to object that this stretches the concept of “media” so far as to render it
meaningless as an analytical category, since in this view almost everything can count as media. Here it may be useful to return to John Guillory’s exploration of the genesis of the media concept (2010), where he argues that almost from the beginning, media has implied both “mediation” and “communication through a distance.” If we accept these as the two essential components of media as a concept (which Peters suggests we do), then there is no reason why mediation has to involve technical apparatuses. The result has been an explosion of media theory in a number of new directions, including Melody Jue’s Wild Blue Media (2020), exploring the ocean as a medium complete with databases and communication circuits. An investigation of coral reefs, with their long histories of sedimentation and interlocking life forms, in this view could count as media archeology, which typically involves such archaic technical media as stereoscopes, magic lanterns, and cycloramas. To evaluate what is gained (and lost) by this paradigm shift, we can compare Peters’s “elemental” scheme with Claude Shannon’s famous diagram of the communication situation (Shannon and Weaver, 1949; diagram available at http://people.seas.harvard.edu/~jones/cscie129/nu_lectures/lecture1/Shannon_Diagram_files/Shannon_Diagram.html). Recall that Shannon’s diagram begins with a sender, who composes a message that is then encoded into a signal and sent through a circuit to a decoding apparatus, which reconstitutes the message and conveys it to the receiver. Intervening in this process is noise, always threatening to degrade the signal and compromise the message’s legibility. (A toy version of this pipeline, rendered in code, appears at the end of this section.) Shannon made a point of emphasizing that the sender and receiver need not be humans; they could be technical apparatuses instead. In either case, however, implicit in the diagram is the idea that both the sender and receiver have sufficient cognitive capabilities to perform the actions required of them. In Peters’s “elemental” model, the signal to be communicated over distance need not originate with a cognitive entity; the movement of clouds, which he argues communicates information to humans and nonhumans (birds, animals, and plants, for example), is a material process that does not require cognition to function. However, I would argue that there must be a cognizer at the end of the process for the two necessary components of mediation and communication over distance to function. Otherwise there are only material processes, distinguished as I argued in Unthought (2017) from cognition because there is no choice and no interpretation, only chemico-material events that are the resultant of the forces acting upon and through them. Communication, unlike material processes, always requires interpretation and choice – choice in determining which phenomena will be considered as media, for instance, and interpretation in the decoding and reception of the message. This leads immediately to one of Peters’s finest insights: “media are species and habitat-specific and are defined by the beings they are for” (2016, 56). Of course! Only our anthropocentric biases can account for why the
field called “media theory” remains almost exclusively about human communication, while communication within and between other species is relegated to the relatively marginalized field of biosemiotics. With a multitude of examples, Peters gives a vivid sense of what media mean, for example, to whales and dolphins, including seawater sonic waves and ocean currents. As he argues, once the species-specificity of media is explicitly recognized, many new kinds of inquiries are opened, different vocabularies become possible, and novel theoretical frameworks can be developed. As an example, suppose that I am sitting on the couch with Monty (my dog), watching a rerun of the classic Lassie TV series. I see Lassie coming to the rescue, defeating bad men, helping the good. What does Monty see? He notices flashing lights and, when Lassie barks, momentarily looks at the screen, but he quickly loses interest because the images are contained in a box and have no smell, so he knows they are not real. Compare that with a trip to the dog park, where Monty comes across fresh urine. Smelling it, he notices the specificity of its chemicals and associates them with the handsome poodle that has just left the area. The urine smell-signature matches up with other smells coming from her anus, which he trots over to sniff. These are media for him because they communicate information and messages, although not for me. Similarly, dissolved chemicals in water function as media for redwood trees, salt-tinged air for seagulls, blood in water for sharks. What can count as media is therefore tied to the specificities of the sensory apparatus of the receiving cognizer, whether human, nonhuman, or computational, and is constituted within and through the environments of the cognizing species. It is not enough, then, to insist on media-specific analysis, for which I have been arguing for some time to encourage literary critics in particular to attend to the specific forms in which texts are instantiated; we must also attend to the specificities of the species that engage in communicative acts. This is another reason why the concept of species remains an essential analytical tool, despite its problems and historical baggage. Without it, we could scarcely formulate the mediations and communications through distance that comprise what Hoffmeyer calls the semiosphere, the totality of signs and messages passing between and within living organisms. Taking species into account has the additional advantage of restoring some of the specificity that opening media to elemental communications had dissolved. Even though virtually anything can be seen as originating a communication, such signals must be received and interpreted by cognizers to count as acts of communication, and the meanings extrapolated from the signals are specific to the sensory and neuronal capacities of the species that receives them. In addition to connecting species to their environments, these communicative acts help to construct and expand the deep interdependence of living organisms.4
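Shannon’s pipeline is simple enough to render as a toy program, and doing so makes the point about cognizers concrete. The sketch below is purely illustrative: the eight-bit text encoding, the bit-flip noise rate, and the interpreter’s one-line rule are all invented assumptions for the example, not drawn from Shannon. The transmitter, channel, and decoder are mere material transformations; only the final interpreting function plays the role of the cognizer that turns a signal into a meaning.

    import random

    def encode(message: str) -> list:
        # Shannon's transmitter: turn a text message into a signal (a list of bits).
        return [int(b) for ch in message for b in format(ord(ch), "08b")]

    def noisy_channel(signal: list, flip_prob: float = 0.01) -> list:
        # The channel: noise flips each bit with probability flip_prob.
        return [bit ^ 1 if random.random() < flip_prob else bit for bit in signal]

    def decode(signal: list) -> str:
        # The receiving apparatus: reconstitute a (possibly corrupted) message.
        return "".join(
            chr(int("".join(str(b) for b in signal[i:i + 8]), 2))
            for i in range(0, len(signal), 8)
        )

    def interpret(message: str) -> str:
        # The cognizer: only at this step does the signal become a meaning.
        # This rule is an invented stand-in for context-bound interpretation.
        return "take shelter" if "flood" in message else "carry on"

    sent = "flood warning for the valley"
    received = decode(noisy_channel(encode(sent)))
    print(received, "->", interpret(received))

Run repeatedly, the noise occasionally corrupts exactly the word on which interpretation hinges, which is Shannon’s worry about legibility; and without the last function there is nothing in the pipeline but material state changes. On the argument above, the mediation is complete only when a species-specific cognizer receives it.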
The Challenge of Species-in-Biosymbiosis
Species entangle and interpenetrate. In 1967, when Lynn Margulis finally had her revolutionary paper published, she upended current biological dogma by arguing that mitochondria had descended from bacteria, and chloroplasts from cyanobacteria; these once freely living organisms became symbionts of eukaryotic cells in a process of endosymbiosis. Indeed, she went on to argue that endosymbiosis, rather than natural selection, is the primary driver of evolution: “Natural selection eliminates and maybe maintains, but it doesn’t create,” she argued in an interview (Teresi 2011). Humans have likewise acquired symbionts in our evolutionary history, for example, the gut bacteria essential for the proper digestion of food. Recently Donna Haraway has extended this work to global scope through the concept of sympoiesis. “Sympoiesis,” she writes, “is a word proper to complex, dynamic, responsive, situated, historical systems. It is a word for worlding-with, in company. Sympoiesis enfolds autopoiesis and generatively unfurls and extends it” (2016, 58). “What happens,” she asks, when the best biologists of the twenty-first century cannot do their job with bounded individuals plus contexts, when organisms plus environments, or genes plus whatever they need, no longer sustain the overflowing richness of biological knowledges, if they ever did? What happens when organisms plus environment can hardly be remembered for the same reasons that even Western-indebted people can no longer figure themselves as individuals and societies of individuals in human-only histories. (Haraway 2016, 31) She continues, “poiesis is symchthonic, sympoietic, always partnered all the way down, with no starting and subsequently interacting ‘units’” (ibid., 33). In this view, organisms do not precede the relations they enter into with one another but reciprocally produce one another from lifeforms that for their part have already emerged from earlier involutions. This is a compelling vision, which has been extended by Jason Moore into the socioeconomic realm (2015, 2014). He urges a shift of perspective in which we turn from considering “the environment as object to environment as action. All life makes environments; all environments make life” (Moore 2014, 12). His focus is specifically on capitalism, which he argues has radically exploited and reconfigured environments to extract profits, a process that continues into the 21st century with the transformation of human behaviors into dataflows that can be commoditized. Nevertheless, there are counter-narratives to these strong arguments. Countering endosymbiosis, for example, is the continuing tendency toward speciation, in which species occupy new niches or otherwise become isolated from one another and, over time, develop into new species distinct from
their ancestors. The existence of Homo sapiens is testimony to the power of speciation to effect tremendous changes, given an evolutionary timespan. And temporality here is key. Given enough time, glass flows, mountains erode, continents drift apart. But seen from the measure of a human lifespan, windows abide comfortably in their frames, Mount Everest remains more or less the same for generations of aspiring climbers, and African shores measure the same distances from South America. Similarly, organisms carry through time their inheritances of DNA, the great conservation mechanism, so the extent to which they can be shaped by their present environments is tempered by the inertia of all the past environments they have inhabited. Even if DNA itself is constantly in motion over the generations, for example through horizontal gene transfer among bacteria that facilitates resistance to antibiotics (Gyles and Boerlin 2014), these events also take place along temporal scales that both moderate and enable the potentiality of all living things to change. In similar fashion, a counter-narrative to endosymbiosis is the fact that all life forms depend on membranes (skin, surfaces, cell walls) that at once distinguish them from and connect them to their milieux. That these surfaces change over time, admitting what was previously exterior into the interior, is an observation also subject to perspectives that can range from the (relatively) stable to the (relatively) porous, depending on the temporal scale of the chosen viewpoint. In the same way, whether the lifeform’s membrane connects or divides is a matter of whether one emphasizes its protective function or its activity as a surface across which multiple kinds of communications occur. Jesper Hoffmeyer makes this point in distinguishing between endosemiotics, “semiotic processes taking place inside the organism,” and exosemiosis, “biosemiotic processes going between organisms, both within and between species.” The distinction, he warns, “should not be taken to signify any privileged role in biosemiotics for either side of the interface, or boundary. In fact, semiotics is in principle always connected with some kind of inside–outside interaction” (2008, 6). A surprising counter-narrative to endosymbiosis is chimerism, as Margrit Shildrick notes (2019). Whereas a mule is a hybrid, with DNA consistent throughout its body, the geep (a rare offspring of a sheep and goat) is a chimera, with discrete areas of sheep and goat DNA mixed together within its body. If endosymbiosis is like one bubble encased within a larger one, chimerism is two bubbles sitting side by side, both encased by another membrane. There have been documented cases of human chimeras, as in the case of Lydia Fairchild, whose DNA did not match that of her biological offspring. Shildrick explains: the most likely explanation is that the woman was the result of a dizygotic twin conception that had disappeared from knowledge when her embryonic self had absorbed the other twin in utero. The resulting
singleton carried both her own original DNA and that of the non-identical twin, thus creating a chimera. (2019, 14)
The extent to which (micro)chimerism is prevalent among microorganisms is still being investigated, but it is thought to play an important role in whether a transplanted organ is accepted or rejected. Granted that chimerism seems to be a much rarer phenomenon than endosymbiosis, it nevertheless illustrates the dazzling complexity of trying to form generalizations about biological processes. These complexities notwithstanding, distinctions nevertheless persist and play useful analytical roles. In the view argued here, species-in-common and species-in-biosymbiosis co-constitute each other, each delineating the contours of the other, as in the famous yin/yang symbol. Like the white dot nestled in a field of black and a black dot in white, the symbol hints that a push too far in either direction will set an opposing tendency into action. The clear implication is that extremes risk distorting the world’s complexities. All flow and no structure is as distorting as its apparent opposite, all structure and no flow. Such oppositions can be found in many forms: only symbiogenesis and no speciation or only speciation and no symbiogenesis, only environment-organism becomings and no individuals or only individuals and no environment-organism becomings – each extreme diminishes our ability to understand the world’s complexities and construct useful frameworks to account for evidence. In summary, we need both species-in-common and species-in-biosymbiosis (and one more too) to meet the challenges of our times and create new openings for speculative thought. Species interpenetrate each other’s domain both physically in processes such as endosymbiosis and semiotically as their life patterns entangle through exchanging signs and contextualized meanings with each other. Species-in-biosymbiosis, connoting both physical and semiotic interdependence, works together with species-in-common to create a nuanced sense of how “species” can indicate both a specific kind of entity and a web of entangled reciprocities between species. Together, the two concepts open possibilities for a revitalized and expanded media theory that builds on the essential insight that media are species specific.
The Challenge of Species-in-Cybersymbiosis
Until now, I have been considering only living organisms (in their manifestation as species-in-common and species-in-biosymbiosis) as the interpreters who act upon mediated communications. Clearly, however, technical devices can also perform these functions. Humans are in the process of entering into deep symbiosis with computational media. Over the last 50 years, virtually every complex technological system has either incorporated, or been controlled by, computational media, including transportation networks, energy
grids, food production chains, and so on. Short of complete environmental collapse or apocalypse, everything points to this trend continuing and intensifying in the new millennium. Recent efforts by Erich Hörl to analyze the effects of this transformation under the rubric of “general ecology” provide a useful starting point for this discussion. Deeply versed in media theory as well as philosophical traditions, including phenomenology, Foucauldian genealogy, deconstruction, and Deleuze and Guattari’s rhizomatics, Hörl provides a comprehensive synthesis of what he calls the “absolute prioritization of mediation” (2013, 124), including ubiquitous computing, intelligent environments, data analytics, and the microtemporal regimes of computational systems. Foucault’s governmentality, he suggests, has now morphed into “environmentality,” a term that implies the control and governance of a population through mediated conduits of power (2018, 154). The new regime, however, has distinctive aspects only just coming into existence in the 1970s when Foucault first introduced the term. Chief among these differences is the ability of contemporary media to access human cognition under the temporal horizon of consciousness, an effect that Luciana Parisi and Steve Goodman have termed “mnemonic control” (2011). The effect has also prompted Mark Hansen to designate 21st-century media as “atmospheric” (2015), implying not only their ubiquitous presence through cell phones, social media sites, internet searches, and so on, but also their inescapability, their permeation into virtually every aspect of human sensations and experiences. Reverberating through Hörl’s rhetoric is what I might call, appropriating a phrase from Brian Massumi, a shock to thought, especially to thought as it is understood in the phenomenological tradition. The phenomenological emphasis on consciousness, intentionality, and temporality is subverted with atmospheric media, which short-circuit reflective thought by addressing human cognition through Libet’s “missing half-second” (1985), the interval between the onset of sensory sensations and conscious awareness of them. Hansen makes this implication explicit. Noting that consciousness in the context of mediated microtemporal regimes takes on the “more humble role as modulator of a sensory presencing that takes place outside its purview” (2015, 24), he goes on to point out that this development “sounds the final death knell for phenomenology’s animating dream, the dream of a consciousness capable of experiencing its own presencing in the very moment of that presencing – consciousness’s dream of living in the ‘now’ of its happening” (ibid., 25). I drew similar conclusions about the belated role of consciousness in Unthought (2017), working, however, through neuroscientific research rather than phenomenology. For Hörl, and to some extent for Hansen, these developments make the reconceptualization of human subjectivity an urgent task, an implication that I have sought to address as well. Hörl’s approach is to posit a “general ecology” based solely on human–computational interactions, which leaves nonhuman and noncomputational entities out of the picture. There is a
certain irony in calling this approach “environmental,” precisely because it pays no attention to what used to be called “the environment,” meaning the natural world of bacteria, plants, insects, birds, animals, and fungi that go about their business with only minimal and occasional interactions with humans or computers. Indeed, he critiques this kind of environmentalism for its “reaction to the machine” and its invocation of “the undamaged and unscathed, the unspoiled, intact and immune, the whole and holy” (2013, 128). I think this kind of critique is justified, for several reasons. By idolizing “wilderness,” for example, this view of unspoiled nature makes it harder for us to see that weeds in a vacant urban lot are also nature; by focusing on the unspoiled, it deflects attention from issues of environmental justice that arise when we send our most toxic contaminants to communities too impoverished and powerless to object; and so on.5 Hörl proclaims that the “general ecology” at which he aims is “an unnatural, non-natural, and, one might say, subtractive ecology; an ecology that eliminates the immunopolitics of ecology” (ibid., 128). He continues, “It is an ecology of a natural-technical continuum, which the general environmentalization through technology and the techno-sciences and the concomitant explosion of agency, schematizes as the core of our current and, even more, of our future basic experience” (ibid., 128). He defends his use of “ecology” by insisting that it is not merely a metaphor adopted from the environmental movement and therefore “bound to strictly biological, ethological, or life-scientific references” (2013, 126). On the contrary, he asserts, it is more likely the case that the traditional concept or discourse of ecology causes a breakthrough and imparts a principle form to the conceptual constellation, which as a consequence in the course of techno-medial development, ascends to the level of a critical intuition and model for the description of the new fundamental position. (Hörl 2013, 126) This principle, of course, is relationality. “Being is relation,” he states (ibid., 122). For Hörl, crucial aspects of the new relation of humans to the computational media that surround and interpenetrate human complex systems are the reconfigured subjectivities that result when agency is distributed throughout the system. He writes that such technological systems are “currently driving the ecologization of sensation, with the additional consequence, however, of ecologizing cognition, thought, desire, and libido, as well as power and governmentality” (2013, 127). Citing Guattari, he refers to this as “non-subjective subjectivity that is distributed in a multiplicity of relations” (ibid., 129). This is a vision similar to my own framework of cognitive assemblages, with a crucially important difference: as I envision
them, cognitive assemblages emphatically do not exclude nonhuman lifeforms. In Unthought, I locate human cognition along a continuum with both the cognitive capabilities of nonhuman life and the artificial cognition of computational media. All of these entities perform cognitive acts consisting of interpreting information in contexts that connect it with meaning. Although they may perform as individuals, more frequently they function as cognitive assemblages, collectivities through which information, interpretations, and meanings circulate. A farm, for example, would count as a cognitive assemblage. It likely involves computational components, for example in the tractor and other automated equipment that the farmer uses and in the computer he powers on to access current market prices for his crops. But it also includes all the lifeforms necessary for the farm to produce its harvests, from the bacteria in the soil to the plants in the fields to the livestock those plants and bacteria help to feed. From microbes to the farmer and his cell phone, all count as cognizers interpreting information and engaging in meaning-making practices specific to their capacities and milieux. By appropriating the term “general ecology” for interactions that do not include 95 percent of the world’s biota, Hörl risks exacerbating trends already too prevalent in our anthropocentric cultures. His approach makes it more difficult to see how and why humans should take response-ability for the welfare of other species on the planet. However, it also has real strengths in its large scope of reference, synthesis of diverse material, and articulation of how computational media are impacting the very idea of human subjectivity. These are contributions that should be celebrated. Moreover, the very concept of a “general ecology” is a fine insight that, with due credit to Hörl, I would like to develop along lines not limited by his exclusion of the lifeworld of more-than-human cognizers and cognition.
A General Ecology of Cognitive Assemblages
To illustrate the usefulness of a cognitive assemblage framework, I will consider three topics that Hörl also analyzes, but now with an emphasis on an integrated approach to cognition: 1) the ability of computational media to address humans in the microtemporal regime, underneath the temporal horizon of consciousness; 2) the distributed agency that human enmeshment in cognitive assemblages implies; and 3) the prevalence of machine-machine communication over human–machine and human–human communication. From a cognitive assemblage perspective, the idea that consciousness is not the whole of cognition is a fundamental premise. In Unthought (2017), I presented a timeline showing that nonconscious cognition starts significantly before conscious awareness, on the order of 200 milliseconds as opposed to 500 milliseconds for consciousness. As I noted there, nonconscious cognition is a level of neuronal processing inaccessible to conscious introspection and yet responsible for performing tasks essential for consciousness to operate,
including constructing a coherent body image, processing information for patterns too complex and “noisy” for consciousness to process, and forwarding or suppressing the results to consciousness depending on the context. Here is a major point from Unthought: nonconscious cognition can suggest that consciousness pay attention, but it cannot by itself initiate intentional action. Consciousness is always able to ignore such nudges if it considers other information to be more important at the moment. Once consciousness decides to pay attention and sends down reinforcement in the form of activation waves, “ignition of the global workspace” takes place, as Stanislas Dehaene puts it, and then consciousness can continue to meditate on a given thought indefinitely. Nevertheless, the suppression function of nonconscious cognition is also crucial; much of the work it does in creating a coherent body representation, for instance, never enters consciousness at all. Indeed consciousness, with its slower processing speed and limited information capacity, could not function without the pre-processing done by nonconscious cognition; it would simply be overwhelmed. It depends on both the suppression and representation of information from the anterior work of nonconscious cognition, and in this respect it is always belated. From this perspective, it is no surprise that consciousness is temporally vulnerable to phenomena that enter under its half-second delay; indeed, this is the normal way that all sensory information is processed. What is different in computational microtemporal addresses is that the messages are not simply coming from the body’s sensory interfaces with the outside world (as well as from internal sensing mechanisms) but rather are targeted by corporate interests specifically to create a propensity toward certain kinds of information, for example the kind of branding information that Parisi and Goodman discuss in “Mnemonic Control” (2011). When this kind of targeted information reaches consciousness, for example when one is looking at a web page with side boxes displaying certain commodities, consciousness has already been predisposed to pay selective attention to some of them because of the information that had previously entered through nonconscious cognition, whether or not at that point it entered conscious awareness. This kind of approach was already denounced in Vance Packard’s 1957 The Hidden Persuaders, where it went by the name of “subliminal” advertising, but now that the mediascape has enormously expanded and the technologies of micro-address have become much more sophisticated, it returns with a virulence to direct human attention to the products that corporations want us to purchase. It is difficult to know how to protect ourselves against this informational onslaught, given that it exploits how the human neuronal system works. The key, no doubt, lies in the fact that only consciousness can initiate intentional action such as clicking on a “Buy” icon. That allows time for reflection and resistance to come into play.
A similar issue is the distributed agency that human enmeshment in cognitive assemblages implies. No doubt consciousness has always tended to exaggerate its own importance in human cognition, through its internal monologue that typically dominates human awareness. As stand-up comedian Emo Philips has joked, “I used to think the brain was the most wonderful organ in the body, but then I asked myself, ‘Who’s telling me this?’” Meditation techniques, mindfulness exercises, and other body-awareness practices aim to stop the internal monologue and empty the mind of conscious thoughts so that another kind of awareness can enter. In this sense, distributed cognition is a hallmark of human being, central to the body’s functioning as a semiotic entity through which external and internal messages are constantly passing and interacting, with only a small part of them available to consciousness. With the advent of computational media, however, distributed agency takes on a different sense as human–computational assemblages perform tasks that human cognition alone could never accomplish. Already problematic, in my view, is the notion of “free will,” because it over-simplifies the body’s complex interplays between sensory inputs, neuronal processes, and conscious awareness, tilting the matter entirely too far toward conscious intentionality, as if that were all that is involved. With cognitive assemblages, “free will” becomes hopelessly muddled, to the extent that it is rendered virtually useless as an analytical concept. Consider, for example, an officer standing next to a drone pilot as they scrutinize images from the drone’s camera, and the pilot waits for the officer’s decision to strike or not. Is the officer making his decision based on “free will”? The images he relies on to distinguish a disguised enemy combatant from a woman on her way to the market have already been highly processed by the drone’s computerized visual system, where innumerable software decisions have been made about which information to accept and which to reject – decisions that have significant consequences and that the officer himself cannot access or evaluate. Moreover, the officer acts in a highly regulated environment with its own complex constraints on what kinds of decisions and actions he can take. Finally, the drone pilot and the drone itself are part of this cognitive assemblage, and they both have behaviors they can initiate independent of what the officer decides. This situation is unusual in that it involves a life-or-death decision, but it is not at all unusual in the interplays between complex cognitive components, the information they can and cannot access, the constraints on their actions, and resultant actions that the assemblage as a whole will enact. This is what distributed agency looks and feels like in the computational era. The cognitive assemblage framework, with its emphasis on the information, interpretations, and meanings circulating throughout the assemblage and the cognizers, human and nonhuman, that comprise it, provides a way to talk about distributed agency that does not succumb to the panic implicit
At the same time, the cognitive assemblage approach creates many possible openings for analytical inquiry, from the construction of software algorithms to hardware configurations of computational platforms to the interfaces with humans, and in turn to the bureaucratic and institutional constraints under which they may operate. Within cognitive assemblages as a whole, machine–machine communication is growing exponentially faster than human–human or human–machine communication. The technology company Cisco estimated that by 2017, the number of things connected to the internet was 20 billion; by 2020, that number was estimated to rise to 31 billion, and by 2030, 500 billion (Cisco 2019). The human population of the planet, by contrast, is about 7.4 billion and, although it too is predicted to rise (sparking concerns about resource scarcity), it is nevertheless increasing much more slowly than the numbers of smart machines. As machines communicate more with each other than with us, the intervals and pervasiveness of machine autonomy increase – areas where machines make decisions that affect not only other machines but also humans enmeshed in cognitive assemblages with them. Proponents of driverless cars, for example, argue that this is a good thing, because there will likely be, on the whole, fewer traffic accidents than when humans alone are in control. Nevertheless, the prospect of machine autonomy is concerning because there are many instances where humans, with their wider context-awareness, are better able to judge the consequences of actions. Cases in point are the two tragic airplane crashes of Lion Air Flight 610 on October 29, 2018, and Ethiopian Airlines Flight 302, en route to Kenya, on March 10, 2019, both flying the Boeing 737 Max 8 aircraft. As details emerge, it appears that the cause of both crashes was a malfunction in the Maneuvering Characteristics Augmentation System (MCAS), which is connected to external angle-of-attack (AoA) sensors. If these sensors indicate that the plane is flying too slowly or at too steep an angle, the MCAS software will force the plane nose down to prevent stalling. This software system is obviously not robust, because it depends on only one set of sensors; if these are not correct, then the software will dictate actions that could (and apparently did) result in the plane crashing. A day before the fatal Lion Air flight of October 29, a different set of pilots reported problems with the MCAS; they were fighting to lift the plane nose after the software turned it down, frantically going through their checklists to solve the problem. Fortunately, a pilot not on duty but riding in the cockpit jump seat (a “dead head,” as such pilots are called) knew how to disable the motor that powers the MCAS system and conveyed this information to the pilots, who were able to disable the system and (presumably) save the flight from the same fate that occurred a day later on the very same aircraft with a different set of pilots (Bloomberg Report 2019).
As machine autonomy increases, issues arise about how to program algorithms to make ethical decisions when some degree of harm is inevitable.
This is the premise for the “Moral Machine Experiment,” a computer game designed by an international team of researchers headed by Edmond Awad and recounted in “The Moral Machine Experiment” (Awad et al. 2018). The game used stick figures and asked users to make binary choices about how to distribute harm, choosing, for example, to spare more people rather than fewer, a pregnant woman rather than an elderly man, a person rather than a dog or cat, and so forth. The game elicited input from millions of users from 233 countries and territories, and sophisticated statistical techniques were used to analyze the results. The analysis revealed interesting systematic regional and ethnic differences in preferences, but perhaps more surprising was the large amount of consensus, for example, responses sparing children over adults, humans over nonhumans, women over men. The game has been rightly criticized on several grounds, ranging from its forced binary selections to such problematic choices as the fit versus the overweight, or those crossing streets legally versus those crossing illegally. Nevertheless, looking at the results, I was struck by how many of the preferences coincided with choices in favor of species survival – young over old, pregnant woman over elderly man, humans over nonhumans. Humans in extreme circumstances have made such wrenching choices in reality, from the abandonment of old people in Eskimo cultures when food becomes scarce, to the sacrifice of sled dogs to conserve food for humans in the Shackleton expedition to Antarctica. Here is an example where species-in-cybersymbiosis (the computer game, designed by humans and played by them but executed by algorithms, with the data collected and analyzed by more algorithms) becomes entangled with species-in-common on multiple levels, leading to the kinds of complexities that a general ecology of cognitive assemblages is designed to address.
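The statistical machinery of the published study (which relied on conjoint-analysis techniques) is beyond the scope of this chapter, but the basic logic of distilling collective “preferences” from millions of binary choices can be suggested with a deliberately simplified sketch. All names, records, and the crude scoring rule below are hypothetical illustrations, not the researchers’ actual pipeline:

```python
from collections import defaultdict

# Each dilemma shows two groups of characters; the player chooses which
# group the hypothetical autonomous car should spare. Invented records:
responses = [
    {"spared": ["child", "pregnant_woman"], "sacrificed": ["elderly_man"]},
    {"spared": ["human"], "sacrificed": ["dog"]},
    {"spared": ["child"], "sacrificed": ["adult"]},
]

spared = defaultdict(int)    # times a character type was spared
appeared = defaultdict(int)  # times it appeared in a dilemma at all

for response in responses:
    for character in response["spared"]:
        spared[character] += 1
        appeared[character] += 1
    for character in response["sacrificed"]:
        appeared[character] += 1

# A crude "preference score": how often a character type was spared
# when it appeared. Aggregated over millions of users, scores like
# these can be compared across regions and cultures.
for character in sorted(appeared):
    print(character, round(spared[character] / appeared[character], 2))
```

Even this toy version shows where the criticisms bite: the categories (“child,” “dog”) are fixed in advance by the designers, and the forced binary format admits no refusal or nuance.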
Activating Species Concepts for Global Challenges
The major issue confronting those of us who want to address the Global Challenges is immediately and starkly evident: why, when it seems clear that we as a species could make significant progress on several of the challenges with technologies already readily available, is it so difficult in fact to do so? Peter Haff, in his analysis of large technological systems, gives us a partial answer (2014: pdf available from author at Researchgate.net). Arguing that large-scale technologies such as communication systems, transportation networks, and energy and resource extraction industries comprise a “technosphere,” Haff proposes to analyze “the dynamics of this newly emerged Earth system and the consequences for humans of being numbered among its parts” (2014, pdf 1). Rejecting the idea that humans control the technosphere, he instead takes the approach that “the workings of modern humanity are a product of a system that operates beyond our control and that imposes its own requirements on human behavior” (ibid., pdf 2). From this premise, he deduces rules for how the technosphere operates, based on coarse-grained scale distinctions between micro-scale (small compared to humans, which he calls Stratum I), meso-scale (human-sized, Stratum II), and macro-scale (very large compared to humans, Stratum III) (ibid., pdf 6).
His rules include ones specifying that macro-objects cannot directly affect humans, and that humans cannot directly affect macro-objects. If we consider the reality that the human species, through its activities, is accelerating climate change and endangering the future of humans on this planet, along with risking extinction of myriad nonhuman species, it is easy to agree that humans are not in control of the technosphere: otherwise, why would we rush headlong to our own destruction? Yet Haff surely indulges in misplaced agency when he argues that the technosphere as such can carry out intentional actions (in this respect his approach resembles that of Kevin Kelly in What Technology Wants [2011]). For example, one of his six rules is that the technosphere acts to protect the humans that are some of its components. It is as though his approach has carried out a vector analysis of the global totality of competing/cooperating forces that comprise the technosphere, and then, by designating their resultant with a singular noun, has created a fantasy-object that has intentionality, agency, and rule-based behaviors. The key to his flawed approach can be found here: “We analyze the role of technology in the Anthropocene by examining basic physical principles that a complex dynamic system must satisfy if it is to persist, i.e. continue to function” (Haff 2014, pdf 3). This imparts to the technosphere an imperative similar to the biological imperative, “survive and reproduce.” But plenty of technical systems have failed to function and consequently have not persisted in human history – narrow-gauge railways in Great Britain, Jacquard looms throughout Europe, horse-and-buggy transportation networks in the US, samurai cultures in Japan. There is no will within a technical system to persist, only the desires of individual humans that it do so. Even these desires are not homogeneous throughout the system; for example, many humans abandoned making buggies (including wheels, axles, whips, traces, and so forth) when they saw there was no future in them. These objections notwithstanding, some of his analysis yields insight into our present predicament. In comparing the technosphere to a navy ship, he argues that the captain (meso scale) can control the ship (macro scale) as well as the cams on the ship engine (micro scale) because connecting all three scale levels are mediating mechanisms that allow interactions between levels, from the mechanical linkages in the engine, to the computational media in the navigation and communication systems, to the hierarchical structures that regulate the captain’s relations with his junior officers and crewmen. Only because of these multi-scalar and highly articulated mechanisms, developed over centuries of navy tradition and engineering, can one person control the actions of the ship as a whole. He contrasts the navy ship with the technosphere: the technosphere is not an engineered or designed system and during its emergence has not relied on nor required an overall leader, and in consequence lacks the infrastructure needed to support leadership. In this regard the technosphere resembles the biosphere – complex and leaderless. (2014, pdf 10)
Therein lies the relevance to Global Challenges: one way to understand our present situation is through the need to create more mediating mechanisms that connect different scale levels to each other, creating the possibility for humans, in all their diversity as well as species-specificities, to begin to affect the direction of events. We might think of the Paris Accords as a successful attempt to create a series of such mechanisms linking together individual governments, corporations, and human actors within them to achieve global-scale results. Another experiment in creating multi-scale linkages is “The Moral Machine Experiment”; for all its faults, it succeeded in gathering data from millions of participants across the globe, analyzing the results and presenting them in easy-to-understand graphs illustrating overlaps and divergences between different regions and cultures. In effect, it provides linking mechanisms between the programming of self-driving cars (species-in-cybersymbiosis), millions of human users (species-in-common), and the complex bureaucratic and technical systems that will determine how driverless cars are actually developed (the technosphere). Such experiments suggest that it may be possible to develop other multi-scalar mechanisms to address the Grand Challenges facing us and nonhuman species (species-in-biosymbiosis). A general ecology of cognitive assemblages provides a framework through which to map existing linkages and, equally important, points to possibilities for developing more linkages across multiple scale levels. For such an ecology to succeed, it is necessary to have some way to talk about the enmeshments of humans, nonhuman others, and our computational symbionts without obliterating important distinctions and yet also acknowledging commonalities. The relational thinking characteristic of a general ecology includes the necessity to discover and specify the mechanisms through which relations are established and also to help bring other mechanisms into existence where they are lacking. Grand Challenges need species challenges to achieve effective action, just as species challenges need Grand Challenges to direct attention and effort toward our most urgent problems. A general ecology of cognitive assemblages provides a framework within which both can happen.
Notes
1 UN Intergovernmental Panel on Climate Change (IPCC) warns that the planet will reach the crucial threshold of 1.5 degrees Celsius above preindustrial levels by as early as 2030, leading to extreme weather events, sea level rise, food shortages, and increased human misery.
2 More information on the Grand Challenges can be found at the Gates Foundation, https://gcgh.grandchallenges.org/.
3 For a reprint of the original by Richard Ryder, see https://www.veganzetta.org/wp-content/uploads/2013/02/Speciesism-Again-the-original-leaflet-Richard-Ryder.pdf.
4 An example of this interdependence and the shared semiotic space it creates is recounted in Lorimer (2006), about the re-establishment of a reindeer herd in Scotland and the herders who tend them.
5 More than two decades ago I participated in a residential research group at the University of California, Irvine that included such well-respected scholars as the eminent environmental historian William Cronon, historian Richard White, landscape architect Anne Whiston Spirn, and South American ethnographer Candace Slater, as well as several emerging scholars. Our semester-long discussions, which included such wry comments as Richard White remarking that we know something is a wilderness because there is a 300-page book of regulations governing what you can and cannot do, culminated in the anthology Uncommon Ground: Rethinking the Human Place in Nature (Cronon 1996). After the book was published, Gary Snyder sent a bitter email to Cronon saying that our book had set back the environmental cause 20 years. In hindsight, however, a stronger, more enlightened environmentalism has since emerged that acknowledges the interpenetration of natureculture and does not rely on romanticized tropes of solitude, awe, and grandeur for its effectiveness.
References
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. 2018. “The Moral Machine Experiment.” Nature 563 (24 October): 59–64.
Bloomberg Report. 2019. “Off-Duty Pilot Saved Lion Air 737 Max One Day Before Doomed Flight.” March 20, 2019. https://www.bloomberg.com/news/videos/2019-03-20/off-duty-pilot-saved-lion-air-737-max-one-day-before-doomed-flight-video.
Cisco. 2019. “The Internet of Things.” https://www.cisco.com/c/dam/en/us/products/collateral/se/internet-of-things/at-a-glance-c45-731471.pdf.
Cronon, William. 1996. Uncommon Ground: Rethinking the Human Place in Nature. New York: W.W. Norton.
Gabrys, Jennifer. 2016. Program Earth. Minneapolis: University of Minnesota Press.
Guillory, John. 2010. “Genesis of the Media Concept.” Critical Inquiry 36, no. 2 (Winter): 321–362.
Gyles, C. and P. Boerlin. 2014. “Horizontally Transferred Genetic Elements and Their Role in Pathogenic Bacterial Disease.” Veterinary Pathology 51, no. 2: 328–340.
Haff, Peter. 2014. “Humans and Technology in the Anthropocene: Six Rules.” The Anthropocene Review 1, no. 2: 126–136. https://doi.org/10.1177/2053019614530575.
Hansen, Mark. 2015. Feed-Forward: On the Future of Twenty-First-Century Media. Chicago: University of Chicago Press.
Haraway, Donna J. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Durham, NC: Duke University Press.
Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Hoffmeyer, Jesper. 2008. “Semiotic Scaffolding of Living Systems.” In Introduction to Biosemiotics: The New Biological Synthesis, edited by Marcello Barbieri, 149–166. Dordrecht: Springer. http://jhoffmeyer.dk/One/scientific-writings/semiotic-scaffolding.pdf.
Holmes, Brian. 2018. “Learning from Cascadia.” https://deptofbioregion.org/department-of-bioregion/2018/12/11/ecotopia-today-learning-from-cascadia.
Hörl, Erich. 2013. “A Thousand Ecologies: The Process of Cyberneticization and General Ecology,” trans. from the German by James Burton, Jeffrey Kirkwood, and Maria Vlotide. In The Whole Earth: California and the Disappearance of the Outside, ed. Diedrich Diederichsen and Anselm Franke, 121–130. Berlin: Sternberg Press. https://www.academia.edu/7484844/A_Thousand_Ecologies_The_Process_of_Cyberneticization_and_General_Ecology.
Hörl, Erich. 2018. “The Environmentalitarian Situation: Reflections on the Becoming-Environmental of Thinking, Power, and Capital,” trans. Nils F. Schott. Cultural Politics 14, no. 2: 153–173.
Hörl, Erich. 2019. “Environmentalitarian Time: Temporality and Responsibility under the Technoecological Condition.” Keynote Address, Moral Machines Conference, Helsinki, Finland, March 7.
Jue, Melody. 2020. Wild Blue Media: Thinking Through Seawater. Durham, NC: Duke University Press.
Kelly, Kevin. 2011. What Technology Wants. New York: Penguin.
Latour, Bruno. 2018. Down to Earth: Politics in the New Climatic Regime. Cambridge, UK: Polity Press.
Libet, Benjamin. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” The Behavioral and Brain Sciences 8, no. 4: 529–566.
Lorimer, Hayden. 2006. “Herding Memories of Humans and Animals.” Environment and Planning D 24, no. 4: 497–518.
Moore, Jason. 2014. “Toward a Singular Metabolism: Epistemic Rifts and Environment-Making in the Capitalist World.” In Grounding Metabolism, ed. Daniel Ibañez and Nikos Katsikis, 10–20. New Geographies 06. https://jasonwmoore.com/wp-content/uploads/2017/08/Moore-Towards-a-Singular-Metabolism-2014-NG.pdf.
Moore, Jason. 2015. Capitalism and the Web of Life: Ecology and the Accumulation of Capital. London: Verso Books.
Packard, Vance. 1957. The Hidden Persuaders. New York: D. McKay Company.
Parisi, Luciana, and Steve Goodman. 2011. “Mnemonic Control.” In Beyond Biopolitics: Essays on the Governance of Life and Death, ed. Patricia Ticineto Clough and Craig Willse, 167–176. Durham, NC: Duke University Press.
Peters, John Durham. 2016. The Marvelous Clouds: Toward a Philosophy of Elemental Media. Chicago: University of Chicago Press.
Rorty, Richard. 1998. “Human Rights, Rationality, and Sentimentality.” Chapter 9 in Truth and Progress: Philosophical Papers, 167–185. Cambridge: Cambridge University Press. http://ieas.unideb.hu/admin/file_6249.pdf.
Sagan (Margulis), Lynn. 1967. “On the Origin of Mitosing Cells.” Journal of Theoretical Biology 14, no. 3: 255–274.
Shannon, Claude E., and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Shildrick, Margrit. 2019. “(Micro)chimerism, Immunity and Temporality: Rethinking the Ecology of Life and Death.” Australian Feminist Studies 34, no. 99: 10–24.
Teresi, Dick. 2011. “Discover Interview: Lynn Margulis Says She’s Not Controversial, She’s Right.” Discover Magazine (April 2011). http://discovermagazine.com/2011/apr/16-interview-lynn-margulis-not-controversial-right.
Part 2
The Ethos: Description and Formation
2
Viral Storytelling as Contemporary Narrative Didacticism
Deriving Universal Truths from Arbitrary Narratives of Personal Experience1
Maria Mäkelä
Introduction: Narrative Universality Claims and the Campfires of Contemporary Story Economy
While narrative imagination is touted as the universal propensity of the human mind both in contemporary research and public parlance, social media have radically changed the rhetoric and ethics of everyday storytelling. The immediate consequence is the proliferation of singular stories of personal experience and their rhetorical amplification within the public sphere. The storytelling consultants’ nostalgic cry for “compelling stories” as an antidote for information overflow, coupled with the social media prompt to “share your story,” has created a 21st-century storytelling boom that heavily instrumentalizes personal storytelling (see Shuman 2005; Polletta 2006; Salmon 2010; Fernandes 2017; Mäkelä 2018). Social media’s affordances for affective networking (see Van Dijck 2013; Papacharissi 2015) are key to this process of instrumentalization. When shared and accompanied by strongly polarizing signs of affect such as hearts and angry face emojis, mostly unverifiable and sometimes anonymous stories of personal experience have a potential to grow disproportionately representative (“This story is true in so many ways!”) and lead to normative conclusions (“This story highlights an issue that we need to tackle immediately!”) (Dawson and Mäkelä 2020; Mäkelä 2020; Mäkelä et al. 2021). Any appeals to the universal campfire of storytelling are thus bound to obscure the fact that contemporary narratives of the public sphere are often carefully curated, instrumentalized, and competing in myriad ways. Then again, whereas the campfire represents, sociolinguistically speaking, the most “natural” storytelling situation with its face-to-face, naturally occurring narratives (see Fludernik 1996), the campfires of social media bring forth a radically contrasting logic of narrative communication by detaching the circulated narratives from their origins. It is somewhat surprising that this feature of contemporary storytelling is largely ignored within the discourses of the storytelling boom. Jonah Sachs, for example, the writer of probably the most-read viral storytelling manual, Winning the Story Wars (2012), is famous for his work in the service of corporate sustainability, yet resorts to a universalist rhetoric that ignores the ethical and rhetorical complexities of viral storytelling:
great stories are universal because at their core, humans have more in common with each other than the pseudo-science of demographic slicing has led us to believe. Great brands and campaigns are sensitive to the preferences of different types of audiences, but the core stories and the values they represent can be appreciated by anyone. Universality is the opposite of insincerity. (Sachs 2012, 44)
The celebration of narrative in non-academic discourses as a universal tool and therefore essentially “natural” in the sense of mutually accessible, immediate and innocent finds ample support from contemporary cognitive studies: the campfire rhetoric of storytelling consultants, effacing individual backgrounds of storytellers and audiences as well as the varying affordances of narrative platforms, is a neoliberal, streamlined interpretation of the cognitive rhetoric on storytelling promoted by evolutionary narrative studies (e.g. Boyd 2009) and cognitive narratology (see, for example, Sternberg 2001). In the following, however, I will demonstrate how concepts and notions originating from narrative theory and narrative studies may also yield analytical applications that counteract the essentializing discourse on storytelling and its many virtues. Yet this critical approach requires a non-universalizing account of the forms of narrative agency and affect conditioned by social media. Narrative, at best, is an artform able to pass off the particular as the “universal” in the Aristotelian sense, and therefore its didactic uses date back to the origins of language. Narratives, in their prototypical oral forms, tend toward moral positioning (e.g. Pratt 1977) and an explicit evaluation of the moral and the point of the story (Labov and Waletzky 1997 [1967]). Yet the campfires of social media differ radically from the campfires of prehistoric times, as the generation of our shared mythologies is conditioned by likes, shares, and algorithms that support strong affect. Rather than a product of our joint brain architecture, the alleged universality of stories going viral in the public sphere is the result of a clash between the affordances of narrative form and the affordances of social media (Mäkelä et al. 2021). At the core of the storytelling boom, we may find what first-wave cognitive narratologists such as David Herman (2009), Monika Fludernik (1996) and Marie-Laure Ryan (2007) would call a prototypical narrative: a situated account of what it feels like for a particular person to live through a disruptive experience in a storyworld conveyed through particulars. As such, then, the most tellable of stories – a “compelling story” in consultant jargon – is the very opposite of universalism: it conveys a particular experience in particular circumstances, and moreover, is inclined to foreground the unexpected, the out-of-the-ordinary. Paradoxically, however, in social media, the universality of the story’s moral depends precisely on its personality, alleged authenticity and particularity. This leap from experientiality and particularity to representativeness is enabled by the affordances of social media storytelling: besides the proliferation and amplification of personal narratives, another consequence of social media for the contemporary story economy is a singular narrative’s radical detachment from its original source – the particularized teller or experiencer of the narrative.
If “compelling” enough, a narrative of personal experience is usurped by the agential assemblages (see Chapter 1 by Hayles and Chapter 3 by Roine and Piippo in this volume) of algorithms, platform affordances and user collectives, and transformed into co-constructed, stripped-down, “skeletal” and thus easily shareable and adaptable masterplots (see Abbott 2008a), conforming to the polarized expectations of different social media audiences. In the following, my aim is to flesh out the relationship between narrative didacticism, narrative universalism and the viral story logic of social media, with particular attention to the reshaping of narrative rhetoric and ethics in contemporary narrative environments where your story is never truly yours. I ask how the story logic of social media is able to give rise to claims of “universal truth” based on arbitrary narratives of personal experience. Unlike Hanna-Riikka Roine and Laura Piippo in this volume, I do not consider the loss of traceable narrative agency in viral storytelling to be a reason to abandon a focus on the particularly narrative logic of some – not all! – viral phenomena. This choice of mine is, first and foremost, methodological. Roine and Piippo are unquestionably right in their insightful synthesis of the complexities of human–technical assemblages as the ultimate force spreading and interpreting content in digital environments. Yet as a narrative theorist, specializing not in algorithms but in narrative structure and its uses, I find it useful for the interdisciplinary fields of both narrative and social media studies to be able to analyse some facets of digital meaning-making agency while necessarily bracketing some other facets for the sake of precision and methodological yield. A limited focus on narratives of personal experience moreover connects the story logic of social media to the contemporary storytelling boom as a cultural dominant that transforms experiential particulars into cultural and political, often polarized doxa. The contemporary story economy can be considered to form a crucial part of phenomena that have previously been conceptualized as the attention economy (e.g. Terranova 2012) and emotional capitalism (Illouz 2007), all three phenomena sharing storytelling, social media, and the affects of the neoliberal subject as their core features.
What is Viral Storytelling? From Immediate Experiences to Moral Positioning
The narrative appropriation of the personal and the particular in social media is most simply exemplified by the memetic reuse and spread of stories of personal experience in forms that condense the moral of the story into a sloganish one-liner. A well-known example from the Finnish public sphere would be the widespread social media appropriation of the comment made by Riikka Slunga-Poutsalo, the party secretary of the nationalist True Finns party, in a tabloid interview in 2015.
As a response to accusations concerning the party’s association with right-wing extremist groups, the secretary recounted a hearsay story of a Kosovan asylum seeker being told at the social insurance office to just live on welfare benefits and forget about employment. The secretary concluded her narrative by uttering the meme-friendly words: “Whether the story is true or not, that’s another thing. This is how people experience things.” The statement continues to live on in public parlance and especially social media, mainly for the purpose of tagging a narrative or a comment as completely subjective and therefore unreliable. A parallel example from Sweden would be the viral story of the “Jimmie Moment” originated by physician Kajsa Dovstad in her guest column in Göteborgs-Posten, referring to the party leader of the right-wing populist Sweden Democrats, Jimmie Åkesson, and his political credo. Dovstad recounted her late-night experience of trying to buy “traditional” Swedish food in Gävle, populated with Middle Eastern grocery stores. Dovstad wrote: “I am in Sweden, in a Sweden that does not feel Swedish. And I don’t like it. A Jimmie moment, as my friend would say.” This concise, storified neologism – the “Jimmie moment” – went viral both in the anti-immigration and anti-racism camps of social media: the conservative and radical anti-immigrationists appropriated this novel and yet easily malleable masterplot to recount their own “similar” experiences of culture shock and estrangement in their native country, whereas the anti-racists turned the masterplot around to recount their “anti-Jimmie moments” of opening their eyes to the growing xenophobia in Sweden. However crafty the original narrative behind the viral phenomenon, and however apparent the ethos of the original storyteller, the narrative’s viral afterlife turns it into common property. Admittedly, the meaning of a narrative – particularly that of a written one – was considered disconnected from authorial intention for the long 20th century of literary studies, and increasingly conditioned by the contexts and the horizons of expectations of interpretive communities. Yet in social media storytelling, the use of narratives is characterized by a much more significant distance from the original storyworld, teller, and narrative occasion (cf. Phelan 1996, 120–2) than in more traditional forms of storytelling, while at the same time, all the liking and sharing we do is part of the “natural” continuum from naturally occurring face-to-face storytelling to our social media identities. A narrative theorist is thus forced to ask: to what extent can we even speak of viral storytelling? Can narrative studies methods, mainly developed for the analysis of literary fiction, face-to-face communication and interviews, be of any help in the analysis of viral storytelling? As Roine and Piippo argue in this volume, “tying authorship up with distinct agents is not, in digital environments, accurate or beneficial, as it emphasizes human activity at the expense of the nonhuman agencies of digital technology” (p. 000). Instead, Roine and Piippo promote an approach to social media storytelling that would account for the myriad visible and nonvisible mechanisms making content available and guiding interpretations of it, human and nonhuman.
A narrative theorist is, however, hard-pressed to imagine what this kind of a complex, multi-layered agential analysis of a particular case of viral storytelling would look like. By arguing against the homocentricity of much of the linguistic or literary research on social media storytelling, Roine and Piippo position themselves among those scholars of virality that consider platform architectures as key to why virality occurs in the first place – other positions highlighting, by contrast, either the role of influencers and mainstream media as gatekeepers, or the virality potential of a particular type of content – emotional relatability, eventfulness, or importance in a specific context (Munster 2013; Nahon and Hemsley 2013; Stage 2017). Another take would concentrate less on the agents and more on the consequences of virality; as intelligibly formulated by Tony Sampson, “small, unpredictable events can be nudged into becoming big, monstrous contagions without a guiding hand” (Sampson 2012, 6). Sampson’s formulation, in turn, dovetails with notions of complexity and emergence that have recently gained ground in narrative theory. The problematic relationship between full-blown and tellable narratives requiring, in Porter Abbott’s words, a “centralized controlling instance” and complex, emergent phenomena such as evolution or climate change that proceed “without a guiding hand” has been explored in narrative theory (Abbott 2008b; Walsh and Stepney 2018; Grishakova and Poulaki 2019), yet little attention has been devoted to the emergent qualities of social media agency. In our recent article, Paul Dawson and I argue for the pertinence of emergent authority in social media (Dawson and Mäkelä 2020); next I will try to demonstrate how viral storytelling as a social media activity that lacks traceable narrative agency can nevertheless be analysed in narrative terms. Understanding agency in terms of emergence does not exclude human action, nor does it even foreground non-human action; as recently summarized by Marie-Laure Ryan, “[e]mergence, in its strongest form, is a property of phenomena that we do not fully understand: how the individual elements of a system organize themselves into larger functional patterns without the top-down guidance of a controlling authority” (Ryan 2019, 42). Not all viral content is narrative, and thus not all social media activity is necessarily storytelling. Eminent social media theorists such as Zizi Papacharissi repeatedly use the word “storytelling” to denote any affective co-creation on social media, yet it would be useful to better elaborate on the differing degrees of narrativity in our social media activities. Consider, for example, the archetype of a viral phenomenon: a cat video. Reactions to cat videos are without a doubt affective and embodied, and sharing them creates a network of collective affect that is being transformed and refined into culturally recognizable feelings that range from rapture to joy and amazement and find their expression in comments and shares.
Indeed, the recent waves of cognitive narrative studies have highlighted embodied experientiality as the key ingredient of narrativity, and even recent narrative complexity theories consider embodied experiences to be the main trigger for narrative sensemaking (see Grishakova and Poulaki 2019, 15). Yet we may well ask if sharing cat videos has anything to do with storytelling, and I would maintain that intuitively speaking, no. The lack of narrativity does not come down to the lack of experientiality in the original video material nor in the paratexts such as shares, comments, and likes; what is usually lacking in cat videos going viral is moral positioning and a search for a “teachable moment.” Sharing a cat video rarely implies representativeness: the point of sharing the video is not to argue how cats, in general, are. Still less is there normativity in it: we do not share a cat video to propagate a world view or a moral position – such as advocacy of stern discipline for cats. In other words, viral storytelling – at least in its prototypical form – elevates particulars onto the level of universals, while not all viral material ends up in such didactic use. Narratives that have prototypical elements in the cognitive-narratological sense, such as temporal causality, human qualia, storyworld particulars, and a breach in the expected script (see Bruner 1991; Hyvärinen 2016), tend to be shared and read as exempla. According to sociolinguists Anna De Fina and Alexandra Georgakopoulou (2012, 98), the logic of the exemplum and its inherent narrative-argumentative double standard dominates our everyday storytelling: stories of personal experience are recounted as manifestations of some pre-given, generally accepted truth or a normative stance (“let me tell you about cats – I’ll give you an example from my own experience … ”), while at the same time these stories are presented as evidence of that very same maxim (“this is what happened with me and my cat, and I guess that’s how cats are”). Precisely because of this rhetorical double standard, argue De Fina and Georgakopoulou, narratives of personal experience are notoriously difficult to argue against while at the same time they are effective in displaying and maintaining moral stances. Virality amplifies the logic of the exemplum. The vicious cycle from particulars to universals and back is amplified with every share, which adds to the gestures of narrative positioning and reinforces the moral of the story. This reinforcement is achieved by claiming ownership of the shared story by way of connecting it to the user’s own experience. The Swedish “Jimmie moment” story meme is a perfect example of this story logic. A general truth about Sweden forgetting its cultural roots and causing estrangement in its native citizens takes the form of a storified meme, which again gives rise to new exempla and new expressions of confirmation of this “truth.” One explanation for the success of the “Jimmie moment” is precisely that it offers an easily adaptable masterplot, a rough story format with a familiar structure of conversion or epiphany that is moreover verbalized as a general doxa (“a Sweden that does not feel Swedish”). However, as previously argued, the rhetorical detachment of the story meme from its original authority and setting in social media unleashes it for unorthodox and parodic uses.
This affordance for counter-narrativity by positioning is what played out with the Finnish story meme (“Whether the story is true or not [ … ] this is how people experience things”), as the original ethos, considered paranoid and xenophobic, was turned against itself in the social media appropriation of the narrative. Again, what is crucial for the normative use of the story meme is the moral positioning already present in the original narrative, highlighted by the memorable evaluation. The “original” story about the Kosovan immigrant may itself have been a viral narrative among anti-immigrationists, yet what truly went viral was the narrative positioning, reimagined. As in the case of the “Jimmie moment,” these contrasting narrative positionings can only be considered “storytelling” against the backdrop of canonized “cultural narratives” (e.g. Phelan 1996; Dawson and Mäkelä 2020) operating in the background as shared cognitive schemata, ideological stances and conventions of telling. In viral storytelling, moral positioning is thus a key mediator between narrative particulars and universals. Could this narrative logic even partly explain the growing polarization of contemporary “cultural narratives” in the public sphere (see, e.g. Bail et al. 2018)?
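The feedback loop described here – each share is also an endorsement, and accumulated endorsements make further sharing more likely – can be made concrete with a toy simulation. The sketch below is an invented illustration, not a model of any actual platform: all numbers and the contagion rule are hypothetical, chosen only to show how an exemplum’s apparent representativeness can snowball out of individually small gestures of positioning.

```python
# Toy model of the exemplum's "vicious cycle": each share is an
# endorsement that raises the story's apparent representativeness,
# which raises the probability that newly exposed users share it too.
population = 100_000
shares = 10                  # early seeding by a handful of accounts
reach_per_share = 30         # users exposed per share (hypothetical)
base_p = 0.001               # share probability with no endorsements
endorsement_boost = 0.0005   # added probability per accumulated share

for step in range(8):
    p = min(0.5, base_p + endorsement_boost * shares)
    exposed = min(population, shares * reach_per_share)
    shares += exposed * p    # expected new shares, kept fractional
    print(f"step {step}: shares = {shares:.0f} (share probability {p:.3f})")
```

With endorsement_boost set to zero, the cascade stays flat; the runaway growth comes entirely from the recursion between accumulated endorsements and the willingness to share – a computational restatement of the claim that virality amplifies the logic of the exemplum.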
“This.” – Or, the Teachable Moments of Social Media
Ultimately, the narrative didacticism characterizing viral storytelling is a result of an emergent collaborative narrative effort, where small narrative gestures such as short tags, framings and reactions contribute to the claims for the representativeness and universality of the original material. As argued in recent small stories research, social media is making such small gestures of narrative positioning increasingly tellable and multipliable (Bamberg and Georgakopoulou 2008; Georgakopoulou 2013; Georgakopoulou 2017). Instead of fostering narrative diversity (everyone telling “their own story”), social media storytelling favors narrative positioning with small gestures of affect (see also Page 2018). A narrative most likely to go viral is the one offering a moral position so easily multipliable that the accompanying word “This.” suffices. Idiosyncratic or ambivalent narrative content does not spread as easily as stories that conform to familiar patterns and positionings. Viral storytelling therefore relies heavily on presupposed narrativized knowledge, such as cultural masterplots enforcing preexisting ideologies and opinions. In their black-and-whiteness, these viral narratives not only consolidate the affective consensus of the likeminded, but often hand a loaded gun to their political opponents, as simple positionings are easy to turn around. Moreover, while a narrative of personal experience qua experience has a significant potential for virality and functions as the first step followed by massive leaps to representativeness and normativity, the particulars of a personal experience are easily contested if the motivation is to reject the story completely.
Yet both the affective consensus and the backlash by positioning are not reducible to any identifiable narrative agent – and therefore the moral authority they depend on is emergent. An illustrative case of emergent moral authority and of the other side of the coin, the narrative backlash by way of upending the positioning of a narrative, is the notorious viral video scandal known as the “Lincoln Memorial controversy,” or the “Covington Kids Controversy” (see Dawson and Mäkelä 2020; Mäkelä et al. 2021). At the same time, this case demonstrates how experientiality and storyworld disruption can be projected onto a minimal content if the social media invitation to a moral positioning is strong enough. A one-minute video clip, shot at the Indigenous Peoples March at the Lincoln Memorial in January 2019, caused an almost unprecedented upheaval on the social media profiles of US citizens, and the viral phenomenon was uncritically reinforced by the leading non-conservative US media (CNN, Washington Post, New York Times). The footage shows a high school teenager wearing a “Make America Great Again” cap of Trump supporters, face to face with a Native American elder playing a drum. The video was launched and promoted by a couple of fake social media accounts, but it was actual American people, ranging from ordinary citizens to high-end celebrities and journalists, representing the liberal left, who took care of spreading the video as an alarming exemplum of the growing racism and the return of white supremacism in United States politics. The enraged Twitter responses (Alyssa Milano: “This is Trump’s America”; Bernice King: “This is ugly, America”) read the ambivalent expression on the teenager’s face as the face of a backward nation that can only confront its past with ridicule and contempt. As in such typical cases of viral exemplum (Mäkelä 2018; Dawson and Mäkelä 2020; Mäkelä et al. 2021), here too the leaps from (projected) experientiality to representativeness (our nation) and normativity (outright death threats to this random high school student) were enormous yet incredibly quick. Again, the perfect opportunity was laid out for the conservative backlash, which proceeded by first finding contrary evidence from additional footage, then sharing the student’s official statement recounting the events from his perspective, and finally celebrating the fact that Democrats are spreading fake news. The social media story wars over the viral video are still present in the polarized political setup of the United States, while along the way, the life and “narrative” of both the high school student and the Native elder have been repeatedly instrumentalized for one purpose or another. Sociologists Francesca Polletta and Nathan Redman (2020) have recently reviewed several studies concerning narrative persuasion in politics, partly in order to challenge the general assumption, fueled by today’s storytelling industry, of narrative universals overcoming political differences. Focusing on storytelling that attempts to change the audience’s opinion on structural problems in society, they found that stories of personal experience and other exempla that rely on individual characters rarely change the audience’s political opinions.
Depending on what Polletta and Redman call “background stories and stereotypes” and what we might just as well call cultural narratives, masterplots, and easily adaptable narrative positionings, “a story may be heard as emotionally touching or as manipulative and inauthentic” (ibid.). In this chapter, I have attempted to demonstrate how this narrative dynamic is amplified by the narrative affordances of social media that are not reducible to the rhetoric and ethics of identifiable storytellers but result from the assemblage of narrative prompts by platform affordances, the “original” content and user collectives. Indeed, the logic of narrative universalism has changed since Aristotelian tragedy and the medieval exempla. A narrative’s potential to yield a moral lesson is no longer considered to be dependent on the laws of probability or necessity, or based on the pre-existing authority of the classics, the church, or the sovereign. Contemporary narrative didacticism is based on the illusion of immediate, personal experience and fuelled by the narrative affordances of social media. A weapon of heavy-handed morality, a story of personal experience going viral is nevertheless free of responsibility, ethical, referential or otherwise. The chain reaction from experientiality to representativeness and normativity creates emergent narrative authority, and thus fosters narrative agency that cannot be held accountable for fact-checking or respect for story ownership. Yet narrativized hate campaigns repeatedly target individuals who have little to do with the narrative assemblages that have construed them as heroes or villains. Therefore, if anything, viral storytelling is a dubious art of disproportion.
Note
1 This article was written in the context of the consortium project “Instrumental Narratives: The Limits of Storytelling and New Story-Critical Narrative Theory,” funded by the Academy of Finland (grant no. 314768).
References
Abbott, H. Porter. 2008a. The Cambridge Introduction to Narrative. 2nd edition. Cambridge: Cambridge University Press.
Abbott, H. Porter. 2008b. “Narrative and Emergent Behavior.” Poetics Today 29, no. 2: 227–244.
Bail, Christopher A., Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M. B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. “Exposure to Opposing Views on Social Media Can Increase Political Polarization.” Proceedings of the National Academy of Sciences of the United States of America 115, no. 37: 9216–9221.
Bamberg, Michael, and Alexandra Georgakopoulou. 2008. “Small Stories as a New Perspective in Narrative and Identity Analysis.” Text & Talk 28, no. 3: 377–396.
Boyd, Brian. 2009. On the Origin of Stories: Evolution, Cognition, and Fiction. Cambridge, MA: Harvard University Press.
Bruner, Jerome. 1991. “The Narrative Construction of Reality.” Critical Inquiry 18, no. 1: 1–21.
Dawson, Paul, and Maria Mäkelä. 2020. “The Story Logic of Social Media: Co-Construction and Emergent Narrative Authority.” Style 54, no. 1: 21–35.
De Fina, Anna, and Alexandra Georgakopoulou. 2012. Analyzing Narrative: Discourse and Sociolinguistic Perspectives. Cambridge: Cambridge University Press.
Fernandes, Sujatha. 2017. Curated Stories: The Uses and Misuses of Storytelling. New York: Oxford University Press.
Fludernik, Monika. 1996. Towards a “Natural” Narratology. London and New York: Routledge.
Georgakopoulou, Alexandra. 2013. “Small Stories Research and Social Media Practices: Narrative Stance-Taking and Circulation in a Greek News Story.” Sociolinguistica 27: 19–36.
Georgakopoulou, Alexandra. 2017. “Narrative/Life of the Moment: From Telling a Story to Taking a Narrative Stance.” In Life and Narrative: The Risks and Responsibilities of Storying Experience, ed. Brian Schiff, A. Elizabeth Kim, and Sylvie Patron, 29–54. Oxford: Oxford University Press.
Grishakova, Marina, and Maria Poulaki (eds.). 2019. Narrative Complexity: Cognition, Embodiment, Evolution. Lincoln, NE: University of Nebraska Press.
Grishakova, Marina, and Maria Poulaki. 2019. “Introduction: Narrative Complexity.” In Grishakova and Poulaki, eds., 1–26.
Herman, David. 2009. Basic Elements of Narrative. Chichester: Wiley-Blackwell.
Hyvärinen, Matti. 2016. “Expectations and Experientiality: Jerome Bruner’s ‘Canonicity and Breach’.” Storyworlds 8, no. 2: 1–25.
Illouz, Eva. 2007. Cold Intimacies: The Making of Emotional Capitalism. Oxford: Wiley.
Labov, William, and Joshua Waletzky. 1997 [1967]. “Narrative Analysis: Oral Versions of Personal Experience.” Journal of Narrative and Life History 7, no. 1–4: 3–38.
Mäkelä, Maria. 2018. “Lessons from the Dangers of Narrative Project: Toward a Story-Critical Narratology.” Tekstualia 4: 175–186.
Mäkelä, Maria. 2020. “Through the Cracks in the Safety Net: Narratives of Personal Experience Countering the Welfare System in Social Media and Human Interest Journalism.” In Routledge Handbook of Counter-Narratives, ed. Klarissa Lueg and Marianne Wolff Lundholt. London and New York: Routledge.
Mäkelä, Maria, Samuli Björninen, Laura Karttunen, Matias Nurminen, Juha Raipola, and Tytti Rantanen. 2021. “Dangers of Narrative: A Critical Approach to Narratives of Personal Experience in Contemporary Story Economy.” Narrative 28, no. 2.
Munster, Anna. 2013. An Aesthesia of Networks: Conjunctive Experience in Art and Technology. Cambridge, MA: MIT Press.
Nahon, Karine, and Jeff Hemsley. 2013. Going Viral. New York: Polity Press.
Page, Ruth E. 2018. Narratives Online: Shared Stories in Social Media. Cambridge: Cambridge University Press.
Papacharissi, Zizi. 2015. Affective Publics: Sentiment, Technology, and Politics. New York: Oxford University Press.
Phelan, James. 1996. Narrative as Rhetoric: Technique, Audiences, Ethics, Ideology. Columbus: Ohio State University Press.
Polletta, Francesca. 2006. It Was Like a Fever: Storytelling in Protest and Politics. Chicago: University of Chicago Press.
Polletta, Francesca, and Nathan Redman. 2020. “When Do Stories Change Our Minds? Narrative Persuasion About Social Problems.” Sociology Compass 14, no. 4: e12788.
Pratt, Mary Louise. 1977. Toward a Speech Act Theory of Literary Discourse. Bloomington and London: Indiana University Press.
Ryan, Marie-Laure. 2007. “Toward a Definition of Narrative.” In Cambridge Companion to Narrative, ed. David Herman, 22–36. Cambridge: Cambridge University Press.
Ryan, Marie-Laure. 2019. “Narrative as/and Complex System/s.” In Grishakova and Poulaki, eds., 29–55.
Sachs, Jonah. 2012. Winning the Story Wars: How Those Who Tell – and Live – the Best Stories Will Rule the Future. Boston, MA: Harvard Business Review Press.
Salmon, Christian. 2010. Storytelling: Bewitching the Modern Mind, trans. David Macey. London and New York: Verso.
Sampson, Tony D. 2012. Virality: Contagion Theory in the Age of Networks. Minneapolis: University of Minnesota Press.
Shuman, Amy. 2005. Other People’s Stories: Entitlement Claims and the Critique of Empathy. Urbana: University of Illinois Press.
Stage, Carsten. 2017. Networked Cancer: Affect, Narrative and Measurement. Cham, Switzerland: Palgrave Macmillan.
Sternberg, Meir. 2001. “Universals of Narrative and Their Cognitivist Fortunes.” Parts I and II. Poetics Today 24, no. 2: 297–395 and no. 3: 517–638.
Terranova, Tiziana. 2012. “Attention, Economy and the Brain.” Culture Machine 13. https://culturemachine.net/wp-content/uploads/2019/01/465-973-1-PB.pdf.
Van Dijck, José. 2013. The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press.
Walsh, Richard, and Susan Stepney (eds.). 2018. Narrating Complexity. Cham, Switzerland: Springer.
3
Authorship vs. Assemblage in Digital Media
Hanna-Riikka Roine and Laura Piippo
Introduction
Since the 1960s, writers have systematically examined procedural techniques of writing, where constraints such as Oulipo member Georges Perec’s famous “story-making machine,” used in the construction of Life: A User’s Manual (1978), have been utilized to generate works of literature. Despite this long tradition, the literary crafting of a story has been first and foremost discussed as based on human behavior, perception, experience, and scale. Two influential definitions of narrative can serve as illustrative examples: the rhetorical model of narrative as an act of someone communicating something meaningful to someone else for some purpose, canonized in criticism by Wayne C. Booth (e.g. 1983) and James Phelan (e.g. 2007), and Monika Fludernik’s definition of narrativity on the basis of experientiality, as “the quasi-mimetic evocation of real-life experience” (1996, 12). This chapter sets out to discuss agencies of storytelling in relation to “story-making machines” which still execute algorithms in the sense of step-by-step instructions, but the functioning of which no longer corresponds to the level of human behavior and experience. Instead, their effects can be described as elemental or environmental, occurring outside our awareness but still conditioning and shaping our activity (see Hansen 2015; Hörl 2018). In our view, most of the existing analyses of storytelling have ignored such environmentality. While excelling at the examination of representational strategies of storytelling, narrative theory remains limited to the domain of content, that which is immediately visible to human users, because it is the part that is designed to be experienced by humans through the interfaces (Kangaskoski 2019, 46; Taffel 2019, 13; see also Roine 2019; Georgakopoulou et al. 2020, 21–2). At the same time, innumerable encounters between humans and machines are ignored. Though cybertext theory has revised narratological concepts in order to engage with the dynamic of visible and invisible operations of the machines (e.g. Aarseth 1997; Eskelinen 2012), it has concerned itself neither with the environmentality of digital media nor with the affective implications of human–technical assemblages that affect ends as well as means (see Hayles 2017, 37; also Latour 2002).
For instance, humans write the algorithms, but the algorithms, in turn, collect data on humans on levels not visible to users, which then affects human perception of content, that which is visible. As a result, our coexistence is more than a collaboration: a kind of co-identity, as we are adapting ourselves to become more knowable to the algorithmic machines (Finn 2017, 190). Our discussion in this chapter is focused on the concept of authorship, which has gone hand in hand with the understanding of authoring as the work of distinct agents instead of a situational set of relations in a process. We suggest that tying authorship up with distinct agents is not, in digital environments, accurate or beneficial, as it emphasizes human activity at the expense of the nonhuman agencies of digital technology. In other words, the ways in which certain content appears on the level that is designed to be experienced by us are understood as conscious acts of human agents. In this chapter, we argue for an alternative view of content being conditioned by the whole, multi-agent environment that no individual subject directly controls but where various actors are being produced and make use of opportunities for action. As an illustrative example of the nonhuman agencies, we discuss the role of platforms as “a ‘raised level surface’ designed to facilitate some activity that will subsequently take place” (Gillespie 2010, 350) in guiding and limiting our actions. We are not simply intertwined with digital technologies as separate entities but entangled with them in the sense of lacking an independent, self-contained existence (see Barad 2007, ix). It is worth noting that compared with concepts such as “storied matter” posited by material ecocriticism (see Iovino and Oppermann 2014), digital environments are, in their programmed, algorithmic nature, more forceful in their capacity to participate in the construction of stories than most nonhuman matter. In order to include the entanglement of human and nonhuman agents and actors in the analysis, we consider assemblage as an alternative to authorship for conceptualizing agencies of storytelling. In our view, assemblage enables the understanding of digital media as affective environments, where users and platforms form a mutual relation of being-affected (see Colebrook 2014), and in which we read and write texts and discourses that are “dynamic and ergodic” in nature (Aarseth 1997, 1–2). Our reasoning follows N. Katherine Hayles who, building on Bruno Latour’s (2002) and Peter-Paul Verbeek’s (2011) discussion of technologies transforming the spectrum of possibilities within which human intentions and choices are conceived, suggests that we need to move away from individual, discrete actors towards thinking the consequences of the actions human–technical assemblages as a whole perform. As she further argues in her chapter in this volume, these assemblages are fundamentally cognitive, emphasizing circulations between humans and nonhumans, and insisting upon the importance of recursive cycling within the assemblage.1
various circulations and cycling specifically in relation to agencies of storytelling, as well as in terms of affects and affections. As the length of the chapter is limited, we will use the platform of Twitter to illustrate our discussion and, instead of presenting a singular case study, attempt to demonstrate the more general principles of digital media and of the relationship between its platforms and users.
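To make the feedback loop described above concrete, consider a deliberately simplified sketch (in Python). This is our illustration, not a description of any actual platform: the names (profile, rank_feed, register_click) and the tag-counting scheme are hypothetical, and real recommendation systems are incomparably more complex. The point is only the circularity: what is visible conditions what is selected, and what is selected conditions what becomes visible.

from collections import Counter

profile = Counter()  # the invisible layer: accumulated behavioral data on one user

def rank_feed(items, profile):
    """Order content by its affinity with the accumulated profile."""
    return sorted(items, key=lambda item: -sum(profile[t] for t in item["tags"]))

def register_click(item, profile):
    """Each selection feeds back into the profile that ranks future feeds."""
    for tag in item["tags"]:
        profile[tag] += 1

items = [{"id": 1, "tags": ["cats"]}, {"id": 2, "tags": ["politics"]}]
feed = rank_feed(items, profile)   # what the user sees
register_click(feed[0], profile)   # what the user does
feed = rank_feed(items, profile)   # the environment has already adapted

Even in this toy form, no single agent "authors" the resulting feed: the ordering emerges from the entanglement of human selections and computational reorganization.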
Agencies of Storytelling in Digital Media

Alexandra Georgakopoulou, one of the most prominent narratologists in the field of storytelling in social media, has argued that social media platforms such as Facebook or Twitter are transforming our understanding of what kind of narrative is considered compelling. They encourage the increasingly common "life-tellings of the moment" or, as of late, "the drafting together of life" in the form of Facebook Timeline, for instance (Georgakopoulou 2017a, 35). Following this, Maria Mäkelä has made insightful contributions to the field with her research on "literary craft in social media" (2019, 159), intent on drawing our attention to "the narrative and expressive potential" (ibid., 161) of Facebook as a specific platform. Although taking such an approach to social media has its merits, literary reading and writing strategies are undoubtedly losing their status as our baseline approach to media – and Mäkelä is, herself, conscious of the ways in which her methodological choice "requires the setting aside of some affordances of the medium while – more or less 'unnaturally' – foregrounding others" (ibid., 160; see also Chapter 2 in this volume). It is thus important to examine critically the ways in which our entanglement with digital environments, or human–technical assemblages, is affecting the agencies of storytelling, the processes of reading and writing themselves. Literary critic Paul Dawson has taken important steps toward the understanding of larger textual environments with his discursive approach to the narrative communication model, originally proposed by Seymour Chatman in 1978. Dawson's model "situates the narrative text in a broader discursive formation to investigate how narrative authority emerges out of the relations between subject position within formation" (2012, 110). In their recent article, Dawson and Mäkelä further emphasize the detachment of narrative authority from narrative agency as one of the defining features of the "story logic" of social media. They argue that "in contrast to classical definitions of narrative rhetoric, narrative constellations emerging from viral sharing are by default nonreducible to identifiable agents and situations to which narrative-ethical accountability could be attached" (2020, 28). While such a detachment as well as Dawson's earlier model acknowledge the context in which texts can be produced and reframed by various actors, they are still limited to the domain of human agency and experience. In other words, the more-than-human environment – platforms, other agents, affects and affections, et cetera – of the "narrative constellations" remains unrecognized,
although it importantly contributes to the feel of a story emerging with real authority. In consequence, both the classical and more contemporary models of narrative communication tend to obscure the fact that the vast majority of the content we interpret as stories on various digital platforms is generated according to principles that exceed human perception or are otherwise hidden from us. As Mark B.N. Hansen argues in his discussion of "twenty-first century media", we must work towards exposing the environmental effects of today's media – effects that occur outside our awareness – as we can "no longer take for granted the connection of media and human experience that has informed media history up until today" (2015, 37, 38). Compared with the technical media of the nineteenth and twentieth centuries, such as photography and cinema, digital media does not primarily address human sense perception and experiential memory, but stores "bits of data that register molecular increments of behavior and that do not in themselves amount to a full picture of integrated human 'lived experience'" (ibid., 40). In a similar vein to what we suggested above, Hansen urges us to turn away from the equation of experience and content towards the examination of how relations are composed between technical circuits and human experience (ibid., 43). The attempt to impose an inherently human logic – such as that of narrative – onto such phenomena runs the risk of merely upholding an illusion of how the platforms function and prevents us from understanding the digital environment and its affective logic. In other words, a model based on human agency and experience cannot alone be used to understand agencies of storytelling in contemporary digital media. As regards the ways in which the agential capacities of nonhuman things and processes are reduced to our narrative representations, Juha Raipola has made a useful distinction between two different notions of nonhuman agency. The first variety is the agency of the actual, non-specified human and nonhuman "forces" which entail the entangled activities of different agencies. This agency does not, however, consist of any pre-given meaning in the form of narratives, but can only later be interpreted as such: "[T]his kind of agency might be termed a 'semiotic agency' or 'meaning-producing agency', but identifying it as a 'narrative agency' seems like a definite misnomer" (2019, 277). The second type of nonhuman agency Raipola discusses is narrated agency, always ascribed after the fact or in anticipation of a fact, to make sense of the temporal progress of the action. With this interpretative act, agency is projected onto individual actors within the system: although the resulting narratives are based on the actual agency of matter, their binding to narrative logic ultimately fails to represent the complexity of systemic behavior. The same distinction between meaning-producing agency and narrated agency can fruitfully be applied to social media platforms. The ways in which content (such as updates, tweets, comments) appears on the level that can be experienced by humans results from the entangled activity of human and computational agencies and cannot be easily distinguished from the
machine–human interactions. Naming such an entanglement meaning-producing agency is definitely a more accurate choice than narrative agency. Narrated agency, then, manifests itself on social media platforms as the capacity to recognize, name and frame certain constellations, based on the actual agencies within the human–technical assemblage, as specifically "narrative" (cf. Dawson & Mäkelä 2020, 28; see also Mäkelä's Chapter 2 in this volume on defining narrativity through moral positioning). In this case, as pointed out by Raipola, the binding of these constellations to narrative logic does not accurately represent the complexity of the assemblage. On the level of content in social media platforms, with narrated agency comes not only the projection of agency onto individual actors, but also the projection of authorship or authority. The complex, entangled activity of different agencies is made sense of through narrative logic, that is, narrativized.2 This interpretative act consequently projects an anthropomorphic author figure with an intent to communicate something meaningful to others for some purpose (cf. Phelan 2007) and endows this figure with subsequent experientiality (cf. Fludernik 1996, 12). Similarly to Dawson and Mäkelä's (2020, 12) speculation that storytelling is becoming an art of reframing in digital media, we would suggest looking at the act of narrativization on this level as an intentional act of adding another paratext to the text.3 Here, narrated agency manifests itself through an effort to affect the ways in which a text is read through its visual presentation. Following literary theorist Gérard Genette, paratexts are generally known as the thresholds of the text, or signifiers that give sense and meaning to the textual whole as "those liminal devices and conventions, both within and outside the book (or any other text), that form part of the complex mediation between book, author, publisher, and reader" (1997, 5–6). Seeing narrativization in terms of adding paratexts does not necessarily repudiate experientiality as the basis of narrative, as it can simply be understood as an act of reframing something to be read as narrative. However, in an environment where our objects of study adapt to us as we interpret them (Finn 2017, 185) and where, ultimately, this process of adaptation on the platform takes the dynamic role in the work of authoring, the equation of content and experience following human logic is misleading. In other words, such an environment is based on an affective relation or feedback loop of a kind, where the platforms are not only affected by our actions but, in turn, shape and guide our actions. Paratextuality can thus be seen as a textual component or a facet of the assemblage of digital environments (cf. Piippo 2020, 46), and as such, it affords a literary criticism of that assemblage. We will return to paratexts and the feedback loop between the users and the platform in the last section of this chapter. The concept of cognitive assemblage offers a way to understand such a loop as well as the entangled agencies. It has been put forward by Hayles (e.g. 2017) as a particular kind of network characterized by the circulation of information, interpretation, and meanings by both human and nonhuman
cognizers.4 It is worth emphasizing, again, that Hayles counts digital media among cognizers together with humans: they, too, direct, use, and interpret the material forces on which the assemblage ultimately depends. These cognizers drop in and out of the network in shifting configurations that enable interpretations and meanings to emerge, circulate, interact and disseminate throughout the network. What is especially interesting from the perspective of this chapter is that such shifting configurations are reflected in contemporary aesthetics, especially the aesthetics related to the internet, which rely heavily on recycling, reusing, and reframing found material that is not always produced, recycled, reused or reframed by a human agent. Nonhuman cognizers do not, however, possess narrated agency by themselves despite being able to identify and make use of various opportunities for action. In many cases of "augmented imagination", the transformative work that humans and machines can only do together (Finn 2017, 186), agencies entangle into a close collaboration. Similarly, with her term readingwriting, Lori Emerson (2014, xiv) calls our attention to the fact that due to our constant connection to networks, media poetics is fast becoming a practice of writing through the network. This network possesses what Raipola calls meaning-producing agency: it tracks, indexes, and algorithmizes everything we enter into it, thus constantly reading our writing and writing our reading. Such a feedback loop between reading and writing signals a definitive shift in literary poetics as well and points out the ways in which our core cultural practices of reading, writing, conversation, and thinking are becoming digital processes, or at least entangled with them. The predictive – and, by and large, cognitive – capacities of digital media also show their important role in manifestations of narrated agency. In the bleak vision presented by Matti Kangaskoski in Chapter 4, our quick and affective signaling of what we desire to the cultural interfaces is beginning to register as an almost automatic reaction to stimuli and thus, as Bernard Stiegler (2019) has suggested, the prediction begins to precede the individual will and simultaneously empties it. Our analysis of agencies of storytelling is aligned with Kangaskoski's aim to "create a break" in the inevitability of such automatization through the investigation of aesthetic features, values, and opinions that broader assemblages produce, and, as we argue next, in order to uncover the fundamentally affective nature of digital environments.
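The sub-experiential registration discussed in this section – Hansen's "molecular increments of behavior", or the network that constantly reads our writing and writes our reading – can likewise be sketched in miniature. The following lines are our illustration only, with hypothetical names (register, event_log) and invented field names; they show the kind of data that accumulates beneath the level of content:

import time

event_log = []  # platform-side registration, never shown to the user

def register(event_type, **details):
    """Store one behavioral micro-event; no single entry resembles a story."""
    event_log.append({"t": time.time(), "type": event_type, **details})

register("scroll", depth_px=1480)
register("dwell", item_id=42, ms=370)       # attention measured, not experienced
register("tap", item_id=42, element="like")

# The log registers increments of behavior; any "experience" or "narrative"
# is an interpretation projected onto it after the fact.

None of these entries addresses human sense perception or experiential memory; taken together, they nonetheless condition what the human user will subsequently encounter.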
Affects and the Human Scope

On the subject of understanding and theorizing the dynamic nature of digital texts and environments, significant efforts have already been made, especially in cybertext theory, which grounds itself in the functions of the text instead of medium specificity. The key figure of this approach, Espen Aarseth, breaks the text – be it digital or nondigital – down into two units: textons and scriptons. The latter are revealed or generated from the former and presented to the user of the text by a mechanism Aarseth calls a
traversal function (1997, 62). He then proceeds to present seven variables of the mode of traversal: dynamics, determinability, transiency, perspective, access, linking and user function (ibid., 62–4). Markku Eskelinen has added two more variables to Aarseth's typology, user position and user objective, along with several new secondary variables (2012, 45–6). This typology hence expands and defines the Genettean notion of the duration and frequency of the text,5 as well as the questions of the reader's engagement and the text's own self-modifying practices. Cybertext theory thus provides us with a solid understanding of the structure and functions of digital texts and has traced the definitive shift discussed above by illustrating the dynamic nature of digital text through the analysis of ergodic literature (e.g. Eskelinen 2012, 288). However, it has not concerned itself with the affects and affections produced in and by digital platforms and environments, even though communication scholar Zizi Papacharissi has prominently shown the logic of affect to be the key characteristic of digital media. Describing digital media as following, amplifying, and remediating the tradition of storytelling in older media, Papacharissi (2015, 4) argues for a view of social media platforms as a storytelling infrastructure that enables the feeling of being present, and allows for affective expression. The emphasis on the present is indeed visible in the way in which, for instance, Twitter shows the timestamps of tweets in the form of their age: how long ago they were posted instead of when they were posted. Papacharissi further suggests that the structures of connection and expression are characterized, more than anything else, by feelings and emotions. Both newer and older media invite people to participate in events that are physically remote but virtually and affectively close, thus creating a sense of sharedness and activating ties between people. These ties, in turn, are important for generating affective publics, "powered by affective statements of opinion, fact, or a blend of both" (Papacharissi 2015, 129). Following Papacharissi, every single tweet, video or meme can be seen as an "affective attunement" to a narrative, an expressive gesture of affect and engagement. We should be mindful of Papacharissi's understanding of affect and affectivity, however. Understood in the Deleuzian sense, affection refers to the state of the affected body and includes the presence of the other body that produces the affection (Deleuze 1988, 48–51). Therefore, it is a relation. Affects, on the other hand, create these different kinds of affections – the codependent state of both the affecting and the affected body – depending on the bodies involved.6 Following Deleuze, Claire Colebrook (2014) argues that affects should not be confused with affections, or feelings of the lived body, as the beginning "is not the body and its affections, but the affect" (ibid.). While researchers of storytelling practices in social media such as Papacharissi and Ruth Page – whose concept of the shared story describes a "distinctive narrative genre" (2018, 2) with a potential to proliferate into huge numbers of interactions involving thousands of tellers (ibid., 4) – focus on structures of circulating content as well as on sharing affections, Colebrook
argues that we should turn away from them, "not to the lived body, but to the quantities and relations of forces from which identifiable bodies and sentiments emerge" (ibid.). In the context of this chapter, we argue that instead of content, discrete actors, and their affectivity, our focus should be on examining agencies within the human–technical assemblage as a whole. From our perspective, approaches such as Papacharissi's and Page's see the contemporary digital environment first and foremost as a human environment, where both affects and affections are reduced to named and experienced feelings or to representations of them. Although fitting in some instances, the concept of authorship we discussed above carries a similar bias towards human experience and scope, often evoking the prototypical view of stories of personal experience as the most compelling and affective, and thus as almost inevitably going viral. Mäkelä's "literary approach" to Facebook describes the readers' "quest for experientiality" as the reader being "on the lookout for familiar epistemic, affective, and bodily parameters that would yield a presentation akin to the reader's experiential schemata" (2019, 163). However, some highly viral content – trolling, doxing, hate campaigns, viral marketing, and other mass-produced affective material with nothing "personal" or "experienced" behind it – poses a problem for the supposed experientiality of the content that is being created and shared. Following Colebrook (2014), we argue that what is "really felt" while being entangled with digital environments is specifically our experience of being a part of an assemblage that simultaneously produces us as an experiencing agent. The feelings represented in compelling stories, for instance, can be seen as a form of "pre-packaged, already-consumed-consumable affections" (ibid.), instances of our bodies only re-living themselves within predetermined schemata instead of acknowledging and processing the feel of the assemblic relation itself. This, of course, plays into the hands of phenomena such as "fake news", which exploit the recycled affections in the content.7 Moreover, it has been hypothesized that bot traffic already outnumbers human traffic online (see Read 2018) – and that people are not necessarily able to distinguish human actors from machinic ones. Empirical evidence and analysis provided by Minna Ruckenstein and Linda Lisa Maria Turunen (2019) further show that the current platform logics of digital media force human moderators to operate like machines, which creates discontent and diminishes the human capacities of meta-analysis and care over the algorithmic content. In our view, forcefully maintaining the distinction between anthropomorphic authors and other, such as computational or machinic, actors effectively prevents us from analyzing larger phenomena, which supposedly emerge from the sharing of smaller stories that are compelling in their communication of personal experience. Such communication can also be understood as imagined rather than actual. As Benedict Anderson's classic notion of "imagined communities" (2006) – closely related to Papacharissi's "affective publics" – suggests, our sense of communality is in part
enabled by current technological affordances, but that sense of communality or connectedness does not precede those respective affordances as such, since the realization and actualization of affordances hinge upon interactive relations established by (or at least in relation to) agents (see Scarlett & Zeilinger 2019, 27). Originally, print capitalism, in the form of newspapers and novels written in the vernacular, made it possible for us to envision a community imagining together, simultaneously. The same can well be argued about current data capitalism, which is not simply focused on making a profit by selling products (such as newspapers or online services) but precisely on our experiences, as they can be transformed into behavioral data which can then be profited from, as Shoshana Zuboff (2019, 14) argues in describing what she calls "surveillance capitalism". On current digital platforms such as Twitter, both human and nonhuman actors participate in realizing and actualizing structures such as connectedness and sharing, and this has important consequences for understanding the agencies of storytelling.
Twitter as an "Imagined Environment"

As stated above, our aim is to seek out ways to analyze storytelling in digital environments while taking into consideration the actions that the human–technical assemblage as a whole performs (cf. Hayles 2017). Within the framework of literary theory, we approach this through a focus on content in a broad sense, that is, on the visible aspects of the digital, including the platforms as well as their interfaces in the analysis. We suggest seeing digital platforms not only as sophisticated and designed, but also as imagined from the individual perspective. In the case of social media services such as Twitter, the interface remains the same from user to user – despite undergoing changes in design from time to time – but the feed is assembled in many respects by algorithms and, therefore, appears different to every user. What appears to an individual user as a single platform is in fact composed of a multitude of different variations of what is being created and shared on the platform (see Bozdag 2013). "Twitter" is, consequently, an "imagined environment" where, to borrow from affordance theory, agents and actors with the ability to identify and make use of opportunities for action in this precise environment (see Scarlett and Zeilinger 2019, 27) do so not only by writing, sharing, and reading, but also by assembling, reframing, and recontextualizing. These agents and actors can, of course, be both human and nonhuman. Next, we illustrate this through an analysis of the design of Twitter, focusing on its paratextual and affective aspects and thus showing how the visible can offer insights into the hidden as well. As we pointed out above, paratexts are generally understood as the thresholds of the text, giving sense and meaning to the textual whole. As a concept, paratextuality was coined in the context and for the use of print media, and transferring it to the discussion of digital media calls for
slight adjustments (see e.g. Green 2014; Tavares 2017). For instance, one must bear in mind that an individual tweet as well as the entire Twitter feed (or any other user interface of a social media platform) both construct their own paratextual elements, and that the relationship between a tweet and the feed is also paratextual (Piippo 2019, 58). Furthermore, one of the key functions of paratexts, naming the authorship, is connected to the user handles marked with @. They are often associated with verifiable human identities, the "author" of the tweet, but they can also be anonymous aliases or connected to bogus accounts or bots. While the identity of a tweeter carries a lot of interpretational weight in terms of authority (in relation to their social or political status, for example), the more time we spend skimming through the content of a certain platform, "the more we start to pay attention to the repeated words, wordings, themes and affects, while the context provided by the author's name fades into the background", as argued by Laura Piippo (2019, 59). It is also worth noting that, in the context of Twitter, some of the elements Genette considered parts of the epitext actually become peritext, or even parts of the actual text.8 A good example of this is a retweet complemented with an added commentary by the retweeter. The user interface of Twitter can, importantly, also be considered a liminal text, something residing on the boundary of the actual text (see Galloway 2009). This understanding brings the interface close to the concept of paratext, which not only shares the same attributes of a threshold (cf. Tavares 2017, 18), but also "in reality controls one's whole reading of the text", as suggested by Genette after Philippe Lejeune (1997, 1–2). The user interface is never singular but plural, for – in addition to the dynamic between textons and scriptons – it tends to incarnate as various versions over the years and across different devices. However, a feature that further connects the user interface with the concept of paratext is that users seem not to pay much conscious attention to its design. As Bonnie Mak points out,

readers rarely regard the title of a book with suspicion or interrogate its chapter divisions. The paratext says, 'The text is thus,' as if it were a statement of fact. Authorized by its own presence, the paratext is trusted because it exists. (2011, 34)

The similarly unquestioned nature of the interface is highlighted by the fact that changes to Twitter's user interface, whether significant or mostly insignificant, have been forgotten quite fast, as users adapt to the new versions.9 In other words, users have become so accustomed to the interface that it, in turn, has become invisible (see Bolter & Grusin 1999) in the way the most profound technologies do, as famously suggested by Mark Weiser: "They weave themselves into the fabric of everyday life until they
are indistinguishable from it" (1991). On a mundane level, the same happens to the paratextuality of the icons indicating likes, retweets, and replies: they blend into the actual message of a post or a tweet. At the same time, the very paratextuality of the interfaces affects the material users write and publish on them. In general, the distinction between textuality and paratextuality seems to leak when the texts shared and referred to are all embedded within the same interface as their sources. Consider, for instance, Genette's (1997, 5–6) assertion that commentary is a paratextual element. If so, how should we categorize a retweet with an elaborate foreword? Here, the authorship over an action in the context of the platform and the design of the user interface become important: in effect, the context of reading defines the paratext. This logic affects, for instance, the elements of an online discussion, where the intentions of a singular commentator are not necessarily what matters but rather the ways in which their comments may come to contextualize the whole discussion – and may be recontextualized again by either human or nonhuman actors. Comments, whether they are made by authentic users or generated by bots, become "the paratexts that surround the main texts and, thus, have an impact on individual tweets' significance and tone, and – even more importantly – take part in forming the affectivity of the entire platform" (Piippo 2019, 61). In a printed book such as a novel, the tone is created by the text, but on Twitter it is created by the paratext. The hashtag, the most significant feature – and a vital paratext – of Twitter, can be seen as a concrete example of the ways in which human and nonhuman agencies entangle in digital media. According to Paola-Maria Caleffi (2015), hashtags function as metadata, a form of special language, and an arena for linguistic creativity, for example. This is to say that they serve as a means of connecting, collecting and sharing, and can be used to mark a subject, an event, or an occasion (Murthy 2013, 3). As such, they carry the ability to form assemblages, enhance phenomena and divert the authorship of the tweeted and retweeted content. Since the algorithm is indifferent to the various uses and functions of the hashtag, it simply regards the repetitive use of certain words as a sign to amplify the visibility and circulation of the content (see the sketch at the end of this section). This feature plays into the hands of affective economies by bundling up the ironic, the sincere, the malevolent, and other uses equally – bundles which are then often hijacked by the most malevolent of these uses. The hashtag also accounts for many of the ways in which content is created and circulated machinically and algorithmically. What, then, happens to the concept of authorship when the whole of the paratext is in a state of flux? Here we can once again see how the concept of authorship begins to lose its analytical power when we move to digital environments. On Twitter, the prevailing act of reading defines the paratext: what we encounter first becomes the text, whether it is a tweet, an indication of an action (a plain retweet or "like") or an elaborative foreword for a retweet. Dawson and Mäkelä bring up a similar point with their discussion of the #MeToo movement and the perpetual
reframing of existing content, where "individual tweets about a user's experience which when tagged create multiple tellers for the shared experience of sexual harassment and assault encoded as a prototypical narrative in the phrase #MeToo itself" (2020, 24). They thus describe the hashtag as "a cultural script providing a frame for individual tweets" (ibid.), for its part contributing to a story logic where "narrative power resides with the reteller qua reframer of the narrative, and narrative authority can emerge 'out of thin air,' as a collective, noncoordinated framing" (ibid., 32). In our view, the logic described by Dawson and Mäkelä is produced by the ways in which human–technical assemblages perform as a whole. This logic, in part, contributes to the affective relation between the reader and what they are reading, and affects other elements, which in another context could be understood as "human interaction" or "discussion." Instead, comments and replies become paratexts, controlling one's whole reading of the text, giving it its significance, tone, and meaning. The platform begins to participate in the process of authoring texts, while the "author" begins to blend into the platform, exemplifying the circulations between humans and nonhumans as well as the recursive cycling within the assemblage (see Hayles, Chapter 1 in this volume). This calls for researchers to make an adjustment that Ruckenstein and Turunen (2019) have dubbed a move from the logic of choice towards the logic of care. In other words, and as also argued by Hayles, we need to move from theorizing the actions, choices and texts attributed to a single human author towards the analysis of the dynamics of an assemblage as a whole. This can be seen as "the risky ethics of uncertainty" that "commands us to give up the comfort of familiarity" (Shildrick 2002, 132). As an approach, it must extend itself to the unknown and nonhuman of the digital, algorithmic environments, while also staying benevolent and nurturing toward the human experience (Ruckenstein & Turunen 2019). For narratologists, this means a definite shift from asking who or what the author, authoritative figure, or narrating agent is towards the question of how narratives come to be, or come to be perceived, through the entanglements of human and nonhuman agencies in digital environments. In these environments, Perec's "story-making machine" is taken to the next level by complicating its mechanics beyond the human scale of comprehension. It seems that after the death of the author – and their subsequent resurrection – we must now yet again re-articulate these types of literary co-agencies.
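As promised above, the hashtag logic can be condensed into a minimal sketch. The code is our illustration, not Twitter's actual ranking, which is proprietary and far more elaborate; the names visibility_boost and tag_counts are hypothetical. What it shows is the algorithm's indifference to tone and intent: sheer repetition is read as a signal to amplify.

from collections import Counter

def visibility_boost(posts):
    """Boost each post by the platform-wide frequency of its hashtags."""
    tag_counts = Counter(tag for post in posts for tag in post["tags"])
    return sorted(posts, key=lambda p: -sum(tag_counts[t] for t in p["tags"]))

posts = [
    {"text": "sincere testimony",  "tags": ["#metoo"]},
    {"text": "ironic aside",       "tags": ["#metoo"]},
    {"text": "bot-generated spam", "tags": ["#metoo", "#ad"]},
]
for post in visibility_boost(posts):
    print(post["text"])  # the sincere, the ironic, and the machinic are bundled alike

Here the mass-produced post rises to the top simply because it carries more frequently repeated tags – a toy version of the hijacking described above.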
Conclusions

In his call for "experimental humanities", Ed Finn (2017, 195–6) argues that we must simultaneously accept that our relationship with knowledge has become a set of practices for interacting with rapidly increasing complexity, and acknowledge the vital role of humanistic inquiry in the fields of ambiguity, dissonance, interpretation, and affect. We agree with Finn, and have, in this chapter, focused on
assemblage as a possibility for conceptualizing the agencies of storytelling on digital social media platforms, such as Twitter, which are affective environments generated by both human and machinic actions. Understanding this not only helps us to better grasp the platforms' dynamic and inherently "nonhuman" nature in relation to processes of reading and writing, but also has serious moral and ethical implications. It has the potential to turn our attention away from, for example, individuals writing – or moderating – hateful messages towards the affective logic of the system itself, which generates, circulates and intensifies such hatred. Rethinking human–machine relations in terms of assemblage in digital environments may allow us to free ourselves from "a cycle of responding to one post at a time" and rather offer "a meta-perspective to the discussion by overseeing and nurturing it" (Ruckenstein & Turunen 2019). We are affected by the whole of the network through which we are readingwriting. Furthermore, setting "the human" aside as a privileged agent requires us to outline a model for textual agency in digital media, one that acknowledges the human–machine nature of such an environment. This also allows us to renegotiate the terms and conditions of the said agency, potentially resulting in more ethically and affectively sustainable digital environments and relations.
Notes
1 As Hayles points out herself, assemblage is how agencement is usually translated in Gilles Deleuze and Félix Guattari's work A Thousand Plateaus (1987): for them, it connotes a kind of agency without individual actors, a flow that temporarily creates a provisional and highly flexible arrangement capable of agency. Hayles differs from them in describing the connections through which information flows as technically precise and often knowable. Furthermore, in relation to agency, it is worth noting that Deleuze and Guattari's understanding of the human unconscious as an assemblage (e.g. 1972) is a useful comparison: this assemblage is rather a factory than a theatre, in that it "really (not just in a phantasmal sense) produces social subjectifications" (see Sauvagnargues 2013, 88).
2 Both narrativity and narrativization as concepts have been most influentially defined within the cognitive school of narratology. In her groundbreaking book Towards a "Natural" Narratology, Monika Fludernik redefined narrativity as "mediated experientiality" (1996, 12–13, 28–30) as opposed to structuralist definitions rooted in temporality and causality. Furthermore, in narrative meaning-making, markers and effects of artificiality or medium-specificity are set aside by "narrativizing" (ibid., 33) any kind of representation to fit our basic cognitive schemata for story comprehension (see also Mäkelä 2019, 163).
3 Compared with Georgakopoulou's (2017b) account of narrativity as an emergent property, based on the logic of social networking sites encouraging the sharing of stories out of the moment and thus using conventionalized story-framing devices such as references to time, place, and characters to suggest there is a story to be engaged with, we want to emphasize the entanglement of different agencies.
4 Hayles's usage of "cognition" is a concept re-envisioned on the basis of cognitive biology, which extends cognition as "embodied knowledge" beyond humans, primates and mammals to all living organisms, even unicellular organisms and
plants. For her part, Hayles extends cognition to technical devices, offering the following definition: cognition is a process that interprets information within contexts that connect it with meaning. For more detailed discussion, see Hayles 2017 and Chapter 1 in this volume.
5 In his discussion of time in relation to "discourse time" (the time it takes to peruse the discourse, cf. Chatman 1978) and "story time", Genette focuses on various deviations between them through the categories of order, duration and frequency. In relation to order, Genette calls the deviations "anachronies" and distinguishes between prolepsis (flash-forward) and analepsis (flashback). The deformation of duration, on the other hand, he calls "anisochrony" (1983, 86), and discerns four types of story-discourse relations: pause, scene, summary, and ellipsis (ibid., 95). Frequency, then, outlines the relationship between the number of occurrences in the story and the number of occurrences narrated, and here, Genette distinguishes between the singulative (telling once what happened once), the repetitive (telling many times what happened once), and the iterative (telling once what happened several times) (ibid., 114–16).
6 Philosophy of cognition has pointed out that people have always tended to manipulate their environment for affective effects, such as entertainment or pleasure. This serves to underline the fact that our affective states are frequently enabled, supported, and regulated by environmental elements such as physical objects as well as other people (see Sterelny 2010; Colombetti & Krueger 2014; Maiese 2016; Saarinen 2019).
7 Such phenomena have been prominently discussed by Sara Ahmed (2004) under the concept of "affective economies", although her approach to affectivity differs from Colebrook's and ours.
8 Book historian Johanna M. Green (2014; see also Piippo 2019) offers a detailed analysis of the paratextuality of both the Twitter feed and single tweets as digital pages. For instance, she categorizes tweets, with all their medial features, as constituting the text proper; the rest of the user interface surrounding the feed as comprising the epitext; and everything not visible on the screen but still related to Twitter as peritext.
9 The mechanics of the feed have undergone some changes over the years, as has the design of the user interface for different devices. Until 2015, Twitter's feed simply presented the tweets in reverse-chronological order. After that, the platform gradually introduced an algorithm-based feed (Newton 2016), which exploits the users' actions, inactions, networks and engagements. In 2018, the company answered the users' demands and brought back the option to use the old non-algorithmic, more up-to-date and in-the-now feed (Newton 2018). The biggest change so far has been the transition from the limit of 140 characters to enabling tweets that are up to 280 characters long.
References
Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: Johns Hopkins University Press.
Ahmed, Sara. 2004. "Affective Economies." Social Text 22, no. 2: 117–139.
Anderson, Benedict. 2006 [1983]. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso.
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC, and London: Duke University Press.
Bolter, Jay David, and Richard Grusin. 1999. Remediation: Understanding New Media. Cambridge, MA: MIT Press.
Booth, Wayne. 1983. The Rhetoric of Fiction. Chicago: University of Chicago Press.
Bozdag, Engin. 2013. "Bias in Algorithmic Filtering and Personalization." Ethics and Information Technology 15, no. 3: 209–227. doi:10.1007/s10676-013-9321-6.
Caleffi, Paola-Maria. 2015. "The 'hashtag': A New Word or a New Rule? (Report)." SKASE Journal of Theoretical Linguistics 12, no. 2: 46–69.
Chatman, Seymour. 1978. Story and Discourse: Narrative Structure in Fiction and Film. Ithaca, NY: Cornell University Press.
Colebrook, Claire. 2014. The Death of the PostHuman: Essays on Extinction, Volume One. London: Open Humanities Press. doi:10.3998/ohp.12329362.0001.001.
Colombetti, Giovanna, and Joel Krueger. 2014. "Scaffoldings of the Affective Mind." Philosophical Psychology 28, no. 8: 1–20. doi:10.1080/09515089.2014.976334.
Dawson, Paul. 2012. "Real Authors and Real Readers: Omniscient Narration and a Discursive Approach to the Narrative Communication Model." Journal of Narrative Theory 42, no. 1: 91–116. doi:10.1353/jnt.2012.0010.
Dawson, Paul, and Maria Mäkelä. 2020. "The Story Logic of Social Media: Co-Construction and Emergent Narrative Authority." Style 54, no. 1: 21–35.
Deleuze, Gilles. 1988. Spinoza: Practical Philosophy, trans. Robert Hurley. San Francisco: City Lights.
Deleuze, Gilles, and Félix Guattari. 1972. Anti-Oedipus, trans. Robert Hurley, Mark Seem, and Helen R. Lane. London and New York: Continuum.
Deleuze, Gilles, and Félix Guattari. 1987. A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi. Minneapolis: University of Minnesota Press.
Emerson, Lori. 2014. Reading Writing Interfaces: From the Digital to the Bookbound. Minneapolis: University of Minnesota Press.
Eskelinen, Markku. 2012. Cybertext Poetics: The Critical Landscape of New Media Literary Theory. London and New York: Continuum.
Finn, Ed. 2017. What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press.
Fludernik, Monika. 1996. Towards a "Natural" Narratology. London: Routledge.
Galloway, Alexander R. 2009. "The Unworkable Interface." New Literary History 39, no. 4: 931–955.
Genette, Gérard. 1983. Narrative Discourse: An Essay in Method. Ithaca, NY: Cornell University Press.
Genette, Gérard. 1997. Paratexts: Thresholds of Interpretation, trans. Jane E. Lewin. New York and Melbourne: Cambridge University Press.
Georgakopoulou, Alexandra. 2017a. "Narrative/Life of the Moment: From Telling a Story to Taking a Narrative Stance." In Life and Narrative: The Risks and Responsibilities of Storying Experience, ed. Brian Schiff, A. Elizabeth McKim, and Sylvie Patron, 29–54. Oxford: Oxford University Press.
Georgakopoulou, Alexandra. 2017b. "Sharing the Moment as Small Stories: The Interplay between Practices & Affordances in the Social Media-Curation of Lives." Narrative Inquiry 27, no. 2: 311–333.
Georgakopoulou, Alex, Stefan Iversen, and Carsten Stage. 2020. Quantified Storytelling: A Narrative Analysis of Metrics on Social Media. London: Palgrave Macmillan.
Gillespie, Tarleton. 2010. "The Politics of 'Platforms'." New Media & Society 12, no. 3: 347–364.
Green, Johanna M.E. 2014. "'On þe nis bute chatering': Cyberpragmatics and the Paratextual Anatomy of Twitter." Studies in Variation, Contacts and Change in English 15. http://www.helsinki.fi/varieng/series/volumes/15/green.
Hansen, Mark B.N. 2015. Feed-Forward: On the Future of Twenty-First Century Media. Chicago: University of Chicago Press.
Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Herman, David. 2009. Basic Elements of Narrative. Oxford: Wiley-Blackwell.
Hörl, Erich. 2018. "Introduction to General Ecology: The Ecologization of Thinking," trans. Nils F. Schott. In General Ecology: The New Ecological Paradigm, ed. Erich Hörl with James Burton, 1–74. London: Bloomsbury.
Iovino, Serenella, and Serpil Oppermann. 2014. "Introduction: Stories Come to Matter." In Material Ecocriticism, ed. Serenella Iovino and Serpil Oppermann, 1–17. Bloomington and Indianapolis: Indiana University Press.
Kangaskoski, Matti. 2019. "Affordances of Reading Poetry on Digital and Print Platforms – Logic of Selection vs. Close Reading in Stephanie Strickland's 'V-Project'." Image & Narrative 20, no. 2: 35–50.
Latour, Bruno. 2002. "Morality and Technology: The End of the Means," trans. Couze Venn. Theory, Culture & Society 19, no. 5–6: 247–260.
Maiese, Michelle. 2016. "Affective Scaffolds, Expressive Arts, and Cognition." Frontiers in Psychology 7. doi:10.3389/fpsyg.2016.00359.
Mak, Bonnie. 2011. How the Page Matters. Toronto: University of Toronto Press.
Mäkelä, Maria. 2019. "Literary Facebook Narratology: Experientiality, Simultaneity, Tellability." Partial Answers: Journal of Literature and the History of Ideas 17, no. 1: 159–179. doi:10.1353/pan.2019.0009.
Murthy, Dhiraj. 2013. Twitter: Social Communication in the Twitter Age. Cambridge: Polity Press.
Newton, Casey. 2016. "Twitter Begins Rolling out Its Algorithmic Timeline around the World." The Verge, February 10. https://www.theverge.com/2016/2/10/10955602/twitter-algorithmic-timeline-best-tweets.
Newton, Casey. 2018. "Twitter is Relaunching the Reverse-Chronological Feed as an Option for All Users Starting Today." The Verge, December 18. https://www.theverge.com/2018/12/18/18145089/twitter-latest-tweets-toggle-ranked-feed-timeline-algorithm.
Page, Ruth. 2018. Narratives Online: Shared Stories in Social Media. Cambridge: Cambridge University Press.
Papacharissi, Zizi. 2015. Affective Publics: Sentiment, Technology, and Politics. Oxford: Oxford University Press.
Perec, Georges. 1978. Life: A User's Manual, trans. David Bellos. Paris: Hachette.
Phelan, James. 2007. Experiencing Fiction: Judgments, Progression, and the Rhetorical Theory of Narrative. Columbus: Ohio State University Press.
Piippo, Laura. 2019. "Rinse, Repeat: Paratextual Poetics of Literary Twitter Collage Retweeted." Image & Narrative 20, no. 2: 51–68.
Piippo, Laura. 2020. Operatiivinen vainoharha normaalitieteen aikakaudella: Jaakko Yli-Juonikkaan Neuromaanin kokeellinen poetiikka. Jyväskylä: University of Jyväskylä.
Raipola, Juha. 2019. "Unnarratable Matter: Emergence, Narrative, and Material Ecocriticism." In Reconfiguring Human, Nonhuman, and Posthuman in Literature and Culture, ed. Sanna Karkulehto, Aino-Kaisa Koistinen, and Essi Varis, 263–279. London and New York: Routledge.
Read, Max. 2018. "How Much of the Internet is Fake? Turns Out, a Lot of It, Actually." New York Intelligencer, December 26. http://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html.
Roine, Hanna-Riikka. 2019. "Computational Media and the Core Concepts of Narrative Theory." Narrative 27, no. 3: 313–331. doi:10.1353/nar.2019.0018.
Ruckenstein, Minna, and Linda Lisa Maria Turunen. 2019. "Re-Humanizing the Platform: Content Moderators and the Logic of Care." New Media & Society: 1026–1042. doi:10.1177/1461444819875990.
Saarinen, Jussi. 2019. "Paintings as Solid Affective Scaffolds." Journal of Aesthetics and Art Criticism 77, no. 1: 67–77. doi:10.1111/jaac.12610.
Sauvagnargues, Anne. 2013. Deleuze and Art, trans. Samantha Bankston. London: A&C Black.
Scarlett, Ashley, and Martin Zeilinger. 2019. "Rethinking Affordance." Media Theory 3, no. 1: 1–48.
Shildrick, Margrit. 2002. Embodying the Monster: Encounters with the Vulnerable Self. London and Thousand Oaks, CA: SAGE Publications.
Sterelny, Kim. 2010. "Minds: Extended or Scaffolded." Phenomenology and the Cognitive Sciences 9: 465–481. doi:10.1007/s11097-010-9174-y.
Stiegler, Bernard. 2019. The Age of Disruption: Technology and Madness in Computational Capitalism, trans. Daniel Ross. Cambridge: Polity Press.
Taffel, Sy. 2019. Digital Media Ecologies: Entanglements of Content, Code and Hardware. New York and London: Bloomsbury Academic.
Tavares, Sérgio. 2017. Paramedia: Thresholds of the Social Text. Jyväskylä Studies in Humanities. Jyväskylä: University of Jyväskylä.
Verbeek, Peter-Paul. 2011. Moralizing Technology: Understanding and Designing the Morality of Things. Cambridge: Cambridge University Press.
Weiser, Mark. 1991. "The Computer for the Twenty-First Century." Scientific American, September, 66–75.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Kindle edition. London: Profile Books.
4
The Logic of Selection and Poetics of Cultural Interfaces
A Literature of Full Automation?
Matti Kangaskoski
This chapter examines the contemporary logic of digital cultural interfaces and how this logic influences their users' sensibility – values, opinions, judgments and desires – in the arena of literature. I call this contemporary logic the logic of selection. In short, this interface logic is driven by the act of user selection: the users imagine themselves as selecting agents finding and encountering desirable and relevant data from a vast array of possibilities, while the interface attempts to anticipate the user's desire in order to capture her attention and sensibility – and to become selected. When a cultural product – a book, a poem, a song, a movie, an opinion – becomes selected on the interface, this selection is interpreted as success. This gives meaningful feedback to the algorithmically driven technical system, which reorganizes itself to predict future success. Together, the human users and the technical system can be described as a cognitive assemblage of conscious and nonconscious agents (sensu Hayles 2017). Governed by the logic of selection, interaction in the assemblage produces a certain poetics, which pushes for recognizability and a discrete affective stance. As a result, literary forms and ideas of what is good literature are tacitly re-negotiated (sensu Gillespie 2014) to suit the logic of the system, while the logic itself recedes into the background and poses as neutral. I investigate these emerging literary forms through case studies from poetry, novels, and literary criticism, which represents the value of literature through the concept of readability. At the end of the chapter, I espouse the idea that whereas the human agent has the capacity for conscious deliberation, the logic of the interface encourages rather nonconscious (quick and affective) reactions. Thus set, the logical end point of the system is full automation: nonconscious agents operating predictably in the assemblage. In this speculative vision, the agents (human and technical) adapt fully to the logic of the assemblage and are able to fulfil each other's "desires" seamlessly. The automation of the system is complete when unpredictability is fully eliminated. Although there is increasing scholarly interest in the effects of digital media on various facets of culture and society, including literature, so far less research exists on the logic of these cultural interfaces itself, and on
how this logic itself influences literary production.1 This is not surprising, since the phenomenon is not only genuinely new, but also admittedly slippery and difficult – slippery because the object of research changes quickly, and difficult because the scholar is necessarily enmeshed in the very assemblage he is researching. I situate this chapter partly in the cultural-philosophical analysis of digital cultural interfaces (what its logic is) and partly in literary research (how it affects poetics). Accordingly, this chapter consists of two main parts. In the first part, I explore the conceptual view of digital cultural interfaces and their logic, and, in the second part, I discuss its potential influence on literary poetics. In the end, in the form of a coda, I present the speculative vision of automation.
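Before proceeding, the core of the logic of selection can be condensed into a deliberately minimal sketch (in Python). It is mine only as an illustration, not a description of any actual system; the names (selection_counts, menu, select) are hypothetical. It shows the circuit in its barest form: a selection is interpreted as success, success trains the prediction, and the prediction reorders what is offered for the next selection.

selection_counts = {"poem_a": 0, "poem_b": 0, "poem_c": 0}

def menu(counts, size=2):
    """The small window: only the predicted successes are shown."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:size]

def select(item, counts):
    """A selection is read as success and fed back into the ranking."""
    counts[item] += 1

offered = menu(selection_counts)      # the interface anticipates
select(offered[0], selection_counts)  # the user affirms the prediction
offered = menu(selection_counts)      # the system reorganizes itself

Note that poem_c may never be offered at all: whatever falls outside the window of this prediction loop remains, for the user, invisible.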
The Logic of Selection on Digital Cultural Interfaces

Lev Manovich uses the term cultural interface to describe a "human-computer-culture-interface – the ways in which computers present and allow us to interact with cultural data" (2016, 37). However, there is no need to restrict the term "interface" to the computerized sphere; an interface can also be non-electronic, such as that of the codex. For the purposes of this chapter, I define cultural interfaces simply as interfaces that present and allow us to interact with cultural data (for a broader discussion, see Kangaskoski 2019). By digital cultural interfaces I mean digital, usually interactive, interfaces like laptops, smart phones, tablet computers, their respective software and programs and so on, through which we access and interact with cultural data (for a more thorough exploration of the role of digital interfaces, see Kangaskoski 2017a). Quotidian examples of digital cultural interfaces are streaming services like Spotify and Netflix, social media like Instagram, Twitter, or Facebook, commercial sites for literature like Amazon Books, but also non-commercial interfaces such as the BBC in the UK or the YLE in Finland. It is noteworthy that the general logic I aim to explore here is not confined to commercial interfaces, which are its obvious generators. Instead, as I will shortly argue, the logic tends to appear natural, which allows it to almost unnoticeably extend to the public sphere – as well as to the academic, and to the artistic. To begin, let us look at digital interfaces as windows that distribute visibility for whatever they portray. One of the basic tensions of the logic of selection stems from the discrepancy between the small size of these metaphorical windows and the vast amount of theoretically available data. The window is small in three respects. First, relative to space. Only relatively little information fits on a screen, be it a laptop screen or a smart phone, in one viewing. The "opening window" of a bookshop's website, for example, typically fits only a few highlighted items (see Figure 4.1).

Figure 4.1 A screenshot by Matti Kangaskoski of an "opening window" of a bookshop's website

Second, relative to time and attention. The time to capture the user's attention is short, because the mode of browsing on these small windows is suitable for moving quickly from one thing to the next, typically without
engaging the "deeper" attentional cognitive modes (see, for example, Hayles 2012; Wolf 2019). The Instagram interface, for example, affords moving from one item to the next swiftly, only to stop for a moment when something grabs the user's attention. And third, relative to big data. Data is big, the
window is small. Only a small amount of the theoretically available data is encountered without further effort. The further effort would require a more sustained engagement, and does not happen by default, if at all, since the interface presents its opening window as the most relevant selection, de-emphasizing anything that is not viewed effortlessly. Therefore, the tautological-sounding "window of visibility" refers, on the one hand, to the metaphorical window of our screens, and on the other, to the visibility that such windows give to something over something else. The second part, visibility, is essential, as it emphasizes the fact that everything that falls beyond this window remains invisible, unnoticed, excluded, and insofar as we never encounter it, it does not exist at all. Visibility is closely tied with attention, and attention is what the interface tries to capture. From this point of view, the logic of the interface is fueled by the drive to become visible on this small window. That the window is small and the amount of data is big produces the necessity of filtering. The necessity of filtering coupled with the possibilities of digital processing creates new strategies of organizing cultural data into hierarchies of relevance. The background idea is that cultural production can be divided into discrete units, and these units can be ordered and reordered according to a certain logic of filtering. Big data algorithms sift through large volumes of data at relatively high speed with the general purpose of drawing useful inferences based on this data. The computational filtering of big data, of course, not only responds to this necessity but also aims to create new, otherwise unattainable insights. An example of a public good drawn from big data could be the prediction and prevention of flu epidemics, as Viktor Mayer-Schönberger and Kenneth Cukier envisioned in 2013. The most pertinent use of big data is, however, profiling, anticipating, and influencing user behavior. Antoinette Rouvroy and Thomas Berns discuss "algorithmic governmentality" to "refer very broadly to a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and preemptively affect possible behaviors" (2013, X). This very application of big data has created a wholly new era of business, which Shoshana Zuboff calls surveillance capitalism (2019). Briefly put, the more precise the data, the more precise the prediction of the user's behavior, which enables the move from profiling to influencing, and an ever more efficient pre-emptive creation of customer desire and increase in the probability of subsequent purchase. With big data, "our desires are made to precede ourselves", as Rouvroy and Berns (2013, XXVII) note. I will return to this below. In order to anticipate and capture the user's attention, the system needs information about its users. For this, interaction is key. The user's interaction as the signal of desire is the decisive element behind the logic – and makes the user the visible focus of the interface. When a user selects something, she gives meaningful feedback to the interface, and this individual selection, as well as the aggregate of selections made by other users, in turn reorganizes the interface. Other profiling metrics (location, age, various
preferences, social class, browser and shopping history, even silence in relation to other users) furnish the profile, but the constitutive act is that of selection: the user selects what captures her attention. Strategies for registering and harvesting the selections are manifold: clicks, likes, hearts, frowns, shares, retweets, time spent on a page, the point at which the viewing of a movie stopped, and so on. Although companies that work in affective computing are developing software that aims to recognize the user’s affect based on, for instance, facial expressions and skin temperature, at this moment the user’s interaction is required in the form of choosing some content over other content at the interface by pushing buttons or icons.2 This constitutive act of user selection is why I call this interface logic the logic of selection. As I will argue below, insofar as the act of selection happens before a conscious will or desire has been formed, selecting almost unnoticeably takes the place of conscious choice and simply becomes the affirmation of the algorithmic prediction. Still, this affirmation is crucial for the user to imagine herself as the selecting agent.

The logic of selection tends to pose as a neutral, commonly accepted good because its background ideology is freedom of choice and individual tailoring. As Wendy Hui Kyong Chun succinctly puts it: “New media are N (YOU) media; new media are a function of YOU. New media relentlessly emphasize you” (2016, introduction). Indeed, the interface is curated based on your needs to suit your needs. Reading the obligatory privacy notices and cookie policies, we find that relevance, individual tailoring, and better experience are highly revered concepts in arguing for the surrender of personal data. The Guardian, for example, commits to “Showing you journalism that is relevant to you”; “Showing you Guardian products and services that are relevant to you,” and “Working with partners to deliver to you relevant advertising” (2020, emphasis added). This is done in order “to improve your experience on our site” (ibid., emphasis added). Slate magazine is “committed to bringing you information tailored to your individual needs” (2020, emphasis added). Amazon states that “[t]he information we learn from customers helps us personalize and continually improve your Amazon experience” (2017, emphasis added).3 In explaining how the company’s search engine works, Google is “guided by a commitment to our users to provide the best information” and “to find the most relevant, useful results for what you’re looking for” (2020, emphasis added).

So, we as users, the YOU, are the focus of the interface. We are already accustomed to selecting our own cultural goods, selecting one unit over another from the prefiltered information on our small interface windows, be it a piece of news, a social media dispatch, a song, a book, a film, a television show, or a short video. We expect to be able to select and we expect the interface to deliver us a relevant menu to select from. A brief comparison with older media reveals how new and different this logic actually is: analogue television and radio relied on the logic of broadcasting, where the amount of selection was limited. The time, duration and content were
preselected and arrived at the consumer portal regardless of the individual user’s preferences (cf. Manovich 2001; see also Kangaskoski 2017a). The contemporary interface user expects any cultural content to be available when she wants it, for as long as she wants it, in whatever order she wants it, and in whatever format she wants it.4 The implication is that by constantly choosing from a pretailored menu, we express our individual desires and free agency, and the choices, much like our adaptive interfaces, appear as reflections of our identities. Of course, the logic of the interface is far from neutral and the means of selecting are far from free.
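The feedback loop described in this section can be condensed into a few lines of code. The following Python fragment is a deliberately minimal sketch of my own, not a reconstruction of any actual platform’s system: every name and number in it is hypothetical, and real recommender systems combine far more signals (profiles, recency, collaborative filtering). What it illustrates is only the structural point made above: selection increases visibility, and visibility increases the probability of further selection.

```python
# A minimal, hypothetical sketch of the logic of selection: items that
# get selected rise in the ranking, and the ranking determines which
# items fit into the small "window of visibility".

WINDOW_SIZE = 3  # only a few items are visible without further effort

# Engagement scores stand in for clicks, likes, shares, viewing time, etc.
engagement = {"item_a": 10, "item_b": 8, "item_c": 8, "item_d": 2, "item_e": 1}

def visible_window(scores, size=WINDOW_SIZE):
    """Filter all available items down to the small window: everything
    outside the window remains effectively invisible."""
    return sorted(scores, key=scores.get, reverse=True)[:size]

def select(scores, item):
    """A selection is meaningful feedback: it raises the item's score,
    which in turn raises its probability of future visibility."""
    scores[item] = scores.get(item, 0) + 1

# The loop: what is visible gets selected, what is selected stays visible.
for _ in range(5):
    window = visible_window(engagement)
    select(engagement, window[0])  # attention gravitates to the top item
    print(window)
# "item_a" consolidates its lead on every pass; "item_e" never appears.
```

Even this toy loop exhibits the self-reinforcing dynamic at issue: after a few passes, the initial ordering of the window, rather than any property of the items themselves, has settled what counts as relevant.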
Poetics of Digital Cultural Interfaces: Recognizability and Affective Stance

To reiterate my claim: there is a logic, it influences cultural production, and we can describe this influence in the form of a poetics. More precisely, the interaction with cultural interfaces, as described above, creates aesthetic-literary values and affects reading habits as well as opinions about literature. Readers and authors are initiated into this system and tacitly adapt their sensibility to fit the poetics of the interface. To subtly alter the form and content of one’s dispatch so as to increase its probability of success is what Tarleton Gillespie has called tacit negotiation (2014). Tacit negotiation is close to Rouvroy and Berns’s sense of algorithmic governmentality, the latter term covering a broader range. Under algorithmic governmentality, there is no conscious attempt to create a product that will capture the attention of users; rather, the negotiation is tacit, and it happens through changing aesthetic values, through tacitly adopting, or having already adopted, the logic (“a certain type of rationality”) of the interface. The Instagram interface, more on which below, is a good example of a platform for this kind of tacit negotiation. In Zuboff’s (2019) sense of surveillance capitalism, by contrast, the deliberate aim is to make a product based on “surveillance” data. Examples of these are Netflix’s “Originals”, which directly employ user data to produce new content (see Finn 2017 for an exposition of Netflix’s algorithms).

Let the case of Netflix serve as an initial example of the poetics of the interface. When I open my Netflix app, the system uses my profile (region, previous choices and ratings) as well as general criteria like newness, popularity (based on other users’ profiles), trending, and so on, to offer me an opening screen that is curated to my expected needs. Let us say that I then choose the science fiction series Altered Carbon, with season 2 out. By selecting the series, I give the system meaningful feedback. This feedback is set in context and interpreted. Then it is acted upon, meaning that my future interface, and that of others, changes based on this feedback. N. Katherine Hayles’s definition of cognition is “a process that interprets information within contexts that connect it with meaning” (2017, 28–9, emphasis in original). By this definition, the interface as technical system is a cognitive agent, albeit a nonconscious one. Curiously, as Ed Finn shows, Netflix
relies on a cyborgian “culture machine” to provide its recommendations. This cyborg is composed of algorithms and human “taggers” who are employed to watch Netflix’s content and to tag several variables, including the level of profanity, the strength of female characters, and the ambiguity or certainty of the outcome, thus producing “a model that fully embraces the gap between computation and culture” (Finn 2017, ch. 3). Referring back to Hayles (2017), the technical system, together with the human users and taggers, forms a cognitive assemblage in which both conscious and nonconscious agents interact to produce the interface. I will come back to cognitive assemblages in the final part of this chapter.

Now the pressing question is the following: why did I select Altered Carbon? Netflix has hundreds of items for me to select from, and yet I choose this. The answer is, however, simple: because it was there, and therefore it looked interesting to me. Why was it there and why was I interested? The answer is that the technical system, the cultural product, and I myself as the user have adapted to what is interesting. In other words, the assemblage has produced Altered Carbon as interesting. What I see on my small window influences my selection, and what I select influences what is offered. Selection means success, success means increased probability of future selection in the assemblage, and so on. Looking at the interface in this way puts us in the position of asking what the aesthetic features of interesting are; what makes a product capture our attention and become selection-worthy, which in turn enables its high position in the hierarchy of visibility. Let us therefore look at the concept of recognizability, which, in the arena of literature, produces the value of readability.

Recognizability and Readability

Most digital interfaces are browsed through quickly. By now there are many accounts of how digital interfaces afford hyper reading and the cognitive mode of hyper attention, which, more generally, encourage distraction (e.g. Hayles 2007, 2012). Maryanne Wolf writes in her book Reader, Come Home:

Hyper attention, continuous partial attention, and what the psychiatrist Edward Hallowell calls environmentally induced attentional “deficits” pertain to us all. From the minute we awaken to the alarm on one digital device, throughout the day, to the last minutes before we sleep when we perform our final, “virtuous” sweep of email to prepare for the next day, we inhabit a world of distraction. (2019, 71)

Wolf adds, in a somber poetic tone:

There is neither time nor the impetus for the nurturing of a quiet eye, much less the memory of its harvests. Behind our screens, at work and
at home, we have sutured the temporal segments of our days so as to switch our attention from one task or one source of stimulation to another. We cannot but be changed. (ibid., 72)
The implications for reading are total: the amount of literature we read, how we read, what we read, and why we read all change (ibid.). So what do we do with the “cognitive overload from multiple gigabytes from multiple devices”, Wolf asks (ibid., 75–76), and answers: “First, we simplify. Second, we process the information as rapidly as possible; more precisely, we read more in briefer bursts. Third, we triage” (ibid.). Triage, as the assigning of priority to increase the likelihood of success, is what algorithms already do for us, as discussed above. Wolf’s two other features, simplification and speed, refer to the first observable poetic feature I wish to present here: recognizability.

Recognizability is a broad umbrella term which comprises many subfeatures and is a precondition for others, as we will see. Although this is often the case, recognizability does not necessarily mean ease or simplicity. Instead, it means familiarity and relatability in both form and content. Rupi Kaur’s Instagram poetry illustrates the point. With 4 million followers, Kaur is one of the most popular Instagram poets at the time of writing (fall 2020). Below is a screenshot of one of her poems on her Instagram account (Figure 4.2).
Figure 4.2 A screenshot by Matti Kangaskoski of Rupi Kaur’s poem “responsibility” on her Instagram account
The form of these poems is what is most easily recognized as a poem: short and versified. The content is typically a small epiphany which the already profiled target audience will recognize, as it appeals to shared affective meanings (about which more below). To further minimize possible buffers for recognition – such as ambiguity – the poems usually include a picture or a keyword, sometimes even an explanation and a paraphrase. The reader can react by liking, commenting, and sharing. The form of the poem and the Instagram interface – a prime example of quick and hyper attentive browsing – go hand in hand. The conventional form is able to grab the attention of the reader, who does not need to exert concentration or engage in deeper attention in order to recognize and grasp the gist of the poem. The reader recognizes the textual form, decides quickly what it is about, agrees (tacitly, nonconsciously) on the affect, and then reacts by pushing a button to express an affective stance. Through these reactions, the interface delivers prompt feedback on which forms and contents gather the most reactions, comments and shares, thus providing the author with the necessary information to – subtly or drastically, consciously or nonconsciously – adjust their output to better anticipate the sensibility of such readers in future dispatches. In other words, this feedback-feedforward loop enables the tacit negotiation of forms and values within this cyborgian assemblage of human and non-human agents.

Numerous examples could be given and further explorations made of how the demands of recognizability change content and form across the cultural sphere. This would include, for example, the recycling of already known content, themes, stories, characters, and topics as manifested in the abundance of remakes, adaptations, memoirs, and historical dramas of well-known figures, some of which are discussed below. Another development is the amplification of the super star effect (sensu Rosen 1981), by which I mean the recent tendency of super stars to group together to maximize the recognizability of the product.5

For this chapter, the formally most interesting case, however, is the evolving concept of readability. Readability, like clarity and lucidity, is a value term vaguely describing positive qualities in a book. On the one hand, it implies simplicity and flow in language and thought, since complexity, difficulty, and density – all qualifications that figure as a buffer of some kind – are often terms opposed to it. On the other hand, readability implies some difficulty, a contradiction of expectations, since we would not evaluate something that is supposed to be readable, like a children’s book or a page-turner thriller, as readable. It rather must be something that is expected to be demanding, complex, or profound – or literary – that we might qualify as readable.6 It is clear that when something has to be digested quickly, like a status update on Facebook, one must use a “readable” form, such as the form of a story. A story told from the perspective of an individual, using a personal point of view and eliciting affective consensus, is more likely to go “viral” while operating as a moral exemplum, as the Finnish Dangers of Narratives project has argued (see, for example, Mäkelä 2018). Readability is the literary
marker of compression, and the contemporary credo is that we must compress lest nobody have the patience to hear what we have to say. It is obvious that commercial texts hold readability as a high value, as testified by the numerous services that offer readability counseling, such as Grammarly, Readable, or Viooly. But it is less obvious how readability as recognizability and ease of access should figure as a literary value. For this to happen, readability shaped by digital interfaces has to become an internalized good, a norm. To best illustrate the concept within the bounds of this chapter, I will focus on the examples of the Man Booker Prize from 2011 and 2018. These examples not only provide us with data about what is – and what is not – considered readable, but also give rise to speculation about the difference between conscious and nonconscious cognition later in this chapter.

In 2011, a main criterion of the Man Booker Prize judges was explicitly that the nominated books be readable. The chair of judges, Stella Rimington, said: “We were looking for enjoyable books. I think they are readable books. We want people to buy these books and read them, not buy them and admire them” (Bennett 2011). Fellow judge Chris Mullin noted that “such a big factor” for him was that the books “zip along.” Enjoyability and zip-alongability, implying speed, ease, and being entertained, are here connected to readability. Furthermore, Rimington, by saying that these books are not only bought and “admired”, but also read, implied that the “admired” category of books is unreadable, eliciting the good old idea of high literary works not being read by the “people”: an implied argument against elitism. The shortlist selections that literary prize institutions announce bear a similarity to the logic of the interface with their pursuit of offering the best and most relevant selection of all the available data. In her award-announcing speech, Rimington adds “enjoyability” as the subjective and fundamentally unarguable quality of experience, the ultimate criterion for judging the prize:

Read, enjoyed, marvelled at, thought about and even learned from. But definitely enjoyed, because it’s true of most people – that if they don’t enjoy a book, they’ll put it down unread. People enjoy very different things, of course, but I’m delighted that our shortlist has sold so well – and that very many people are telling us that they’re enjoying reading the books. (Rimington 2011, emphasis added)

Let us recall that catering to individual experience is a high value of the interface. On commercial platforms this is, of course, aimed at increasing profit. As almost a slip of the tongue, or at least a curious associative leap, Rimington, too, sets enjoyability in connection with sales, thus showing the tacitly adopted value of enjoyable user experience. That year the winning book was Julian Barnes’s The Sense of an Ending (2011), a book whose form and content are arguably conventional. It is linguistically simple and stylistically conversational, and, at 150 pages, relatively short. It perfectly captures the essence of a readable book: a high-literary aura without readerly friction.
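Services of the kind mentioned above do not, as a rule, disclose their exact metrics, but classic readability formulas give a sense of how “readability” is operationalized computationally. The sketch below implements the well-known Flesch reading ease formula; the naive syllable counter is a rough heuristic of my own for illustration (real tools rely on pronunciation dictionaries), and the example sentences are invented.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; real tools use pronunciation data."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:  # rough silent-e correction
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015 * (words per sentence)
    - 84.6 * (syllables per word). Higher scores mean 'easier' text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(len(sentences), 1))
            - 84.6 * (syllables / max(len(words), 1)))

# Short declarative prose scores as "readable"; long, dense sentences do not.
print(flesch_reading_ease("The milkman died. Nobody was surprised at all."))
print(flesch_reading_ease(
    "The reconciliation of ambiguity would require conscious deliberation, "
    "which, for a system predicated on instantaneous recognition, is too slow."))
```

A formula of this kind rewards short sentences and short words; it can register the difference between prose that “zips along” and prose that resists speed, but nothing of why a text resists it – a limitation worth bearing in mind for the case discussed next.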
The debate was reignited in 2018 when the Man Booker Prize recipient was Milkman by Anna Burns. Although the novel was generally well received, especially in Ireland, many readers found its lack of names for characters and its particular style “odd,” “impenetrable,” “hard work,” and “challenging” (see Leith 2018).7 A Goodreads review from Debbie summarizes the spirit of this difficulty with the title “No no no no no!”:

As in hell no I didnt finish it! I made it through a third of the book but I just couldnt take it anymore. It was right up there with a root canal – maybe worse, because it required intense concentration. A fascinating book that sadly was unreadable for me. (Goodreads 2018a)

However, “ease of consumption isn’t the main criterion by which literary value should be assessed”, as the Guardian’s Lara Feigel expressed the dilemma in defense of Milkman (Leith 2018).

The novel is especially interesting in terms of recognizability, since it makes a point of naming neither characters nor places, nor the time it is set in. A reader unfamiliar with recent historical events in Ireland has difficulty pinning down the place and time of the novel (not that this knowledge is required in order to read the novel). Moreover, instead of being named, the characters are designated by their social role in the community or by their relation to the protagonist: middle daughter, maybe boyfriend, brother-in-law, poison girl, or indeed, milkman. In my view, far from being pretentious, this formal choice subtly signals a closed community where people’s social roles dominate over their individual identities.

In addition to the buffer of unconventional names, the narration of the novel actively pushes back on its “zip-alongability”: it repeatedly postpones resolutions and halts action to let the narrator recount past events or describe the town’s people and sociopolitical situation. Indeed, the central forward drive – the haunting milkman and the protagonist’s relationship to him – is introduced on the first page and, as is expected, the climax is postponed until (almost) the end. However, Milkman reroutes this expectation by turning the climax into an anticlimax: towards the end (but not as the end) the protagonist hears that the milkman has died – unrelated to the central interpersonal drama, simply somewhere else. This anticlimax signals that the reader should look elsewhere for the central import of reading this novel, and thus places emphasis on subtler developments. Ingeniously, the novel uses the conventional plot device as a “zip-along” feature but uses it only enough to usher the reader through the long digressions until almost the end, where the device is quite simply dropped, as if to say: it was not important, but it kept you reading. Of course, it does not work the same way for all readers; Debbie, for instance, stopped reading after initial enthusiasm because of the long digressions and density of language:
It was just too friggin’ dense and there were too many tangents. I had to reread and reread sentences until I had severe brain pain. [ … ] Hey, I told myself, I can end this root canal this minute and simply stop reading. (Goodreads 2018a)
The background assumption of readability is that there is a transparent and clear way of using language, and deviating from this is either a flaw or unnecessary ornamentation. Similarly, the background assumption for digital interfaces is that they should strive to be natural and intuitive. According to Richard Seymour, “In medieval English, ‘clarity’ meant ‘divine splendour’. Luminescence, rather than transparency; the presence of inspiration, rather than the absence of clutter. In this sense, clarity might be something you encounter in a dream, rather than in straightforward prose” (2019, 13). Seymour argues that there is a long history of assuming that a neutral way of using language exists, and – despite the word’s etymology as “divine splendour” – this neutrality is expressed with reference to qualifiers of transparency and clarity.

The metaphor of the window connects the clarity expected from language to the neutrality assumed of digital interfaces. Seymour quotes John Berger as saying “One does not [ … ] look through writing on to reality – as through a clean or dirty windowpane. Words are never transparent” (ibid.). For both language and the digital interface, the metaphor of the window implies transparency and objectivity, and in both cases the materiality of the very medium, its form, becomes an unwanted side note. As Lori Emerson (2014, 13–24) shows, the ideology of the iPad, for example, has been precisely that of transparency, intuition, and naturalness. Sure enough, in popular science fiction future citizens more often than not operate on literally transparent screens (see, for example, the television series The Expanse, 2016–). Transparency is, of course, a facet of readability as lucidity, and from this perspective, any text that foregrounds language as medium, for example by not naming its characters, becomes a hindrance, an obstacle for reading: “pretentious,” “self-indulgent,” “odd,” as with Milkman. Seymour illustrates the issue: “Any writer who seems to be having a good time, reveling in the jouissances of writing, enjoying with rococo swagger the ornate sensuousness of language, is self-indulgent, a source of resentment” (ibid., 27). Instead, Seymour continues, the implication of clear, readable writing is that it is democratic (ibid., 14), the echoes of which we saw in Stella Rimington’s appeal for books that are read and enjoyed “by most people” and not only admired.

We could testify to the negotiation of readability in many more arenas, such as digital writing assistants, which, like Viooly, promise to analyze and improve your “Readability: How understandable is your writing? Warmth: How is your writing connecting? Power: What dynamic is your writing conveying?” (Viooly 2020). In a similar ethos, Kindle Direct publishing guidelines prohibit content that is wrong or disappointing, including content “that does not provide an enjoyable reading experience” (see McGurl 2016). Here it suffices to point out that, similarly to the idea of interesting
produced in the Netflix assemblage, the values of literature as readable, dense, lucid, or self-indulgent are continually re-produced and re-negotiated under the pressures of compression (triage, speed and simplification) that these literary-cyborgian assemblages encourage and even require. Notably, as Wolf’s quotes from Reader, Come Home (2019) illustrate, the habits of reading, as formed by the cognitive-cultural-material affordances of digital interfaces, are not confined to those interfaces but come to pervade all readerly interactions, including reading paper-bound novels. Readability pertains to the form of language, but recognizability can also be found on the level of content, and that is the case with quick affective stance.

Quick Affective Stance

The second crucial aspect of the poetics of digital cultural interfaces I wish to highlight here is that of quick affective stance. Of course, any dispatch must first be recognized before it can be reacted to quickly. Only after recognition can we have an affective stance towards it. Affective response is harvested everywhere in the digital realm, not only in cultural products or opinions and ideas in social media, but also in services from food to medicine to cleanliness to news and to immigration (see examples in Figures 4.3 and 4.4, and Figure 4.5 below).
Figure 4.3 “Please rate your satisfaction with your experience today” at Food Republic. Photo by Matti Kangaskoski
Figure 4.4 “How clean are these toilets?” at Heathrow Airport. Photo by Matti Kangaskoski
As stated above, the interface requires feedback in the form of user selection, and one major form of this feedback is affective rating, expressed directly by pushing buttons, electronic or digital. Social media is a prime example of affective interfaces, but the phenomenon is much broader (see, for example, Hillis et al. 2017 on social media’s affects). Additionally, the aspect of pushing buttons to affectively evaluate the quality of cultural data draws us towards the concluding topic of nonconscious agency and automation.

Harvesting quick affective response on the interface continues the history of pushing buttons and the embodiment expressed in this gesture, the tactility of affect. We love to push buttons, and pushing them is a habitual, everyday action so common that it mostly goes unnoticed (cf. Kangaskoski 2017b; for the habituality of new media, see Chun 2016). We have more than a century-long history of pushing buttons, during which the electric push button has been remediated onto the screen, which we manipulate with our hands either via the proxy of the mouse or by pushing directly on the screen with our fingers (Kangaskoski 2017b). Shaun Gallagher (2017) postulates the concept of “affordance space”: within this cognitive space, our hands play a big part in how we grasp the world. In fact, the reachable peripersonal space around us can be seen as the manipulatory area (ibid., 179–80). We pay more attention to things that
Figure 4.5 “How was your immigration experience today?” at Heathrow Airport. Photo by Matti Kangaskoski
our hands can reach, and if there is something within the reach of our hands, we are inclined to handle it (ibid.). In short: a button affords pushing, and when it is within our hand’s reach, we are inclined to push it. This tacit tactile gesturing is a fundamental part of affective participation since, as Gallagher (ibid., 176–7) points out, the hand can move faster than the conscious mind deliberates, making the registering of the affective stance by pushing a button effectively nonconscious: often the hand has already clicked on a link, tapped on an icon, or selected a video before the user manages to consciously deliberate on what to do.
A good example of the harvesting of affect is provided by the above-mentioned, by now ubiquitous, emoticon selections in shops and other services (see Figures 4.3–4.5). The electric buttons on these feedback interfaces are typically placed at waist height and near the exit so that the exiting hand floats as close as possible and is inclined to push the button. Selection happens quickly and draws from the affective experience of the moment. As a result, the quality of the service is measured with the metric of quick, nonconscious affect. Moreover, the harvested essence is, of course, your experience. As shown above, the same logic applies on digital interfaces and is similarly connected to the emphasis on individual experience as the measure of success, experience being an especially good harvesting substance because it does not matter whether it is true or untrue, right or wrong – it is always indicative of future success. So, in order to succeed in the system, the product has to amass as many expressions of affective stance as possible. In addition to providing the all-important data for prediction, the simple act of expression (as selection) increases its ranking in the hierarchy of visibility, which increases its probability of further selections. Thus are formed the ubiquitous Top 10 lists – of most read, most viewed, most popular – on cultural interfaces.

To reiterate, the cultural product has to capture the user’s attention and be recognizable without extra cognitive effort. Furthermore, the habit of rating products and services via affective experience changes the user’s engagement with cultural products and pushes it even further towards the affective. Now the measure of a cultural product’s quality is your experience. Your experience monadologizes the engagement by severing the connection to any objective or intersubjective plane and refers to the individual only. Your experience is not about whether something is good or bad, true, false, healthy, useful, ethically sound, or well argued, all of which would refer to a broader network of values and social agreements and require deliberation. Rather, it is just about the YOU, individual affective experience in the moment. This logic pushes cultural products to elicit a quick and discrete affective stance so that, without knowing the content of the product, the user can swiftly grasp its affective meaning, which sets the valence of the user’s experience. Synthesizing previous research, Jens Ambrasat et al. characterize affective meaning as follows:

Although deliberate thought is based on the symbolic and denotative meanings of concepts, automatic and intuitive processes are often driven by affective and connotative meanings. Affective meaning differs from lexical or denotative meaning in that it refers to the emotional connotation attached to identities, acts, objects, or the words representing them. Culture and socialization provide humans with stable structures of both denotative and affective meanings of basic concepts of sociality. Affective meanings are sources of implicit culture specific knowledge guiding rapid, automatic social perception and behavior. (2014, 8001)
Therefore, eliciting shared affective meaning enables instant reaction and sharing, without extra cognitive effort. The discreteness I refer to here relates to compression and means unambiguity. Discreteness bears a resemblance to a fundamental feature of digitality, since “digital”, conceptually speaking, means something that is divided into a finite set of discrete parts. For example, the digital computer reads the physical differences on its magnetic disk as either “on” or “off”, the symbolic ones and zeros. The hyper attentive recognition of cultural content happens quickly and by excluding ambiguity, since the reconciliation of ambiguity would require conscious deliberation, which, for this system, is too slow. So, fundamentally, the discreteness of the affective stance is a part of recognizability. Affective meaning is therefore harvested with the idea of individual experience, which, however, draws from shared affective meanings. Affective meanings are thus paradoxically simultaneously individual and shared, and since they are connotative and nonconscious instead of deliberative, they are also simultaneously vague and discrete. A viral hashtag, such as #metoo, is a good example of a discrete signal of an affective stance. When attached to a cultural product, it can signal an unambiguous, instantly supportable good, or the opposite, depending on your affective stance. In reality the content is highly complex. Virality, the ultimate success term for digital cultural interfaces, can only be achieved with an instantly shareable product. To be instantly shareable, a product or its signal must be quickly recognized and must elicit a discrete affective stance.8
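As a purely illustrative exercise, this discreteness can be given a toy formalization along the three dimensions of affective meaning distinguished by Schröder and Thagard (2013; see note 8): evaluation, potency, and activity. The Python sketch below is my own speculative operationalization, not an implementation of their model; the numeric values attached to the reaction buttons are invented for the example.

```python
# Toy formalization of discrete affective stance: each reaction button
# carries a fixed (evaluation, potency, activity) value. The numbers
# below are invented for illustration, not empirical ratings.
REACTIONS = {
    "like":  ( 1.0, 0.5, 0.5),
    "love":  ( 2.0, 1.0, 1.0),
    "angry": (-2.0, 1.5, 2.0),
    "sad":   (-1.5, 0.5, 0.2),
}

def affective_profile(counts):
    """Aggregate discrete reactions into one evaluation-potency-activity
    vector: vague, shared meaning compressed into unambiguous signals."""
    total = sum(counts.values()) or 1
    e = sum(REACTIONS[r][0] * n for r, n in counts.items()) / total
    p = sum(REACTIONS[r][1] * n for r, n in counts.items()) / total
    a = sum(REACTIONS[r][2] * n for r, n in counts.items()) / total
    return e, p, a

def shareability(counts):
    """Proxy for virality: driven by potency and activity,
    indifferent to whether the evaluation is positive or negative."""
    _, p, a = affective_profile(counts)
    return p + a

print(shareability({"like": 500, "sad": 20}))    # mild, low-arousal
print(shareability({"angry": 500, "love": 20}))  # charged, high-arousal
```

On this toy measure, a dispatch that gathers “angry” reactions outranks one that gathers mild “likes”, echoing the speculation in note 8 that virality tracks the potency and activity of connoted meanings rather than their positive or negative evaluation.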
Coda: Automatic Literature?

By way of conclusion, I will present a speculative vision of the goal, or end point, of the logic of digital cultural interfaces: full automation. In order to paint the picture, let us return to the idea of cognitive assemblages. We have established that the digital cultural interfaces act as nonconscious agents in this assemblage, interpreting and acting upon feedback. Similarly, insofar as the logic of selection is propelled by automatic and quick affective response to instantly recognized and affectively arousing stimuli, the human user of the assemblage is also, to this extent, nonconscious. The qualifiers “insofar” and “to this extent” are crucial, since it is difficult to ascertain the proportion of conscious and nonconscious interaction. To this extent, however, the logic of the assemblage is driven by nonconscious cognitive agents reacting to each other’s stimuli. Without the buffer of the slower, more ambiguous and less predictable consciousness, the prediction of these reactions can and will become more and more precise. When all unpredictability is eliminated, we are left with full automation: the act of selection becomes necessary only as the affirmation of the prediction.

But, if the interface serves what we desire, do we not then get what we desire? In this case, the interface is completely transparent in the sense of there being no friction between the selection and the user. Is this not the perfect democratic machine? We get what we want, instantly. Of course, these questions seem innocent only if we forget the form of the interface, its materiality, its habitual contexts of use, the ways in which it works, and the form of our engagement
with it, all of which have been the topic of this chapter. The form influences how we signal our desire, and that influences what the interface thinks is the object of our desire, which influences what, in turn, becomes the object of our desire, and so on. This signaling is done quickly and affectively, as nonconscious reactions to stimuli, which, when predictable, is ultimately automation. Conscious reflection requires more time and effort, and this is what the interface design and ideology, its logic, actively attempts to bypass.

Stella Rimington and the Man Booker jury in 2011 were criticized for foregrounding readability and zip-alongability in their selection criteria. Looking at the controversy through the lens of this chapter, perhaps it was born of our expectation that the prize juries should operate precisely as conscious agents in the assemblage, producing unpredictable results, less amenable to the easier and faster, automatic reading.

What could the prospect of automation mean for the dimly conscious human participant? It means that the assemblage is gradually able to predict the individual’s will to the extent that, as Bernard Stiegler suggests, the prediction precedes the individual’s will and simultaneously empties it (2019, 8). In other words, the feedback loop tightens so far as to be able to capture the subject’s sensibility by predicting what she wants to see, read, and hear before any conscious engagement takes place, thereby making the act of deliberation futile or preceding it so that it is not needed. Rouvroy and Berns, speaking of predictive algorithms, concur that “[t]he aim is therefore to prompt individuals to act without forming or formulating a desire” (2013, XIII). In the ultimate vision of the logic of selection, the human participant is depleted of her will and conscious desire without actually noticing it, because the by now futile act of selection replaces them; selection begins to represent individual will and freedom. Here the role of conscious thought is reserved for, at best, confabulation, or fabulation after the fact. The role of this fabulation is, of course, only to justify what has already taken place.

In contrast to this bleak vision, I wish to end this chapter on, if not a hopeful, then at least a positive note. Research has understandably lagged behind the fast emergence of the now ubiquitous digital life, but, as mentioned in the beginning, its study has attracted increasing attention. By properly identifying the logic of digital cultural interfaces and by investigating the aesthetic features, values and opinions that the broader assemblage produces, we can begin to ask the question of whether this kind of logic and this kind of poetics are, upon conscious deliberation, desirable. My hope is that the current momentum, to which the volume at hand testifies, has the ability to create a break, or at least a buffer, to the seeming inevitability and even hegemony of the logic of digital interfaces – even if this hope is a confabulation, an afterthought, a dim glimpse of consciousness in the otherwise seamless progression of the natural logic of selection.
Notes

1 In terms of literary research, the focus has been on remediation and intermediation, by which I mean how media forms travel from one medium to another (e.g. N. Katherine Hayles 2005; 2012); on how word processing software has influenced writing (see Kirschenbaum 2016); on how a digital interface for literature such as Amazon Books has influenced US literature (McGurl 2016); on how digital interfaces influence reading (e.g. Wolf 2019); or on digital literature per se (cf. Kangaskoski 2017a; Rettberg 2019).
2 See, for example, the company Affectiva, which promises that “[w]ith facial coding and emotion analytics advertisers can measure unfiltered and unbiased consumer emotional responses to digital content. All your panelists need is internet connectivity and a standard webcam” (Affectiva n.d.).
3 Amazon updated its privacy terms in 2020. In the new policy the quoted sentence is edited into less experiential terms: “We collect your personal information in order to provide and continually improve our products and services” (Amazon 2020).
4 We expect content to stay on interfaces such as Spotify, and to be available to us at all times. At the same time, we fully expect the interfaces to change from one visit to the next.
5 Ed Sheeran and Beyoncé perform a song (“Perfect Duet” 2017) together; the cast of a movie groups as many super stars together as possible. In literature, two recent Finnish examples testify to this trend: Kari Hotakainen, a well-known Finnish writer, published a book about Kimi Räikkönen, a Finnish and international super star Formula 1 driver; and Jari Tervo, a similarly well-known Finnish author, wrote the biography of a Finnish entertainment icon, Vesa-Matti Loiri.
6 A literary case in point was the mixed reception of Donna Tartt’s The Goldfinch (2013). It presented a high-literary aura while employing a conventional plot-structure and clichéd characters. It similarly divided its reception into those who lauded it as a literary masterpiece and those who debunked it as children’s literature. See, for example, Peretz 2014.
7 See also the reviews on goodreads.com, where the “most popular answered question” concerns the lack of character names in the novel (Goodreads 2018b).
8 It should be stressed that the affective meanings need not be only positive. As Schröder and Thagard (2013, 259) explain, the representation of affect in sociological research can be mapped onto three dimensions: 1) evaluation, 2) potency, and 3) activity. The first concerns positive/negative evaluations, the second power/weakness, and the third arousal/disarousal, excited/calm, or indeed active/passive delineations. I would speculate that the virality of the dispatch depends on the power and activity of the connoted meanings, be they positive or negative.
References

Affectiva. n.d. “Solutions.” Accessed October 10, 2019. https://www.affectiva.com/what/products/.
Amazon. 2017. “Amazon Privacy Notice.” Last modified August 29, 2017. https://www.amazon.com/gp/help/customer/display.html?ie=UTF8&nodeId=16015091.
Amazon. 2020. “Amazon Privacy Notice.” Last modified January 1, 2020. https://www.amazon.com/gp/help/customer/display.html/ref=sxts_snpl_5_1_f942be4f-5b17-47d3-bea7-61297ed40b47?pf_rd_p=f942be4f-5b17-47d3-bea7-61297ed40b47&pf_rd_r=N2J6TA9T0HCCQQF05H14&pd_rd_wg=YJYOP&pd_rd_w=ErUQc&nodeId=468496&qid=1570635271&pd_rd_r=999a42d0-761e-493a-8140-04d3c34bbe8b.
Ambrasat, Jens, Christian von Scheve, Marcus Conrad, Gesche Schauenburg, and Tobias Schröder. 2014. “Consensus and Stratification in the Affective Meaning of Human Sociality.” PNAS 111, no. 22 (June 3): 8001–8006.
Bennett, Catherine. 2011. “The Man Booker Judges Seem to Find Reading a Bit Hard.” Guardian, September 11. https://www.theguardian.com/commentisfree/2011/sep/11/catherine-bennett-dumbed-down-booker-prize.
Chun, Wendy Hui Kyong. 2016. Updating to Remain the Same: Habitual New Media. Cambridge, MA and London: MIT Press. Kindle Edition.
Emerson, Lori. 2014. Reading Writing Interfaces: From the Digital to the Bookbound. Minneapolis: University of Minnesota Press.
Finn, Ed. 2017. What Algorithms Want. Cambridge, MA and London: MIT Press. Kindle Edition.
Gallagher, Shaun. 2017. Enactivist Interventions: Rethinking the Mind. Oxford: Oxford University Press.
Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot, 167–193. Cambridge, MA and London: MIT Press.
Goodreads. 2018a. “Debbie’s Reviews > Milkman.” Review on Goodreads website. n.d. https://www.goodreads.com/review/show/2630302362?book_show_action=true&from_review_page=1.
Goodreads. 2018b. “Milkman.” Goodreads website. n.d. https://www.goodreads.com/book/show/36047860-milkman.
Google. 2020. “How Search Works.” n.d. https://www.google.com/intl/en_uk/search/howsearchworks/.
Guardian. 2020. “Cookie Policy.” Last modified August 20, 2020. https://www.theguardian.com/info/cookies.
Hayles, N. Katherine. 2005. My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press.
Hayles, N. Katherine. 2007. “Hyper and Deep Attention: The Generational Divide in Cognitive Modes.” Profession 13: 187–99.
Hayles, N. Katherine. 2012. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press.
Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Hillis, Ken, Susanna Paasonen, and Ken Petit. 2017. Networked Affect. Cambridge, MA and London: MIT Press.
Kangaskoski, Matti. 2017a. Reading Digital Poetry: Interface, Interaction, and Interpretation. Doctoral dissertation. Helsinki: University of Helsinki.
Kangaskoski, Matti. 2017b. “From Pressing the Button to Clicking the Mouse – The Shift from Static to Dynamic Media.” In Dialogues on Poetry: Mediatization and New Sensibilities, edited by Stefan Kjerkegaard and Dan Ringgaard, 127–148. Århus: Aalborg Universitetsforlag.
Kangaskoski, Matti. 2019. “Affordances of Reading Poetry on Digital and Print Platforms – Logic of Selection vs. Close Reading in Stephanie Strickland’s ‘V-Project’.” Image [&] Narrative 20, no. 2: 35–50.
Kirschenbaum, Matthew G. 2016. Track Changes: A Literary History of Word Processing. Cambridge, MA: Belknap Press.
Leith, Sam. 2018. “Pretentious, Impenetrable, Hard Work … Better? Why We Need Difficult Books.” Guardian, November 10. https://www.theguardian.com/books/2018/nov/10/anna-burns-milkman-difficult-novel.
Mäkelä, Maria. 2018. “Lessons from the Dangers of Narrative Project: Toward a Story-Critical Narratology.” Tekstualia 4, no. 1: 175–186.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: MIT Press.
Manovich, Lev. 2013. Software Takes Command: Extending the Language of New Media. London and New York: Bloomsbury Publishing.
Manovich, Lev. 2016. “The Language of Cultural Interfaces.” In New Media, Old Media: A History and Theory Reader, edited by Wendy Hui Kyong Chun and Anna Watkins Fisher with Thomas W. Keenan, 37–51. New York and London: Routledge.
Mayer-Schönberger, Viktor, and Kenneth Cukier. 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston, MA and New York: Eamon Dolan.
McGurl, Mark. 2016. “Everything and Less: Fiction in the Age of Amazon.” Modern Language Quarterly 77, no. 3: 447–472. doi:10.1215/00267929-3570689.
Peretz, Evgenia. 2014. “It’s Tartt – But Is It Art?” Vanity Fair, June 11. https://www.vanityfair.com/culture/2014/07/goldfinch-donna-tartt-literary-criticism.
Rettberg, Scott. 2019. Electronic Literature. Cambridge: Polity Press.
Rimington, Stella. 2011. “Dame Stella Rimington’s Speech from the Man Booker Prize 2011.” The Booker Prizes, October 16. https://thebookerprizes.com/news/2011/10/16/dame-stella-rimington%E2%80%99s-speech-man-booker-prize-2011.
Rosen, Sherwin. 1981. “The Economics of Superstars.” The American Economic Review 71, no. 5: 845–858.
Rouvroy, Antoinette, and Thomas Berns. 2013. “Gouvernementalité algorithmique et perspectives d’émancipation.” Réseaux 177, no. 1: 163–196. doi:10.3917/res.177.0163.
Schröder, Tobias, and Paul Thagard. 2013. “The Affective Meanings of Automatic Social Behaviors: Three Mechanisms That Explain Priming.” Psychological Review 120, no. 1: 255–280. doi:10.1037/a0030972.
Seymour, Richard. 2019. “Caedmon’s Dream: On the Politics of Style.” Salvage Quarterly 6: 7–29.
Slate. 2020. “Slate’s Privacy Policy: Our Data-Collection Practices.” Last modified June 5, 2020. https://slate.com/privacy.
Stiegler, Bernard. 2019. The Age of Disruption: Technology and Madness in Computational Capitalism. Translated by Daniel Ross. Cambridge: Polity Press.
Viooly. 2020. “Viooly.” Website home page. n.d. http://viooly.org/login.
Wolf, Maryanne. 2019. Reader, Come Home: The Reading Brain in a Digital World. New York: HarperCollins.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.
5
Ghosts Beyond the Machine “Schizoid Nondroids” and Fictions of Surveillance Capitalism Esko Suoranta
Between the iron gates of fate, The seeds of time were sown, And watered by the deeds of those, Who know and who are known; Knowledge is a deadly friend When no one sets the rules. (Peter Sinfield, “Epitaph” on In the Court of the Crimson King, 1969)
Introduction: Schizoids Among Us

In 1969, King Crimson and lyricist Peter Sinfield invoked the 21st-century schizoid man as the dark fate of future generations arising from the military-industrial excesses of the time. That same year, Philip K. Dick’s novel UBIK described a series of false realities penetrated by the cold logic of aggressive consumption, both capitalist and cannibalistic in nature. Fifty years later, with the advent of intricate digital networks and the acceleration of information capitalism, King Crimson’s schizoid men and Dick’s schizoid androids have been joined by a new system that is hostile to life: the schizoid nondroid.1

In this chapter, I aim to connect aspects of digital and technological development, posthuman traumatic materialism, and global surveillance capitalism into a figurative concept that I call the schizoid nondroid. The analysis builds on N. Katherine Hayles’s observation that “[h]uman complex systems and cognitive technical systems now interpenetrate one another in cognitive assemblages, unleashing a host of implications and consequences that we are still struggling to grasp and understand” (2017, 176). Schizoid nondroids are such an implication, a speculative synthesis of humans and technology as well as of the information capitalist systems that profit from the collection and modification of behavioral data. As such, my approach is closely aligned with the environmental understanding Susanna Lindberg and Hanna-Riikka Roine outline in the Introduction to this collection. The schizoid nondroid emerges as an example of the pervasive environmental technologies whose “effects [on]
human ethics and justice” are obscured and with which “digital reality is primarily constructed” to benefit private capital (2021, 10).

My central claim is that schizoid nondroids are made up of cognitive assemblages in Hayles’s sense, that is, they include networks of human and technological cognizers, and they wield the powers of surveillance capitalism in Shoshana Zuboff’s sense.2 Further, they operate through affordances provided by both networked technologies and the humans enmeshed with them, engendering an accelerated mode of what Pramod K. Nayar terms traumatic materialism. In Nayar’s view, networked posthuman existence in general marks individual bodies as interfaces that are central to the flow of information. These intersections of data and flesh can be traumatizing, even when they provide the benefits of technologically extended cognition.3 I develop Nayar’s view by showing that when such interfaces are present in schizoid nondroid assemblages, one of the effects of traumatic materialism is that humans adopt schizoid tendencies in service of the assemblage and its drive for increasing profits.

These features come to light in my analysis of two works of speculative literature, Dave Eggers’s The Circle (2013) and Malka Older’s Infomocracy (2016). Both novels present views into storyworlds where cognitive assemblages are at the heart of technologically advanced societies, one contemporary, the other future. Hayles posits that an essential quality of cognitive assemblages, and one that separates them from networks, is the way they are “constantly adding and dropping components and rearranging connections” (Hayles 2017, 2). In the novels, however, the operation of complex systems gives rise to struggles over such free participation in cognitive assemblages and the circulation of information within them. Further, the novels represent different takes on the direction in which surveillance capitalism can take civilization. In The Circle, the world succumbs to techno-capitalist totalitarianism in a satirical fashion,4 while in Infomocracy the tools of surveillance capitalism have been put to the service of a new, global form of technologically mediated democracy. Thus, the novels also illuminate different aspects of the schizoid nondroid. Eggers’s novel shows how the nondroid is dependent on its human constituents to exert control over the members of the assemblage, while in Older’s debut the ideals of cognitive assembling can be seen, in a chilling parallel to information warfare in the 21st century, to harbor the potential for severe cases of traumatic materialism. Both novels also stand apart from those mainstream narratives, referred to in the Introduction, where issues of ethics are engaged through positing humanoid robots or androids as narrative devices for their interrogation.

To further clarify my neologism, I want to underline that the nondroids are schizoid because they act without regard to the effects they have on humans within their networks. Furthermore, they are nondroids because they incorporate the anxieties long associated with artificial humans, but are distinctly nonhuman wholes, despite including and being dependent on
human action. They seek to manipulate both technology and people in order to commodify behavior in the form of data, creating revenue in the process. In so doing, schizoid nondroids are indifferent to humans, mostly because their negative effects on individuals are externalized and, usually, do not directly affect the revenue created in any significant way. This is due to the context of increasing machinic autonomy in which cognitive assemblages function. As Hayles puts it in Chapter 1, “As machines communicate more with each other than with us, the intervals and pervasiveness of machine autonomy increase – areas where machines make decisions that affect not only other machines but also humans enmeshed in cognitive assemblages” (2021, 40). That autonomy, regardless of a priori design or intention, is double-edged, affording both morally laudable and ignoble results for anyone within the sphere of influence of a cognitive assemblage. The expansive nature of surveillance capitalism makes opting out of such assemblages increasingly difficult.

For Hayles, cognitive assemblages are characterized by the unrestricted movement of constituents that circulate information, interpretations, and meanings between them. This leads to a distribution of cognition and decision-making powers among both human and technological constituents (Hayles 2017, 4, 116). My analysis intends to show that this movement and distribution become warped when cognitive assemblages are part of systems with a capitalist profit drive. It is then that they are incorporated into schizoid nondroids: assemblages of humans and technology, where the whole extracts data and labor from its members in order to make it impossible for the members to leave their exploitative assemblages.

I first spell out the analytical move from schizoid androids to nondroids. Then, I illustrate the dependence of schizoid nondroid assemblages on their human constituents and the spread of a schizoid ethos through the expanding assemblages. Third, I expand the concept of traumatic materialism to more accurately describe experience in the body/technology interfaces that constitute life under information capitalism. Finally, I suggest that understanding the interfaces of bodies and technology is central for the critical reassessment of our surveillance capitalist present.
From Androids to Nondroids

I employ the analytical move from androids to nondroids to tap into the specific changes that the latest stage of information capitalism has brought to contemporary experience, as well as into fiction’s attempt to make sense of that experience. The droids and the phenomena of which they are abstractions are illustrated by a brief description of a transhuman character in William Gibson’s science fiction novel The Peripheral (2014):5 “Her head was perfectly still, eyes unblinking. He imagined her ego swimming up behind them, to peer at him suspiciously, something eel-like, larval, transparently boned … And then she smiled. Reflexive pleasure of the thing behind her eyes” (12).
Publicist Wilf Netherton, the focalizing character, perceives Daedra, his erstwhile lover and current client, in a distinctly nonhuman, even monstrous light, going so far as to separate her embodied self from the workings of her seemingly alien mind. This is much like the way Hayles reads similar characters in Philip K. Dick’s novels as schizoid androids that “represen[t] the coming together of a person who acts like a machine with a literal interpretation of that person as a machine” (Hayles 1999, 161–2). According to Hayles, schizoid androids are intelligent, unable to feel empathy, do not understand others as humans, and are often gendered female. For her, they are the figure at the center of systems incorporating cybernetics, capitalism, gender, delusion, and reality (1999, 161). Therein lies the conceptual difference from nondroids: where the android is a personified figure, residing at the center of an earlier stage of complex assemblages, the nondroid can be seen as a system that encompasses such assemblages and that has a distinctly inhuman modus operandi. The schizoid nondroid is not a singular figure into which the relations of capital, technology, and power flow, or from which they emanate; instead, it represents the emergent schizoid tendencies of cognitive assemblages commodified by surveillance capitalism. Similarly, Hayles’s recent focus on “nonconscious cognitive assemblages through which … distributed cognitive systems work” (2017, 2) is brought to bear on her earlier analysis of the schizoid ethos lurking in networked experience under capitalism.

Even as information capitalism and its technological systems have taken great strides since Dick’s career and Hayles’s seminal How We Became Posthuman, personified androids, as imagined by both Dick and Gibson, do not walk among us. For Dick, androids often represent the inhuman and unethical as they incorporate dimensions of artifice and power (Suvin 1992, 12–13), while similar transhuman characters in Gibson’s fiction often coincide with the dystopian powers of advanced technologies in the service of shadowy global elites (Suoranta 2016, 18). The same schizoid ethos is at work in more commonplace systems incorporating humans and technology, when intelligence becomes disjointed from understanding others as humans. Such systems cannot, however, be fruitfully represented, in fiction or theory, by a personified metaphorical figure such as the android. Therefore I suggest that nondroid captures complexity in a way that android does not – while still retaining the aspect of a technological lifeform.

Schizoid nondroids are especially evident in contemporary speculative fiction that tries to narrativize worlds shaped by technology and capitalism. For instance, in Dave Eggers’s best-selling novel The Circle, the eponymous corporation seeks to eradicate privacy completely in an attempt to achieve a fully connected, transparent society without crime or corruption, so as to create shareholder value and, ultimately, techno-capitalist totalitarianism. In an ironically heavy-handed way, Eggers represents the nature of the company through a parallel between the three heads of the company and three sea creatures brought, in a Muskian fashion via a one-person submarine, from the Mariana Trench to the Circle campus. Reclusive coding prodigy Ty is likened to a
Finally, Tom Stenton, CEO and arch-capitalist, finds his parallel in an immense, blind shark whose digestive process is in full view due to its translucent skin. The three marine lifeforms are placed in a shared tank at the end of the novel, and it is no surprise that the shark immediately goes on to consume both its tank-mates as well as all the seaweed and coral that are supposed to simulate their natural habitat. Onlookers see how all the variety of the environment is turned into uniform gray ash in the shark's intestines. The sequence illustrates all the features I attribute to schizoid nondroid assemblages: they expand so as to become inescapable, without regard to life within their purview; they induce traumatic effects on their constituents and restrict their possibilities of action; and they are dependent on humans to bring their power to fruition. Next, I turn to analyzing the contemporary stage of information capitalism and attempt to show the process through which human constituents within schizoid nondroid assemblages enact the ethos of the system. These tendencies can be located in the arc of The Circle's plot and the development of Mae, the novel's protagonist.
Surveillance Capitalism with a Human Face

From the perspectives of cognitive science, posthumanist theory, and digital technology studies, Hayles, Nayar, and Zuboff, respectively, chart similar waters of networked experience in the contemporary world. But due to their different starting points, their approaches also emphasize its different elements. For Hayles, studying how cognitive assemblages work "provides crucial resources for constructive intervention and systemic transformation," and such study becomes especially important when the applications of cognitive assemblages are being developed in areas like autonomous weapon systems or increasingly sophisticated facial recognition (Hayles 2017, 143). Zuboff, for her part, calls attention to the saturation of bodies with data and the surveillance capitalist prospects of such a process, whereas Nayar notes that a "posthuman condition" emerges when "info-flows are materially produced through a mix of human and nonhuman actors where the possibility of action is embodied as both territory and bodily locations" (Nayar 2014, 66). All three can thus be seen to describe similar technocultural developments, and the spheres of sophisticated cognitive assembling under surveillance capitalism and of networked posthuman existence can be seen to converge. In a schizoid nondroid assemblage, the dynamics the three scholars describe work as follows. Posthuman individuals interface with technology and join it in cognitive assemblages. In the confines of surveillance capitalism, this results in people generating data through their actions, data which is then extracted and exploited without regard to individual well-being or privacy rights (Zuboff 2015, 83). Thus, the assemblage not only incorporates humans, technology, and information capitalists, but also becomes schizoid in its behavior.
In their operation, schizoid nondroid assemblages accelerate the negative effects of traumatic materialism, inherent in the posthuman condition. The synthesis of these views by Hayles, Nayar, and Zuboff reveals how saturation, interfacing, and assembling are increasingly co-opted and commodified by the forces that control information infrastructures. This is especially evident when human cognizers are needed to exert control over other humans with means that are not available to technological cognizers. That is, schizoid nondroids need a human face to mask their schizoid tendencies and to get people to act in ways that are beneficial for the system as a whole, often at the cost of human well-being. Shoshana Zuboff characterizes surveillance capitalism as a new mode of information capitalism and situates its origins in Google's shift from managing search engines to selling the behavioral data that is created through the use of its services (2015, 78–81). As more and more digital activity occurs on surveillance capitalist platforms, the solicitation of ever further engagement becomes increasingly important to the accumulation and refining of data and the models based on it. This also explains the expansive drive of the most successful surveillance capitalist platforms like Facebook and Google: the extension to ever more aspects of everyday experience becomes a prerequisite for further growth. This expansion is clearly visible in Dave Eggers's satirical dystopia. The novel tells the story of twenty-something Mae, who gets a job at the Circle, the most exciting company in the world. She starts out in its customer experience team, rises through the ranks and, in the end, becomes one of the most important spokespeople for the company and its projects. These include striving for a monopoly in online searches, advertising, commerce, surveillance, elections, and, in fact, most aspects of technologically mediated private and public life. In so doing, the Circle harnesses the environmental potential of its digital platforms, highlighting the close relationship between algorithmic systems and contemporary governmental and corporate environments to which Hayles draws attention. For her, computational media has the ability "to address humans in the microtemporal regime, underneath the temporal horizon of consciousness" (2021, 37). An analogous operation can be seen in the novel, as the progress of the Circle's projects is constantly kept at a distance, just beyond Mae's conscious understanding, leading to her crucial lack of critical appraisal with regard to her employer and the digital transformation it seeks to engender. One symbol of the advancement of the company's quest to "COMPLETE THE CIRCLE" (325, emphasis original) is the multiplying number of screens Mae needs to do her job. Starting out with a mere two, one for managing customer requests, the other for intra-company communication, Mae ends up with six different screens and a camera necklace broadcasting whatever she sees to the totality of the connected world.
At the very end of the novel, the reader leaves Mae thinking about the possibility of accessing the thoughts of Annie, her friend now in a coma, through technology, underlining the disappearance of the last vestiges of privacy in the novel's storyworld. In this way, the distinction between online and offline existence loses its significance as digital technologies in league with capitalism strive to engulf the totality of lived experience. The ghost is no longer contained by the machine. Zuboff coins the phrase "Big Other" to signify this shift, thus separating the surveillance capitalist model from those previous systems that were characterized by an Orwellian Big Brother – an idea that has become obsolete as a "totalitarian symbol of centralized command and control" (2015, 82). In contrast to classically panoptic surveillance architectures, or the hierarchical surveillance of the workplace, this is a world where "habitats inside and outside the human body are saturated with data and produce radically distributed opportunities for observation, interpretation, communication, influence, prediction, and ultimately modification of the totality of action" (2015, 82). In surveillance capitalism, there are fewer and fewer ways of escaping Big Other, as we take it wherever we go with devices we are increasingly reliant on. As such, surveillance capitalism restricts what Hayles considers a central feature of cognitive assemblages: the ability to make "choices and decisions that create, modify, and interpret the flow [of information]," thus limiting the power of human cognizers to "direct their powers to act in complex situations" (2017, 116). It becomes increasingly hard to choose whether to remain a part of schizoid nondroid assemblages. These restrictive impulses can be observed in Eggers's novel, as other people begin to enforce the Circle's way of thinking on Mae. As a result, her development becomes a gradual, and ultimately total, acceptance of a schizoid ethos. An important part of this process is the incorporation of her private life into the purview of the schizoid nondroid assemblage. In the passage quoted below, Mae has been summoned by Dan, her supervisor in Customer Experience. She is still new at the company, but already has perfect scores in her key performance indicators. This is covered quickly by Dan, who then goes on to say that maybe Mae has missed some elements of what it means to work at a place like the Circle: "Okay, let's focus on Thursday at five fifteen. We had a gathering [that] was a semi-mandatory welcome party for a group of potential partners. You were off-campus, which really confuses me. It's as if you were fleeing" (177). This is a question about what Mae, as a worker, has done in her free time on a Thursday evening. Dan's confusion is the first hint of a schizoid inability to see Mae as an independent human being, as he has trouble thinking of a good reason for Mae's departure from the company campus. Also note how the Circle is able to keep tabs on its workers at all times. The scene continues: "Mae's mind raced. Why hadn't she gone? Where was she? She didn't know about this event. It was across campus … how had she missed a semi-mandatory event? The notice must have been buried deep in her third screen" (178).
Of course, many companies with an innovative bent tend to emphasize informal gatherings among colleagues, both for team-building and for the occasional impromptu breakthrough. Mae is aware of this, but has an explanation: "God, I'm sorry," she said, remembering now. "At five I left campus to get some aloe at this health shop in San Vincenzo. My dad asked for this particular kind …" (178). Mae's father suffers from multiple sclerosis, and she often helps him and her mother. Dan goes on to lecture Mae about the superior shopping (and other) facilities at the campus, but then begins an inquiry about the following evening:

"And Friday night? There was a major event then, too."
"I'm sorry. I wanted to go to the party, but I had to run home. My dad had a seizure and it ended up being minor, but I didn't know that until I got home." (178)

Again, Mae's first reaction is to apologize and to frame her very reasonable decision as slightly misguided: the seizure was not anything big after all. Dan's reaction is seemingly sympathetic but still accusatory:

"That's very understandable. To spend time with your parents, believe me, I think that is very, very cool. I just want to emphasize the community aspect of this job. We see this workplace as a community, and every person who works here is part of that community. …
"Listen. It totally makes sense you'd want to spend time with your parents. They're your parents! It's totally honorable of you. Like I said: very, very cool. I'm just saying we like you a lot, too, and want to know you better. To that end, I wonder if you'd be willing to stay a few extra minutes, to talk to Josiah and Denise. … They'd love to just extend the conversation we're having, and go a bit deeper. Does that sound good?" (179, emphases original)

Dan is not fazed by Mae's admission of what happened that Friday night, a potentially severe medical event in her immediate family. In fact, he dismisses its significance for their conversation and goes on to emphasize the Circle's point of view and, in fact, its needs. While being with one's family is, in quite an understatement, "very, very cool," the Circle should be taken into account as a community that deserves the same dedication as one's close relatives. To act otherwise suggests that one does not belong and cannot fulfill the needs of the company, a prerequisite for working there. Finally, the question about continuing the discussion with HR personnel at the end of the passage is really not a question at all. There is only one way Mae can answer, and as a result the scene goes on, at length, with a very thorough interview about Mae's antisocial behavior.
It turns out that she failed to share her weekend activities of watching basketball and kayaking with the Circlers on the company's social media site and has not posted about her father's condition once. Even the marine life she observed during her kayaking trip was recorded in a paper notebook rather than posted on the network, a practice which Josiah emphatically protests against, not wanting to "call it selfish but – " (188). As these examples indicate, the managers at the Circle have internalized the models of behavior that the schizoid nondroid expects of them, and they exert those expectations onto others like Mae without much regard for their personal circumstances. The narrative and values of the nondroid become the narrative and values of the people within its networks. For them, it becomes increasingly difficult to circulate "information, interpretations, and meanings" and to have a say in the "distributed agency" of cognitive assemblages (Hayles 2021, 39). As the schizoid nondroid assemblage expands, this exertion of values, and the nondroid as a whole, become increasingly hard to control. The Circle is filled with off-hand mentions of how the FCC, the EU, environmental groups, privacy activists, and the like try to curtail its power, but in vain. The fact that the references to such measures are mere side notes in the narration focalized through Mae shows that within the schizoid nondroid assemblage many of the negative effects it has on the real world remain unseen or appear insignificant. For Mae, specifically, Annie's work in pacifying various political opponents is overshadowed by the drama of her interpersonal relationships at the company as well as by the excitement about its projects to change the world. Mae's perspective is satirized throughout the novel as she repeatedly falls into a pattern of first trying to protect her own privacy and personal identity, but then expecting others to relinquish such rights in the service of the company's ambitions. The results of this dynamic are painstakingly depicted, as Mae becomes the Circle's most fervent advocate, despite the consequences to her family, loved ones, and society in general. The Circle uses its vast surveillance capitalist power and the human cognizers in its networks to "become all-seeing, all-knowing," so that "[a]ll that happens will be known" (71). When Mae is confronted by Mercer, her ex-boyfriend, a King Crimson fan (130), and an anti-Circle entrepreneur, he tells her about having been offered a gadget to scan all the bar codes in his home so as to automatically replenish his products. The following exchange sums up the stealthy schizoid dynamic:

"You know how they framed it to me? It's the usual utopian vision. … I mean, like everything you guys are pushing, it sounds perfect, sounds progressive, but it carries with it more control, more central tracking of everything we do."
"Mercer, the Circle is a group of people like me. Are you saying that somehow we're all in a room somewhere, watching you, planning world domination?"
"No. First of all, I know it's all people like you. And that's what's so scary. Individually you don't know what you're doing collectively." (260–1, emphases original)
As we have seen, in cognitive assemblages, decisions over the creation, modification, and interpretation of information are central for arriving at meaning, but a schizoid nondroid like the Circle tries to limit the decisions made by its human cognizers. Schizoid nondroids are indifferent to the needs of their human cognizers and use some of them to extend that indifference to others. Zuboff has called attention to the same process, emphasizing that companies like Facebook do not treat the people in their networks as users or even as the product. Rather, they are the raw material from which behavioral data is extracted and refined for sale (Zuboff 2019). Similarly, the workers at the Circle are reduced to fueling the system without a clear view of its total function and effects. The Circle's insidious feedback loops, which incorporate technological networks, the logic of hegemonic surveillance, and the human cognizers in their sphere of influence, consistently mask the significance of the accumulating choices that ultimately lead to a neo-Orwellian dystopia. In sum, Mae starts out by accepting the challenging working conditions, then the even more challenging norms of digital social activity, and later, having internalized the modus operandi of the schizoid nondroid, questions any and all invocations of privacy rights. She even ends up coining a trio of slogans for the Circle's project, "SECRETS ARE LIES … SHARING IS CARING … PRIVACY IS THEFT" (305, emphases original), and starts recording her every waking hour by "going transparent" to share "all she saw and could offer to the world" (306). In this way, the dissemination of the schizoid logic onto human cognizers leads to the dismantling of the free exchange of meaning. As a result of Mae working in concert with the schizoid nondroid network, "America becomes a totalitarian state without anyone noticing" (Pignagnoli 2018, 187).
Traumatic Materialism 2.0

This capitalist-totalitarian expansion of the schizoid nondroid logic can be seen to correspond to what Nayar terms "traumatic materialism," the way in which networked posthuman reality marks individual bodies as interfaces that produce a "distributed subjectivity." This subjectivity is central to the flow of information that is produced by "a mix of human and non-human actors" (Nayar 2014, 66), and is reminiscent of Hayles's conceptualization of cognitive assemblages. Nayar argues that this intersection of the material and immaterial can traumatize the individual body, and he uses Cayce Pollard, the protagonist of William Gibson's Pattern Recognition (2003), as a central example (Nayar 2014, 66–9).6 Cayce has an almost supernatural ability to decipher the semiotics of brands and predict their performance on the market, even though she suffers from a severe allergy to brand imagery like the Michelin Man (97, 98) or the utterly unimaginative designs of Tommy Hilfiger (17–18).
Nayar argues that Gibson takes "the body as the site of [the] intersection of material and immaterial but shows the body as traumatised by this intersection" (2011, 55–6). He locates Gibson's critique of capitalism, as it "thrives primarily on the spread, control and collection" of data, in Pattern Recognition's instances of traumatic materialism. Through them, Gibson critiques the use of data in capitalism and highlights how the circulation of data requires bodies to cognitively register, channel, and assimilate it (Nayar 2011, 60). Nayar's conceptualization of traumatic materialism can be usefully developed beyond the discussion of Gibson's novel and applied to the depiction of schizoid nondroid assemblages in other works of fiction and, by extension, to the experience of life under surveillance capitalism. In his reading of Pattern Recognition, Nayar finds three distinct intersections "of data and flesh" in the novel. First, there is Cayce's sensitivity to brands, discussed above, "a materialization of information" through Cayce's body (Nayar 2014, 66). Second, Magda, a viral marketer, influences consumer behavior at bars and clubs through word-of-mouth, and her listeners disseminate the information further; here data is transmitted through bodies to other bodies with a conspicuous efficiency that results from human-to-human interactions. Third, Nora, the artist behind the mysterious "footage" (4) around which the plot revolves, turns out to be paralyzed due to a piece of shrapnel lodged in her brain, communicating only through the editing of video. As the narrator puts it: "Only the wound, speaking wordlessly in the dark" (305). Nora's case is the most dramatic of the intersections and the clearest instance of an originating traumatic event that is translated into media data and disseminated online. However, Nayar leaves unanalyzed a fourth instance of traumatic materialism in Gibson's novel, one that seems useful to me in bringing traumatic materialism to bear on understanding experientiality in the cognitive assemblages of surveillance capitalism as potentially traumatic to the human bodies within them. In my reading, a key instance of traumatic materialism in Pattern Recognition is quite mundane: Cayce's engagement with "Fetish:Footage:Forum" (3, F:F:F for short), a message board for footage enthusiasts. There, competing schools of "footageheads" (4) argue over origin theories, pore over poststructuralist interpretations, and make friends as in any nerdy Usenet group or Reddit community. For globetrotter Cayce, the forum is as "familiar as a friend's living room" (3), "a way now, approximately, of being at home," and "a familiar café … somehow outside of geography and beyond time zones" (4). This commonplace intersection of bodies and data veers toward trauma when the forum interactions become co-opted by the machineries of surveillance and capitalism. Bigend, the novel's villainous advertisement magnate, uses Cayce to find the origin of the footage, but through the forum she also becomes a person of interest for the security operatives of a Russian oligarch.
In the midst of such conflicting agendas, F:F:F is revealed to have been infiltrated by paid lurkers who follow the discussions and profile users. As Cayce emerges as one of the most interesting commentators, she is also interpreted as possibly linked to an intelligence operation directed against the oligarch Volkov. As a result, the Russians break into the records of Cayce's therapist, and Dorotea, her professional rival, manages to exploit her allergy to brands, among other attempted assaults. Even without the full arsenal of a surveillance capitalist internet, the intersection of bodies and data puts Cayce in danger as the F:F:F forum is exploited by factions each bent on protecting their own interests. In stark contrast to the Circle's schizoid ignorance of people's individuality, the shadowy Bigend and Volkov are depicted as surprisingly amiable. Cayce's troubles and turmoils are presented as failures of their systems of surveillance, as the attention she garners is mostly due to misunderstanding and outdated Cold War protocols. The two apologize for this, which shows that individual well-being is something they at times consider, despite their otherwise ruthless capitalist and criminal methods. The example of F:F:F, even with its effects on Cayce, shows that traumatic materialism need not be as dramatically trauma-inducing as some of Nayar's examples would indicate. Instead, it encompasses the various unexpected results that ubiquitous connectivity can have in the contemporary moment. A real-world parallel to the online-born danger in Gibson's novel could be the sadly common contemporary doxing operations against feminists, journalists, and celebrities that are propagated in the depths of forums like 4chan, as well as coordinated Twitter harassment. However, the most important implication of the concept of traumatic materialism is not that interfacing with technology is automatically traumatizing, or that it is a physical, violent penetration of the body by technology – as the cyberpunk tradition often has it – but that the quality and severity of trauma are dependent on the uses that networked technologies are put to. This means that in cognitive assemblages with human and technological cognizers, it is not one or the other that necessarily induces trauma, but rather that the details of their interfacing affect the results, some of which can be traumatic in nature. With this additional aspect, Nayar's concept becomes more useful: traumatic materialism is what can result from the intersection of bodies and data when the technologies that enable this intersection in the first place are not used in ways that prioritize the well-being of the bodies in such assemblages. Next, I argue that it is in the surveillance capitalist context that the probability and severity of traumatic materialism are increased. In other words, schizoid nondroid assemblages are especially apt to inflict trauma on the bodies in their expanding sphere of influence. Malka Older's debut novel Infomocracy explores progressive political alternatives to nation-state-mandated representational democracy and offers a less-than-dystopian view of surveillance capitalism.7
In the novel's imagined future, an organization called "Information" (15) manages a global network, striving to adhere to ideals of transparency, accountability, and informed decision-making. Nation states have largely been abolished and replaced with a global "microdemocratic" (22) system, where "centenals" (13), groups of a hundred thousand, elect their governments from a variety of possible parties. Some are conglomerates of corporations like "Liberty" (16) or "PhilipMorris" (33), while others like "Earth1st" (166) and "YouGov" (50) are centered around shared policies, and yet others like "ChouKawaii" (123) are composed of locally popular fringe groups. The overall tone of the novel is more positive than that of The Circle. Clearly, Information as an organization wields powers similar to the Circle's, being able to maintain a global internet and administer democratic institutions with it, but the difference seems to lie in the overall goals of the surveillance capitalist masters. Older understands the workings of large organizations, UN agencies, and internet infrastructure too well to simplify Information into a group of wise, benevolent rulers administering a perfect system, but the way the system is primed to strive toward the sharing of information and communication aligns it with the more ideal cognitive assemblages Hayles describes, rather than with the schizoid nondroid – even when the same tools that enable surveillance capitalism are in use. The title of the novel suggests as much: the rule of information is here, and hence knowledge remains power, but is there a way to distribute it more evenly and in a democratic fashion? Therein lies the central conflict in Older's novel, as Information's dependence on the free movement of information, rather than hegemonic control over it, leaves it vulnerable to attempts at exploitation and to the potential for traumatic materialism that resides in cognitive assemblages. In fact, Older shows how traumatic materialism can go beyond the experience of an individual and spread to larger demographic populations through the multitude of interfaced bodies in cognitive assemblages. A major driver of the techno-thriller plot in Infomocracy is the near-subliminal plan of Liberty, one of the corporate conglomerate parties, to influence the latent impulses of several nationalistically oriented populations and thus to swing the election in its favor and win the global "Supermajority" (21) of centenals. The operation is designed to appeal to certain demographic and regional cohorts, while staying off the radar of Information and the other parties. Liberty targets specific areas where tense borders and fantasies of national supremacy have a long history – for example, Aceh, Taiwan, Cyprus, and Okinawa – and caters its message to the age groups who have a personal connection to those discourses. In this way, Liberty taps into generational trauma by means of data dissemination, thus extending the traumatic materialism of bodies and data over vast and diverse populations. In the novel, this also carries with it the potential of war, something that the microdemocratic transformation has largely abolished in Older's fictional world. Infomocracy was published before the revelations about the role of social networks in the 2016 presidential elections in the US and the Brexit vote in the UK.
In such instances, exactly the kinds of techniques Older describes in her fiction have been deployed with the technologies that surveillance capitalism currently commands, namely, by gathering and using behavioral data to modify future behavior. In such cases, the potential for traumatic materialism inherent in cognitive assemblages has been, and will be, exploited in ways that follow the schizoid nondroid ethos, in which goals and needs that do not align with those of the nondroid are ignored. I have attempted to expand Nayar's concept of posthuman traumatic materialism to account for these mundane applications of networked technology that surveillance capitalism and the resulting schizoid nondroid assemblages are built on. My analysis of The Circle and Infomocracy may help us understand traumatic materialism as a potential that resides in cognitive assemblages. That potential can then be activated in various ways: employing seemingly neutral or even benevolent networks to inflict harm; restricting the movement of information to control people within the assemblage; and manipulating the dissemination of data to influence behavior, even over large populations. Each application engenders traumatizing effects of different kinds.
Conclusion: De-Schizoing the Schizoid?

The King Crimson stanza from "Epitaph" ends with the pessimistic couplet "The fate of all mankind I see / Is in the hands of fools." In the anxious atmosphere of the late 1960s, Sinfield's speaker places no trust in the helmsmen of society to set boundaries for the deadly potential of knowledge. While nuclear catastrophe has not wiped out civilization, the 21st-century schizoids, made into flesh, circuitry, and clouds, still compete for the control of information to take the world toward their careless ends. Indeed, nearly all of the fictional applications of surveillance capitalist technology depicted in The Circle have become commercially available since the novel was published in 2013. While their totalizing, dystopian effects are arguably not as dramatic as in Eggers's novel, concerns over the control of digital infrastructures and their harmful effects on the individuals dependent on them remain. Over a matter of days in February 2019, both The Verge and Reuters reported on the poor well-being of outsourced Facebook content moderators, resulting from continued exposure to the worst the internet has to offer (Newton 2019; Vengattil and Dave 2019). Such news adds to the widely reported instances of disinformation, far-right discourses, and toxicity that mark the cognitive assemblages of today, with grave results for marginalized populations, as has been commonplace for capitalism throughout its history. I have defined schizoid nondroids as cognitive assemblages harnessed by the forces and motives of surveillance capitalism. Within them, possibilities for the free dissemination of information diminish and the potential for traumatic materialism is activated. As in The Circle, the schizoid nondroids need human cognizers within their power to internalize and spread their schizoid ethos.
For the schizoid nondroids of our time, it is important not to be revealed as indifferent to the consequences of their actions on people and communities. Thus far, however, regulation, boycotts, and other forms of resistance have not had a significant effect on their functioning. The benefits of these assemblages continue to be seen to outweigh their negative effects, and the convenience of the many trumps the threatened rights of the few. The implicit narrative is also largely defined by the nondroids themselves: let us do as we please, trust us when we say that we care, and things will turn out for the best. Thus, uncovering and critically examining the intersections of bodies and technologies must be seen as a starting point for amassing the gravitas needed to reverse the schizoid nondroids' influence, so that we can start making our nondroids more humane and avoid some of the dangers of the twenty-first century. Seeing schizoid nondroids as both an abstraction of our contemporary surveillance capitalist moment and a literary interpretation of those complexities is one avenue of this project of critical detection. As such, it is also one answer to Hayles's call for ways of talking "about the enmeshments of humans, nonhuman others, and our computational symbionts without obliterating important distinctions and yet also acknowledging commonalities" (2021, 43). While fiction, speculative or otherwise, is not the most effective vehicle for engendering technical knowledge or political and societal change, its unique characteristics as a cognitive environment with which to think about the complexities of experience within cognitive assemblages can create affordances for imagination beyond resignation. As Hayles notes in Unthought, the potential for "constructive intervention and systemic transformation" of cognitive assemblages lies at the intersections of bodies and technology (2017, 143). Thus, the choices we now make in developing or resisting different kinds of cognitive assemblages "will have extensive implications … for the kinds of future we fashion for ourselves and other cognitive entities with whom we share the planet" (Hayles 2017, 132). The themes and techniques of speculative fiction, as I have tried to demonstrate by analyzing novels like The Circle and Infomocracy, can be seen as the beginnings of these interventions, through which avenues of resistance against the schizoid logic might emerge.
Notes
1 The schizoid nondroid is ostensibly an update on Hayles's schizoid android, a figure she uses to discuss the complexities of cybernetics through Philip K. Dick's fiction in How We Became Posthuman (1999), and whose central characteristic is its inability to understand others as humans. To me, the schizoids of Sinfield and Dick correspond to Hayles's and my use of "schizoid," while, for example, Janelle Monáe's "schizo running wild" in "Come Alive (War of the Roses)" (2009) has a Deleuze-Guattarian subversive potential as her alter ego Cindi Mayweather enacts a violent rejuvenation sparked by the pressures of a dystopian society.
2 Following Zuboff, by surveillance capitalism I mean the system of business models of technology giants like Google, Facebook, Amazon, Apple, and Microsoft, whose core business is amassing behavioral data through their services and selling that data to influence consumer decisions. Surveillance capitalism is thus a historical sub-category of information capitalism as the general economic structure of, for example, Western market economies (however, see Morozov [2019] for a criticism of Zuboff for not paying enough attention to the relationship between capitalism and surveillance capitalism). It should also be noted that information capitalism is necessarily material as well, insightfully exemplified by Levenda and Mahmoudi (2019). For a historical analysis that goes against terminology like postindustrial capitalism, see Beckert (2015, esp. 440).
3 Nayar does not explicitly refer to terms like "extended cognition" from cognitive science, but I adopt the idiom to highlight how Nayar, Zuboff, and Hayles all chart waters that can be fruitfully analyzed through the so-called enactive approach to cognition. For more information, see the special issue of Style (48, no. 3, 2014) on cognitive literary study or Terence Cave's Thinking with Literature: Towards a Cognitive Criticism (2016).
4 The Circle is indeed a satire of, at least, Silicon Valley corporate culture, the privacy/security discourses in the US after 9/11 and the war on terror, as well as of the attention economies of social media. While this affects the poetic techniques Eggers employs, my focus is on the novel's themes, which are similar to much of speculative and dystopian fiction concerned with information technology and capitalism.
5 For a more detailed analysis of transhumanity in The Peripheral, see Suoranta 2016.
6 Mercer's name and his stand against Mae on issues of collectivity and individualism, spiritualism and technofetishism, as well as transparency and privacy align him with the martyr-like Mercer of Dick's Do Androids Dream of Electric Sheep? (1968), whose Sisyphus-like sufferings are felt by his followers through "empathy boxes." Similarly, The Circle's Mercer emerges as an affective center of the novel through his various sufferings (and, ultimately, death), which are caused by Mae's myopic dedication to the Circle.
7 For a discussion of Malka Older's protagonist Mishima as another victim/beneficiary of such interfacing, see Suoranta 2018.
References
Beckert, Sven. 2015. Empire of Cotton: A Global History. New York: Vintage Books.
Cave, Terence. 2016. Thinking with Literature: Towards a Cognitive Criticism. Oxford: Oxford University Press.
Dick, Philip K. 2007. Do Androids Dream of Electric Sheep? In Four Novels of the 1960s. Edited by Jonathan Lethem. New York: Penguin Group.
Dick, Philip K. 2007. Ubik. In Four Novels of the 1960s. Edited by Jonathan Lethem. New York: Penguin Group.
Eggers, Dave. 2013. The Circle. New York: Vintage Books.
Gibson, William. 2003. Pattern Recognition. London: Viking.
Gibson, William. 2014. The Peripheral. New York: Putnam.
Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago and London: University of Chicago Press.
Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago and London: University of Chicago Press.
Hayles, N. Katherine. 2021. "Three Species Challenges: Toward a General Ecology of Cognitive Assemblages." In The Ethos of Digital Environments – Technology, Literary Theory and Philosophy, edited by Susanna Lindberg and Hanna-Riikka Roine, 000–000. Abingdon: Routledge.
Levenda, Anthony M. and Dillon Mahmoudi. 2019. "Silicon Forest and Server Farms: The (Urban) Nature of Digital Capitalism in the Pacific Northwest." Culture Machine 18. https://culturemachine.net/vol-18-the-nature-of-data-centers/silicon-forest-and-server-farms.
Lindberg, Susanna and Hanna-Riikka Roine. 2021. "Introduction: From Solving Mechanical Dilemmas towards Taking Care of Digital Ecology." In The Ethos of Digital Environments – Technology, Literary Theory and Philosophy, edited by Susanna Lindberg and Hanna-Riikka Roine, 000–000. Abingdon: Routledge.
Monáe, Janelle. "Come Alive (War of the Roses)." Recorded 2009. Track 10 on The ArchAndroid. Wondaland Arts Society and Bad Boy Records. Compact disc.
Morozov, Evgeny. 2019. "Capitalism's New Clothes." The Baffler, February 4. https://thebaffler.com/latest/capitalisms-new-clothes-morozov.
Nayar, Pramod K. 2011. "Traumatic Materialism: Info-flows, Bodies and Intersections in William Gibson's Pattern Recognition." Westerly 56, no. 2: 48–61. https://westerlymag.com.au//wp-content/uploads/2016/02/Westerly-56-2.pdf.
Nayar, Pramod K. 2014. Posthumanism. Cambridge: Polity Press.
Newton, Casey. 2019. "The Secret Lives of Facebook Content Moderators in America." The Verge, February 25. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.
Older, Malka. 2016. Infomocracy. New York: Tom Doherty Associates.
Pignagnoli, Virginia. 2018. "Narrative Theory and the Brief and Wondrous Life of Post-Postmodern Fiction." Poetics Today 39, no. 1: 183–199. https://read.dukeupress.edu/poetics-today/article/39/1/183/133532/Narrative-Theory-and-the-Brief-and-Wondrous-Life.
Sinfield, Peter. "Epitaph." By King Crimson. Recorded June–August 1969. Track 3 on In the Court of the Crimson King. Island Records. Vinyl LP.
Suoranta, Esko. 2016. "The Ironic Transhumanity of William Gibson's The Peripheral." Fafnir – The Nordic Journal of Science Fiction and Fantasy Research 3, no. 1: 7–20. http://journal.finfar.org/articles/538.pdf.
Suoranta, Esko. 2018. "Surveillance Capitalism and the Data/Flesh Worker in Malka Older's Infomocracy." Vector 288: 54–60. https://vector-bsfa.com/2018/11/08/surveillance-capitalism-and-the-data-flesh-worker-in-malka-olders-infomocracy/.
Suvin, Darko. 1992. "The Opus: Artifice as Refuge and World View (Introductory Reflection)." In On Philip K. Dick: 40 Articles from Science-Fiction Studies. Edited by R.D. Mullen, Istvan Csicsery-Ronay, Jr, Arthur B. Evans, and Veronica Hollinger, 2–15. Terre Haute & Greencastle: SF-TH Inc.
Vengattil, Munsif and Paresh Dave. 2019. "Some Facebook Content Reviewers in India Complain of Low Pay, High Pressure." Reuters, February 28. https://www.reuters.com/article/us-facebook-content-india-feature-idUSKCN1QH15I.
Zuboff, Shoshana. 2015. "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization." Journal of Information Technology 30: 75–89.
Zuboff, Shoshana. 2019. "The Real Reason Why Facebook and Google Won't Change." Fast Company, March/April. https://www.fastcompany.com/90303274/why-facebook-and-google-wont-change.
Part 3
The Ethos: Entanglement and Delegation
6
The Zombies of the Digital: What Justice Should We Wait For?
Frédéric Neyrat
"Hope is not memory held fast but the return of what has been forgotten."
T. W. Adorno, "On the Final Scene of Faust"
What do we try to give up through the delegation at play in moral machines, for instance driverless cars that have to "choose" between killing passengers or pedestrians, old or young people?1 Is there any aspect of a moral decision that cannot be delegated to a machine, however clever it might be, without destroying the idea of decision itself? To answer these questions, it is necessary to examine what happens when abstractions, "immaterial" acts (like moral decisions), are turned into material, concrete, planned operations that machines – for instance self-driving cars, but also military robots – can take charge of. Yet it is impossible to understand such transformations – from the immaterial realm to the material level – without considering, conversely, their ontological counterpart: the digitalization of the world, the transformation of analogic reality into a system of zeros and ones, a transmutation without which programs helping doctors in their medical diagnoses, or facial recognition systems in airports, would be impossible. In this chapter, I shall argue that this double process – the two-way exchange between the virtual and the actual – is always incomplete and necessarily fails, partially at least. The analogic always resists its digitalization and gives rise to what I call "the zombies of the digital"; conversely, the actualization of virtual entities always represses some potentialities and leads to what I call "the specters of the analog." Haunted by their Haitian predecessors, the zombies of the digital resist the virtualization of the world; dialectically, the specters of the analog await a collective, material body able to support their ontological claim regarding certain abandoned emancipatory projects. Both complain about their discarded mode of being, about their exploitation, about the oppression they endure. Metaphors jamming the dominant ontological – but also, and maybe more than anything, political and economic – operations of transformation, the zombies of the digital and the specters of the analog ask us not to delegate our desire for justice.
Automatic, Autonomous: A Double Bind

Analog specters and digital zombies: how might we use this monstrous theoretical matrix when analyzing moral machines, these de-territorialized superegos hosted in technological shells? One could define moral machines as self-governing systems able to produce the good, or at the least to avoid the worst: that is to say, machines that, stemming from the development of autonomic computing and ambient intelligence, have the capacity and the right to make decisions leading to actions that we can consider morally good, or bad.2 Let us take the example of driverless cars faced with situations in which they would have to choose between whom to kill and whom to spare. My question is: what does it mean, in this situation, for a driverless car to make a moral decision? We could agree that there are two main ways to consider the production of a moral decision. The first is, to use Kant's vocabulary, "heteronomous," meaning to apply a commandment, a tradition, a code, without having to found it or to test its foundations.3 In this philosophical frame, an agent is supposedly moral when she respects and follows the sacred or quasi-sacred text without having the right to question it, except to transgress it. We know how Hannah Arendt challenged this morality while analyzing the Eichmann case, by showing how automatic respect leads to the denial of any moral law (2006, 133–7). The second way to make a moral decision can be described as "autonomous": it requires using reason defined, with Kant, not as the understanding, Verstand, but as the faculty of ends, Vernunft, that is to say the capacity to determine a priori what is good and what is evil. I shall not say more in this chapter on the very well-known distinction between autonomy and heteronomy, but I wanted to remind us of it because I fear that, without this basic clarification, the expression "moral machines" could be misleading. To explain why it would be misleading, let us return to the driverless cars: would we really want cars to autonomously determine the rational grounds of their moral actions? Would we want them to decide that, all things considered, it is better to kill young children than dogs, because dogs suffered unjustly for ages from subjection to humankind and its baseless anthropocentrism? Or that cars should kill human beings because the latter are responsible for climate change, and so the only way to deal with the ongoing ecocide is to exterminate half of the world population, like Thanos does for the entire universe in the film Avengers: Infinity War (Russo and Russo 2018)? Or that it is better to kill young children than old persons because we should respect old people as they are wise? Or that it is impossible for smart cars to decide whether or not they should spare a woman or a man, because such a choice cannot be morally justified on the ground of reason? Real autonomous driverless cars would illustrate a classic sci-fi nightmare, to which I shall shortly return, but before analyzing this nightmare, we could argue that real moral machines would be apocalyptic: we should not want them, and I think that in the end we do not want them.
What we really want are mechanical morals, the mechanical implementation of what we consider morally good. We want automatic superegos, to cite Marcuse: mechanical zombies.4 If we want mechanical zombies, it is because we fear and desire at the same time the autonomization of automatization. We fear this autonomization because it is always the occasion of a humiliation, a "narcissistic wound," à la Freud, stemming from this traumatic experience: seeing machines capable of performing actions we had assumed to be feasible only for us (Freud 1969, 135–44; see also Sloterdijk 2017, 217–36). But we also want this autonomization, because a very old Promethean wish is at play in it: the capacity to create something that would be alive, intelligent enough, promoting us to the rank of divine creators. This contradictory desire leads to the double bind of the autonomization of automatons: "Be autonomous; but obey" is the command we want to give to wannabe smart machines. In other words: "Be autonomous; insofar as you do not really think, choose, or create." In this regard, and to borrow from Freud again, the double bind of the autonomization of automatons could be translated as an unconscious desire, a phantasy that I base on Freud's famous "a child is being beaten," which I turn into the following formula: "a human being is being beaten," hit and defeated by a machine (see Freud 1997, 1–30). To go through this fantasy, that is to say pour traverser le fantasme, as Lacan said, to escape the trap of our own desire, we need to answer the following questions: what do we really (want to) delegate to the moral machines? More precisely, what do we seek to give up through this delegation? Let us answer these questions in the second part of this chapter.
Subjects, Objects, Noojects

Our question is: what do we really delegate to the moral machines? A first answer could be: we delegate nothing more than what we delegate to any form of artificial intelligence, as in the case of programs that help doctors in their medical diagnoses, smart beds in geriatric hospitals that are able to detect whether the patient is still on the bed or has fallen on the floor, facial recognition systems in airports, AI systems that help manage job interviews, etc. What seems to be delegated in all these cases is a cognitive operation: producing a diagnosis, recognizing, analyzing, selecting, paying attention to something. But what do we do when we delegate a cognitive operation? Do human beings then lose a part of their brain, triggering their becoming-zombie? Let us not forget the lesson of German philosopher Gotthard Günther: the cognitive operations I identified above are not essentially human (2008, 205–26). Günther distinguished between two types of machines. The first are classical or "Archimedean-classical" machines, which work through mechanical and moving parts (lever, axle, wheel, propeller) and perform their activities via the movement of their parts: for example, a car, but also, Günther says, the action of rolling something on a tree trunk.
With its articulated limbs, the human body is the prototype of the classical machine. The second type comprises trans-classical or non-Archimedean machines, which work without mechanically moving parts: a trans-classical or, say, cybernetic machine provides information more than work and tends to function like a brain. And like a brain (Günther insisted on this point), a trans-classical machine cannot be reduced to its components: it is not a tool or a system of tools, neither is it the sum of its parts and of its materials – in other words, it is not an object. But one also cannot reduce a trans-classical machine to the human subject who produced it: neither an object nor the imitation of a human subject, a trans-classical machine – or what Günther also calls a "mechanical brain" – reveals a specific ontological domain in its own right, a domain compelling us to rethink the subject/object great divide underpinning the discussions about the relations between human beings and technologies. I said that the production of mechanical brains "reveals" an ontological domain: what does this mean exactly? The mechanical brain can be described as a set of processes of reflection – like memory, attention, projection into the future, and so on – that are not human-specific mental operations. Therefore, it is inexact, or at least incomplete, to describe trans-classical machines as merely the product of a process of externalization, the externalization of human skills. In other words, artificial intelligence is not, or at least not only, the externalization of human interiority: AI rather delimits, reveals, an ontological domain. So, if the construction of trans-classical machines has revealed an independent region of being, this independence does not mean that trans-classical machines replace and ontologically harm human beings, but that we need to rethink subjects (human subjects, for instance), objects, and what I would like to call, instead of using the term trans-classical machines, noojects (noos meaning understanding, or mind) as mutually transcendent entities. Of course, it is possible to merge these independent entities, but fusions are always incomplete: if you merge a process of reflection with an object, you get something like a nooject assembled with a classical machine, but you do not get a subject. What you get then is the proof that "ça pense," that it thinks: in other words, the materialist proof that matter thinks. Or if you identify the human subject with the process of reflection, in a pure humanist gesture, you leave aside the object. Or if, à la Hegel, you imagine the supreme identity between the subject and the object, there is a strong chance that Absolute knowledge will stay apart from this synthesis, in the form of a machine able to compute in a very sovereign manner. Therein lies the effect of Günther's trivalent logic: the dual representation governing our accounts of the relations between human beings and technologies is obsolete; from now on, we need to reckon with three terms, one of them representing what the identification between the two others cannot include (I developed this analysis in Neyrat 2011, 147–78).
In this regard, is there really any problem with delegating cognitive activities to moral machines? Not at all, if we only consider these activities, or more precisely if we reduce a moral choice to a certain kind of cognitive action, insofar as this cognitive activity is not, essentially, a human one. So, thanks to Günther, have we got rid of the fantasy structuring the human–technology relation? Maybe we have forgotten that something deeper is delegated in the case of moral machines. Let us try now, in the third part of this chapter, to answer the second question I raised: what do we seek to give up through the delegation at play in the case of moral machines?
Civilization as a Driverless Car

The first time I saw the Moral Machines website, "a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars," showing "moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians," I was struck by several things: firstly, the limited frame of the choice we are asked to make, the fact that everything seems to occur in a very narrow environment; secondly, the feeling of temporal inevitability: we are at the end of a process and everything is already set up – on the way to hell; thirdly, the either-or structure, only conjuring up dual situations; fourthly, the necessity to act, as if morals necessarily required us to do something. These limitations are not accidental: they completely define the sort of morals at play with these moral machines and their impoverished universe. Let us first analyze the fourth feature I identified: morals understood as action, something to be done, to be realized, implemented, with a tacit ontology according to which something is always better than nothing, action always better than non-action. This very Western conception of morals is homogeneous with the modern imperative that we can formulate thus: "Realize the possible, all the possible." In other words: "Act as if nothing was impossible." As Günther Anders wrote in The Obsolescence of Man:

What can be done must be done … The possible is generally accepted as compulsory and what can be done as what must be done. Today's moral imperatives arise from technology … Not only is it the case that no weapon that has been invented, has not also effectively been produced, but every weapon that has been produced, has been effectively used. Not only is it a rule that what can be done, must be done, but also that what must be done is inevitable. (Anders 2015)

When everything becomes possible, the impossible withers and eventually disappears. Hannah Arendt's meditation on the camps can help us think the modern disappearance of the impossible: "The concentration and extermination camps of totalitarian regimes serve as the laboratories in which the fundamental belief of totalitarianism that everything is possible is being verified," Arendt argues in The Origins of Totalitarianism (1976, 437).
Not "everything is allowed" – which is the motto of nihilism – but "everything is possible," that is to say, there are no moral limits. So, when Arendt argues that, in the camps, "the impossible was made possible" (ibid., 459), the impossible then designates the horror perpetrated in the camps. But I would also argue that, in the camps, what happened was the extermination of the impossible as such: it does not only mean that the impossible was made possible, but that the impossible was made impossible. What kind of impossible? Precisely what Arendt calls "spontaneity," a term that must be understood along with Kant's Critique of Pure Reason, in its transcendental sense, as the faculty of "spontaneously beginning" something that is not determined, as the ability to start a new world in the world, as "an absolute causal spontaneity beginning from itself a series of appearances" (Kant 1998, 484). In the camps, Arendt explains, spontaneity, as transcendental freedom, was destroyed, "for to destroy individuality is to destroy spontaneity, man's power to begin something new out of his own resources, something that cannot be explained on the basis of reactions to environment and events" (1976, 455). To destroy spontaneity is to destroy the "incalculability" and "the unpredictability which springs from the fact that men are creative, that they can bring forward something so new that nobody ever foresaw it" (1976, 458). That is why life in the camps is like "life after death" (ibid., 445); that is why the camps are, Arendt concludes, the place of the "living dead," not first because of moral horrors, but because of the destruction of spontaneity (ibid., 437, 441). The horror of making the living dead possible requires the prior abolition of the inaugural power of the incalculable. Now, let us make a U-turn to our driverless cars, to calculate their metapolitical trajectory and identify the "camp" in which they circulate: do these cars not constitute the perfect metaphor for our civilization? We (post)modern subjects can choose whatever we want – except the possibility of questioning the mode of civilization that leads to the Sixth Extinction. Yet as Schelling wrote in his Stuttgart Lectures of 1810, "He who chooses does not know what he wants and consequently does not really have a will. All choice is the consequence of an unilluminated will" (1994, 204). What Schelling can help us to think is that a choice between killing pedestrians or passengers never questions the situation in which a car is on the verge of taking such actions. However, a real moral act should deal with the possibility of not being trapped in this kind of binary situation, this dead end. Concerning a certain number of technologies, a moral question should take the following form: "Act as if it were not necessary to act." Or, "Act as if it were possible to think twice before realizing the possible." Or, "Act as if it were possible to utter something like: I'd prefer not to." In this Bartlebian configuration, the good is not first the object of a choice between two different options, but is based on the conscious rejection of the evil of which we are capable, an evil that is a part of us.
are capable, an evil that is a part of us. It is not that, as Plato famously argues, “no one wants to commit injustice, but all those who do it, do it involuntarily,” but that only those who really know what evil they could have done, and still could do, are able to do the good (Plato 1979, 88). In the conception of the good I propose, a conception leaning on Schelling’s metaphysics, the good is not severed from evil, but comes from it. Evil is never far away from the good; it is the abyss from which the good comes. A machine able to experience its own abyss, to confront its dark side, its unconscious, to do the good against the background of an unactualized evil, would be a moral machine.
The Zombies of the Digital
But what about what I call the zombies of the digital? What about their moral dimension? Their abyss? Let us go step by step. For the moment, the only zombies we have heard about are the ones that Arendt described, the “living dead” who have lost, in the concentration and extermination camps, the possibility to be incalculable, incommensurable, unpredictable, an unpredictability that is – contrary to Heidegger’s thesis about death – the real possibility of the impossible. (For Heidegger, death is “the possibility of the absolute impossibility of Dasein” [Heidegger 1985, 294]). Am I going to say, with Agamben, that the camp is “the ‘nomos’ of the modern” and even of the postmodern? (1998, 166–80). Definitely yes; but for the following reason: I think that, vis-à-vis a certain number of choices that society and capitalism pretend to offer, we – we the citizens of the global mall, we the anthropocynicals – are not far away from occupying the position of zombies, at least of what we imagine as zombies. Zombies would be those who, believing that they autonomously drive their cars, have to “choose” whether they are going to kill pedestrians crossing at a green light or a red light. Zombies would be what capitalism wants us to be: driverless machines driven by FAANG. Or fans of the episode of Black Mirror entitled “Bandersnatch,” in which viewers make decisions for the main character, the young programmer Stefan Butler, these decisions leading to different stories and different endings. Maybe not that different, all things considered: stuck in a world in which all the alternatives amount to the same thing, Butler learns from his 1984 computer screen that a company from the future, Netflix, controls him. Control by and from the future is not a paranoid idea: it is called data mining. But data mining is not a pure virtual process implemented by driverless computers. Actually, Netflix and other high-tech companies use an army of zombies whose choices underpin their knowledge and their profits: Big Browser does not need to watch us, for we watch and click in its place. I refer here to digital labor, understood as the exploitation of unpaid labor underpinning the creation of content for social media. In his recent book En attendant les robots: Enquête sur le travail du clic (Waiting for the Robots:
Survey on click work), sociologist Antonio Casilli (2019) defines digital labor as what he calls “tâcheronisation,” a term we could translate as piecework or “pieceworkization,” that is to say, the fact of working on one very specific thing and, more precisely, with one finger. Clicking fingers are required for just-in-time applications that provide access to services or products, like Uber or Deliveroo. Paid one or two cents per job, clicking fingers also put labels on images, transcribe short texts, organize information, or record voices for online platforms. Among digital laborers, there are also all those who are not paid at all, that is to say us, all of us, when we watch videos, look at pictures, write short texts, comments, etc., for social networks. Day and night, as Casilli reminds us, we – the users, the digital zombies – select, label, tag, clean the data that the so-called artificial intelligence will harvest. While we should never forget that datafication requires digital labor, it is exactly what we do: we forget that the digitalization of the world implies the production of monsters, people who, like zombies, have a limited form of activity and moreover a reduced use of their body – bad news: Marx’s General Intellect concealed disciplined fingers (see Virno 2007). But do we really know what zombies are capable of? Zombies are obedient; they are good soldiers, as Fela Kuti sings about soldiers in dictatorships:

Zombie no go go, unless you tell am to go
Zombie no go stop, unless you tell am to stop
Zombie no go turn, unless you tell am to turn
Zombie no go think, unless you tell am to think.
Yet obedience and passivity only define one aspect of zombie psychology. As Sarah Lauro explains in The Transatlantic Zombie: Slavery, Rebellion, and Living Death, a zombie is “a two-headed monster,” both dead and alive, both the “incarnation of the slave and the slave-in-revolt” (2015, 30). On the one hand, it is the figure of the dispossessed, the disempowered, the object in the hands of a master – as Haitian poet René Depestre said, “The history of colonization is the process of man’s general zombification” (1971, 20). But on the other hand, the zombie is also the one who, like Jean Zombi, a mulatto warrior famous for his violent actions against white people during the Haitian Revolution, knows that she can sacrifice herself for the revolution because she has nothing to lose, because she is already dead, because she is living “in a kind of living dead human-object state” (ibid., 62; for more on Jean Zombi see Dayan 1995, 36–8). Thus, zombies revolt; they can revolt. They can represent the ideal worker of capitalism and its colonial control, as in White Zombie (Victor Halperin 1932); but they are capable of breaking the machine. To provide a contemporary illustration of how zombies revolt, we can think of what happens in World War Z (Marc Forster 2013), when zombies climb over each other to cross a giant wall that is supposed to protect the living population. The lesson is that zombies do not respect walls, borders,
or the gap between life and death – they spread, they overwhelm: no fences, no gated community can prevent them from destroying everything. They shatter any fantasy of absolute security, any immunitarian approach to politics. They represent, in the end, the ultimate possibility of turning death against those who produce it. What, then, about the digital zombies? For we seem to be so far away from a revolution against digital capitalism. It is true that I did not hear about Yellow Vests or any communal uprising in Silicon Valley. However, Silicon Valley – as Franco Berardi argues (2017) – is everywhere, situated in every computer, in every connected brain, and we cannot know in advance whether or not there will be a zombie insurrection. Moreover, we never know if zombies will be on the side of nationalism or on the side of internationalism, if they will fight for emancipation or for a repressive state, if they will feed the machine or break it. The only thing I know is that the time of the zombies has come. The abyss from which morals come is unleashed; the dark side of the psyche rules human behaviors; the unconscious finds fewer sublimations than actings-out. So, can we do something to prevent nationalism, right-wing populisms, and fascisms from shaping the zombie fight to come? I shall try at least – in the last part of this chapter – to light up the political terrain that digital zombies could share with analog specters in a common fight for justice.
The Specters of the Analog
Structurally, ghosts and zombies are completely opposed. While zombies are both dead and alive, specters are neither dead nor alive. While zombies imply Voodoo techniques meant to simulate death or to make it seem as if the dead person is not dead but still alive, a specter attests to death’s reality: a living dead entity is the denial of death; but a specter is the affirmation of the inescapability of death. A revenant comes back to tell us that someone has died, but that no tomb – symbolically speaking – was made for her: something was forgotten, or repressed. Let us think about the movie Poltergeist (Tobe Hooper 1982): the ghosts manifest themselves because a suburban neighborhood was built on a cemetery that had been secretly displaced. A revenant is an entity that has come back to force human beings to recognize the existence of a wrong. What I call analog specters are revenants that follow from the ontological and political injustice that the processes of materialization, realization and concretization entail in the digital capitalist era. Let us shed some light on these twilight creatures. Leaning on Derrida’s hauntology, cultural critic Mark Fisher wrote about what he calls the “slow cancellation of the future.” By this, he meant that some promises of the past have not been kept: “What haunts is the specter of a world in which all the marvels of communicative technology could be combined with a sense of solidarity much stronger than anything social democracy could muster” (Fisher 2014, 26). In this passage, Fisher refers to the repressed dreams
of the past, the dreams that digital capitalism did not realize – dreams of happiness, of solidarity, dreams about another world, not a world-beyond, but our world as it should have been. Let us focus a moment on the social-ontological function of dreams, following now the crucial analyses that Bernard Stiegler devotes to what he calls “noetic dreams” and the process of “exosomatization” in his book Automatic Society (2016, 65–93). Noetic dreams are those from which one can invent new social, anthropological, “non-inhuman” forms – to use Stiegler’s adjective – commensurate with the world, with each current condition of the world. These dreams project unexpected social forms, individual and collective forms of life able to metabolize – to narrate, to symbolize – the advent of new technologies, new machines, new fluxes of matter and affects. I completely follow Stiegler when he reminds us, with Jonathan Crary, that 24/7 capitalism prevents people from dreaming, as Burroughs had already said in 1969:

America is not so much a nightmare as a non-dream. The American non-dream is precisely a move to wipe the dream out of existence. The dream is a spontaneous happening and therefore dangerous to a control system set up by the non-dreamers. (1974, 102)

I also agree with Stiegler when he explains that the requirement of permanent attention underpinning digital capitalism creates an artificial sphere severed from cosmic rhythms, that is to say, the alternation of day and night, of activity and inactivity (2016, 244–7). Leaning on Stiegler’s analysis, I think it is possible to make a distinction between two kinds of non-realization. Firstly, to return to Mark Fisher and his complaint against the destiny of communicative technology, there is the non-realized understood as the repressed possibilities for emancipation, explaining why, according to Walter Benjamin, history is written by the victors. These possibilities of emancipation and happiness have not disappeared; they haunt the future as what could have been. If we consider the twitterization of communicative technology a cognitive disaster, we need to listen to the ghosts who ask us to maintain the dreams of cognitive blossoming, of political and social liberation, dreams about societies that would not be inhumane: we need to dream these dreams again, with new forms, and to realize them. Secondly, this does not mean that every dream should be realized. If sleep and dream time are defined as moments of de-activation during which actions are inhibited, then society should protect these moments and the separation between reality and that which must be kept in the state of unrealized dreams, of fantasies – of fantasmas, that is to say, ghosts in Spanish. In other words, in order to realize the dreams thanks to which a civilization is possible, we need to make a distinction between what has to be done and what should remain undone. Killing, spreading chaos into the world, realizing all the bad
that we are capable of, turning the death drives into “necropolitics” (to borrow from Achille Mbembe 2003, 11–40), transgressing every prohibition, denying death – for instance in favor of transhumanism – and treating nature as a mere resource: there is a long list of dangerous dreams that should have remained in a dream state. The “blind hopes” that, according to Aeschylus, Prometheus provided to human beings to enable them to stop foreseeing their death should not have been turned into the passion for fire that led to what anthropologist Alain Gras calls our “thermo-industrial civilization,” that is to say a civilization leaning on energy coming from the combustion of fossil fuels (Aeschylus 2009, 327; Gras 2017, 3–29). The World Wide Web should have been a milieu for cosmic individuations, not for capitalist success. What I am trying to say is that there is a bloody war in the kingdom of the revenants, a ghost struggle concerning the right to existence. The right to existence concerns the life of the mind – in its material form – and the salvation of the body – when it is exposed to the drive to the digital that our civilization symptomatically manifests. The right to existence leads to the politicization of the ontological processes of virtualization and actualization. This politicization consists of refusing to consider the abstraction that is at stake in any process of automatization as a mere ontological process, or as a mere economic necessity, as the telos of history condemning a priori any luddism, any refusal of the machine, any desire to build another technological milieu, a technological milieu that would be at the same time a cosmic milieu. Because there is no telos of history, there are contingent decisions, forced bifurcations, binary choices about what has to be abstracted, extracted, exploited, and digitalized. And the ghost struggle is a struggle about the possibility of revealing these contingent decisions and of recalling the promises of the past – to recall them or to forget them and to bury them forever.
Wreck and Hope
What I call in my text the zombies of the digital are potential luddites, potential fighters able to turn their slowness into an embodied manifesto against any accelerationist claim, any drive to progress; but to avoid the nationalist closure, they need to take note of the transhistorical program that the specters of the analog have conceived. Zombies have a marvelous skill: they know how to act and to fight in the present and they know how to resist white power and its accelerationist drive, the abstract, digital power of the white mega-machine that has produced them and despises them; but they would benefit from the memory of the ghosts, the memory that revenants have of the past and of the future – the cancelled futures, those that should have happened. Conversely, the revenants do not know how to embody their melancholic knowledge: they do not constitute any avant-garde, but rather an after-garde; they always arrive too late, and this is their damnation: they are the sweeper-cars of history and they merely try, after the fact, to remedy injustice. What the revenants of the analog need is a
terrain of struggle, a here-and-now: this is what the zombies of the digital provide, in the relentless, repetitive fight that they can engage in against any present power. It would be tempting, after all these metaphors, to finish with a concrete analysis, with a real political program about machines, moral machines, and the kind of morals we need to implement through these machines. But what I have tried to explain is that a questioning of moral machines risks preventing us from engaging in political reflection, a political analysis of the ontological processes of virtualization and actualization. To attempt a political analysis, to understand what is at play in the digitalization of our capacity to make moral decisions, I decided to use a metaphorical language, speaking about zombies and specters. The reason for this is that the metaphorical dimension is a battleground on which noxious abstractions can be fought, that is to say, on which it is possible to oppose abstractions to other abstractions, dreams to other dreams, hopes to other hopes. As we can read on the website of the leftist journal Salvage, “your hope disgusts us”, but the hope I have backed in my article is not the one that, quite rightly, Salvage rejects, the hope to maintain the same ongoing disaster, the hope to avoid the collapse of our civilization while practicing business as usual. The hope I promote pertains to the survivors of the wreck that we call capitalism (be it geo-capitalism or digital capitalism, both are linked anyway), zombies and ghosts, that is to say us, all of us, now, from the past, and from the future: it is the hope that “creates / From its own wreck the thing it contemplates” (Shelley 1959, 300).
Notes
1 See the Moral Machine website (http://moralmachine.mit.edu/), “a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”, which I investigate at length in this chapter.
2 On autonomic computing and ambient intelligence, see Hildebrandt and Rouvroy 2011.
3 On the difference between autonomy and heteronomy, see Kant, Critique of Practical Reason, §8, Theorem 4 (Kant 2002, 48–9).
4 On the automatization of the superego, see Marcuse 1966, 93–4.
References
Aeschylus. 2009. “Prometheus Bound.” In The Complete Aeschylus, Vol. II. Oxford: Oxford University Press.
Agamben, Giorgio. 1998. Homo Sacer: Sovereign Power and Bare Life. Stanford, CA: Stanford University Press.
Anders, Günther. 2015. The Obsolescence of Man, Vol. 2: On the Destruction of Life in the Epoch of the Third Industrial Revolution. Posted by Alias Recluse, January 6, libcom.org, https://libcom.org/files/ObsolescenceofManVol%20IIGunther%20Anders.pdf.
Arendt, Hannah. 1976. The Origins of Totalitarianism. New York: Harcourt Brace Jovanovich.
Arendt, Hannah. 2006 [1963]. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin Books.
Berardi, Franco “Bifo”. 2017. “Franco ‘Bifo’ Berardi on the Global Silicon Valley.” Verso Books / YouTube, https://www.youtube.com/watch?v=sAwpAQxbtRs&t=4s.
Burroughs, William S. 1974 [1969]. The Job: Interviews with William S. Burroughs. New York: Grove Press.
Casilli, Antonio. 2019. En attendant les robots: Enquête sur le travail du clic. Paris: Seuil.
Dayan, Colin Joan. 1995. Haiti, History, and the Gods. Berkeley: University of California Press.
Depestre, René. 1971. Change n°8. Paris: Editions du Seuil.
Fisher, Mark. 2014. Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures. Winchester: Zero Books.
Freud, Sigmund. 1969. “A Difficulty in the Path of Psycho-Analysis.” In The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XVII. London: Hogarth Press.
Freud, Sigmund. 1997. “A Child is Being Beaten.” In On Freud’s “A Child is Being Beaten.” New Haven, CT: Yale University Press.
Gras, Alain. 2017. “The Deadlock of the Thermo-Industrial Civilization.” In Transitioning to a Post-Carbon Society: Degrowth, Austerity and Wellbeing, edited by Ernest Garcia, Mercedes Martinez-Iglesias, and Peadar Kirby, 3–29. Basingstoke: Palgrave Macmillan.
Günther, Gotthard. 2008. “La ‘deuxième’ machine.” In La conscience des machines: Une métaphysique de la cybernétique. Paris: L’Harmattan.
Heidegger, Martin. 1985. Being and Time. Oxford: Basil Blackwell.
Hildebrandt, Mireille and Antoinette Rouvroy, ed. 2011. Law, Human Agency and Autonomic Computing: The Philosophy of Law Meets the Philosophy of Technology. Abingdon: Routledge.
Kant, Immanuel. 1998. Critique of Pure Reason. Cambridge: Cambridge University Press.
Kant, Immanuel. 2002. Critique of Practical Reason. Indianapolis and Cambridge: Hackett Publishing Company, Inc.
Lauro, Sarah. 2015. The Transatlantic Zombie: Slavery, Rebellion, and Living Death. New Brunswick, NJ: Rutgers University Press.
Marcuse, Herbert. 1966 [1955]. Eros and Civilization: A Philosophical Inquiry into Freud. Boston, MA: Beacon Press.
Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15, no. 1: 11–40.
Neyrat, Frédéric. 2011. “Das technologische Unbewußte.” In Die technologische Bedingung: Beiträge zur Beschreibung der technischen Welt, edited by Erich Hörl. Frankfurt/Main: Suhrkamp.
Plato. 1979. Gorgias. Oxford: Oxford University Press.
Russo, Anthony, and Joe Russo, dir. 2018. Avengers: Infinity War. Marvel Studios.
Schelling, Friedrich Wilhelm Joseph von. 1994. “Stuttgart Seminars.” In Idealism and the Endgame of Theory, edited by Thomas Pfau. Albany: State University of New York Press.
Shelley, Percy. 1959. Prometheus Unbound. Seattle: University of Washington Press.
Sloterdijk, Peter. 2017. “Wounded by Machines.” In Not Saved: Essays After Heidegger. London: Polity Press.
Stiegler, Bernard. 2016. Automatic Society: The Future of Work. Cambridge: Polity Press.
Virno, Paolo. 2007. “General Intellect.” Historical Materialism 15, no. 3: 3–8.
7 Just Machines. On Algorithmic Ethos and Justice
Susanna Lindberg
What is the role of algorithmic governance in the organization of an ethical and just world today? In the first part of this article, I shall ask how algorithmic governance relates to ethos in the contemporary world. In the second part, I shall examine, through a concrete example – that of algorithms used in recruitment to jobs and to higher education – whether such algorithms can be just. My approach is neither technological nor sociological but philosophical: I ask how the concepts of ethics and justice change under the pressure of digitalization.
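Before turning to the antique concept of ethos, it may help to fix ideas about what such a “sorting algorithm” minimally amounts to. The following sketch is deliberately crude and purely illustrative – its criteria, weights, and candidate data are all invented – but it exhibits the structure at issue in the second part: candidates are reduced to comparable profiles and ranked against criteria fixed in advance.

```python
# A deliberately crude, purely illustrative sketch of a "sorting algorithm"
# of the kind analyzed below. All criteria, weights, and data are invented.
WEIGHTS = {"grades": 0.5, "experience": 0.3, "test_score": 0.2}

candidates = {
    "A": {"grades": 8.5, "experience": 2, "test_score": 70},
    "B": {"grades": 6.0, "experience": 7, "test_score": 90},
    "C": {"grades": 9.0, "experience": 0, "test_score": 60},
}

def score(profile):
    # Weighted sum over normalized criteria (grades out of 10,
    # experience capped at 10 years, test score out of 100).
    return (WEIGHTS["grades"] * profile["grades"] / 10
            + WEIGHTS["experience"] * min(profile["experience"], 10) / 10
            + WEIGHTS["test_score"] * profile["test_score"] / 100)

ranking = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranking)  # ['B', 'A', 'C']
```

Change the weights and the ranking changes with them: whatever “justice” such a device realizes has been decided before any candidate appears.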
Antique Ethos and Algorithmic Governance
In what follows, ethos does not mean moral philosophy as a part of axiology that recommends given concepts of right and wrong. Ethos is understood in its antique sense, which originally meant accustomed place and, derivatively, custom and habit. Ethos is the character that is proper to a community’s beliefs and customs, but it can also be a person’s moral character, especially that manifested by a character of ancient tragedy or brought forward by a speaker in classical rhetoric. These examples already point to the conflictual nature of ethos. Acting according to custom does not necessarily mean acting in the morally right way, as ancient tragedy shows time and again. In contemporary philosophy, the fundamental interpretation of ethos as accustomed place has been defined by Martin Heidegger in his Humanismusbrief (1967), which examines ethics in the ancient sense of place of habitation and residence. As Jean-Luc Nancy (2001) shows, this leads to thinking ethos as an “originary ethics” that is already ontology (see also Raffoul and Pettigrew 2002). Heidegger explains ethos by retranslating Heraclitus’s fragment no. 119 ethos anthropo daimon, which should, according to him, not be translated as “man’s character is his demon” but as “man’s habitation is for him the open place in which God can make a sign” (1967, 351–3). Instead of explaining this strange translation in detail, I shall simply transpose Heidegger’s sentence into more accessible language. Heidegger says that the human being’s (Da-sein) habitation is the Da, the dwelling place of his/her historical community. The dwelling place is not the
institutions consciously produced by the Dasein but the historical destination, Geschick, that precedes the Dasein immemorially and orients all his/her intentional making and building (cf. Heidegger 1927, § 72–4). Like nomos, it is a historical community’s inarticulate rule, a non-conscious and non-rational articulation of how things are because they have “always” been so. According to Heidegger, action is possible only on this ground: historical action in particular repeats an unheard-of possibility of the historical inheritance that in a sense is already there, but that in another sense has never appeared but signals as if from the future. Heidegger’s particular logic of historicity does not concern us here (on its problems, see Lacoue-Labarthe 2002), but his fundamental definition of ethos as destined place of dwelling remains important for contemporary thought. Richer than Heidegger’s idea of originary ethics is Hegel’s description of ethos in the chapter of Phenomenology of Spirit titled Ethical world (Sittlichkeit) (Hegel 1807 and its interpretations by Derrida 1981, Lacoue-Labarthe 1986, Lacan 1986, Butler 2000, George 2006). For Hegel, the fullest expression of the ethical world is ancient Greece, whose essence he explains through a reading of Sophocles’ Antigone. For Hegel, the stakes of this play are the conflict between the ethical world and the political world. The ethical world is the nocturnal underworldly region of women who, like Antigone, care for the family. The women’s emblematic ethical work is the burial of the family’s dead. The political world is the daytime world of men who, like King Creon, care for the public sphere, law and state. Ultimately, the men’s political work is war. In the play Antigone, the two laws – the nocturnal ethical law and the diurnal political law, or the family law and the state law, or the women’s law and the men’s law – fall into contradiction, collide, and destroy the feminine law in the figure of Antigone (but Creon is crushed, too, and the feminine, as the “eternal irony of the community”, will not die for good but re-emerges time and again and tears the state apart from the inside). For Hegel, this is the fundamental contradiction of the Greek world. The most important part of Hegel’s analysis is the double role of Antigone, who stands for the ethos (Sitte). On the one hand, she follows the ethos as the “nocturnal law” commanding her to do the woman’s work (burying her brother); the nocturnal law is not a written law instituted by humans but ordained by the gods. It is an unconscious “heart’s law” that does not result from reflection but from the pressure of the “gods of the underworld”. As a non-conscious immemorial law, the ethos resembles the destination described by Heidegger in Humanismusbrief, except that ethos imposes the love of family and not the destiny of a people. On the other hand, the ethos of Antigone is also her character, the principle of action that makes her a tragic heroine who rises against the tyrannical state law, becoming conscious of the laws both of the family and of the state as well as of their contradiction. Hegel’s description of ethos is richer than Heidegger’s because Heidegger lacks the description of ethical action, whereas Hegel
underlines the tragic action that is simultaneously saintly and culpable. This is also why Heidegger’s description of ethos is fundamental-ontological while Hegel’s description of ethos focuses on the undecidable conflict between right and wrong. This is how philosophical tradition sees ethos: as the immemorial nonconscious law that orients people’s work and action in conformity with their community, albeit not necessarily in conformity with anything like enlightened reason. To what extent could one say that the ethos of the contemporary world reflects what Éric Sadin has called “algorithmic life” (2015, see also 2009 and 2013) or what Antoinette Rouvroy and Thomas Berns, working in a Foucauldian framework, call “algorithmic governmentality” (2010; see also Costa 2016, 43–65; Rieder 2020, 10)? Should one say that the ethos of the contemporary world is no longer governed by immemorial habits commanding the love of family, kin and people, but by algorithms and other digital dispositifs? Or that today even kinship is mediated by algorithms? This question arises because of the extraordinary proliferation of different algorithms that manage the social space. Note that “algorithmic governance” is not the same thing as just any use of digital technologies, although the two are of course intertwined. The recent global COVID-19 pandemic has confirmed the salutary contribution of digital technologies to social life: they have permitted work, study, social contact, and even medical aid in confinement. It was particularly delightful to see how quickly people learned to use these technologies imaginatively. But reorganizing social interaction by means of digital technologies is not the same thing as being administered by automatic processes: this is what the term “algorithmic governance” refers to. It is no secret that algorithms are being used more and more extensively in the management of very different areas of society. Everybody has noticed how commerce and marketing have gone online, and so has banking: much of the stock exchange is entrusted to algorithmic trading and an increasing amount of private banking is not only online but also automatized (loans, investment counseling, et cetera). Media is not only published on the net; more and more media, and even art content, are being produced by bots, moderated by other bots, and suggested by algorithms to potential clients. Even politics has moved into the digital space, where politicians are busy creating profiles designed to correspond to the voter profiles that survey algorithms claim to have discovered. Algorithmic governance affects individuals’ spheres of action, for they adapt to their digital environment not only by using its services but also by adapting their professional and private digital identities to its demands. Most of us have noticed that, while our digital avatars enable certain actions, they also limit others, especially by enclosing us in the so-called “filter bubbles” constructed by the algorithms of our phones’ and personal computers’ search engines, which tend to inform us only about the things that we are already familiar with (Ken Liu’s short story “The Perfect Match” [2016] depicts this aptly). Today the power
of algorithms over our lives is not limited to our intentional activity in the digital world but extends over the entire place that we are assigned in society. There are algorithms that determine whether a client should be able to obtain an insurance policy, and at what price; what kind of health services she is entitled to; how much tax she should pay; what kind of education she deserves and where she can be recruited. In certain countries, there are also algorithms that define with whom the citizen can communicate online, that track her in the public space, and that can even determine what punishment she deserves if she commits a crime (or just might commit one). This is how algorithms furnish and manage much of the social space; and if we want to live in this society, we are not free to refuse its algorithmic setting. By and by, these technological conditions have arranged a new kind of social space characterized by what Rouvroy and Berns call statistical governance. It does not control what is real but it structures what is possible, and at the same time it tends to suppress divergent virtualities (Rouvroy and Berns 2009). Even if some areas of this governance are designed by public powers (Alston 2019), much larger areas, and its ultimate technological architecture, are created by big companies that function along the principles of what Shoshana Zuboff calls surveillance capitalism (2019), which “aims to predict and modify human behavior as means to produce revenue and market control” (Zuboff 2015, 75) and actually “thrives on unexpected and illegible mechanics of extraction and control that exile persons from their own behavior” (ibid., 85). However, as Rouvroy and Berns point out, the world is not run by a huge self-conscious mega-AI, such as HAL of 2001: A Space Odyssey or the Singularity foretold by Kurzweil (2005), which would play the role of a huge “world brain.” Even though more and more algorithms are run by so-called artificial intelligence, AI is not a thinking mind but only a set of complex machine learning programs. Through them, the world is managed by innumerable large and small algorithmic systems that are more like fragments of the nerves of the contemporary social body. Everything and everybody is being monitored all the time, not by somebody or some consciousness, but by innumerable impersonal automatic mechanisms that constitute what Dominique Quessada calls sousveillance (“subveillance” in contrast to traditional surveillance: 2010, 56). Now, when the sociological space is thus increasingly managed by an algorithmic setting, do the algorithms not thereby frame the individual’s ethical situation and the limits of his or her ethical action?
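The filter-bubble effect mentioned above can be rendered as a minimal sketch (the item names and the scoring rule are invented; real recommender systems are far more elaborate): an algorithm that ranks content by similarity to past clicks closes a feedback loop, since every click further weights the categories already clicked.

```python
# Minimal, invented illustration of a filter-bubble feedback loop.
from collections import Counter

CATALOG = {
    "gardening tips": "hobby",
    "local politics": "politics",
    "celebrity news": "entertainment",
    "stock picks": "finance",
    "crypto guide": "finance",
    "market rumors": "finance",
    "tax loopholes": "finance",
}

def recommend(history, k=2):
    # Rank items by how often their category was already clicked.
    tastes = Counter(CATALOG[item] for item in history)
    ranked = sorted(CATALOG, key=lambda item: tastes[CATALOG[item]], reverse=True)
    return [item for item in ranked if item not in history][:k]

history = ["stock picks"]           # one initial click...
for _ in range(3):
    suggestions = recommend(history)
    print(suggestions)
    history.append(suggestions[0])  # ...and the user clicks the top suggestion
# Round after round, "finance" items outrank everything else: the user is
# informed only about the things she is already familiar with.
```

No censor is needed here: mere optimization for predicted interest suffices to narrow the horizon.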
Frameworks of Ethical Action
On what grounds could one say that in the contemporary world, algorithmic governance affects and even overdetermines the ethos? The rapprochement is motivated by certain structural similarities between the classical notion of the ethos and what could be called social algorithms. Let us see first what makes algorithms and ethos alike, and then what tells them apart.
Algorithmic governance plays the same role as the classical ethos for two principal reasons: firstly, both configure the social space; secondly, both do so in a nonconscious way. Firstly, like the classical ethos, social algorithms frame what can be done (what is good, useful, permitted, right), and thus they configure the possibilities that make up the social space. The classical ethos dictated what had to be done and by whom. Antigone had to bury her brother; this is what a woman and a sister had to do; by the same token, as a woman, she could not engage in politics and war. This is how ethos dictated her possibilities and impossibilities. Contemporary social algorithms do not qualify actions as morally right, but by organizing and distributing different possibilities of doing things, they orient practical life nonetheless. They can be used to determine whether you are the sort of person who can obtain a bank loan, better health care, a place in an institution of higher education. They can select the persons who obtain various positions, such as a service, a job or an education. These algorithms are praised for being more sophisticated than the traditional ethos because they can select persons on the basis of their individual capacities and not on the basis of crude particular features like sex or race (e.g. Harari 2018 [chapter Philosophical car]) – although in practice it has turned out that, instead of diminishing discrimination, they can actually reinforce it (O’Neil 2016).¹ This is how both the ethos and the social algorithms provide possibilities of social life that have the imperative force of mores: they could be otherwise and they could change, but as long as they are valid, they determine society very strongly. Both ethos and certain social algorithms are meant to manage a fair and even a just social space. Of course, what “justice” means varies. For Antigone, it would have been just that she could bury her two brothers equitably. In a welfare state, an income declaration algorithm is just if it taxes everybody according to the common law, and a school admission algorithm is just if it finds a fitting place for every youth in society. In a capitalist system, a banking algorithm is just if it maximizes the bank’s gains and a recruitment algorithm is just if it finds the best worker for the company; and what happens to those whom the algorithm judges to be bad investments is irrelevant. In all these cases, justice is understood only in the sense of the good functioning of society and not in the sense of the democratic moment of discussing the law or the sovereign moment of making it. Both ethos and social algorithms realize justice only by executing the law that is already there: they support the bureaucrat’s task of governance. Social algorithms contribute to the administration and management of a just society, and as their use becomes widespread, they will soon be its essential supports. Accordingly, many authorities have recently felt the need to draw up ethical guidelines specifically for the use of AI. (Reports have been published by the EU and by some of its member states as well as by the OECD and G20. Among them, the French mission directed by the mathematician Cédric Villani, For a Meaningful Artificial Intelligence, towards a French and
European Strategy is no doubt the best, because it is well-informed and farsighted).² Below, we will take the example of algorithms that are expected to distribute places in society fairly (algorithms of recruitment and selection for higher education), but we could just as well look at algorithms that are expected to distribute services fairly (for example, estimate whether someone should get a bank loan, an insurance policy, or healthcare). Algorithms are being used in such areas of society because they are in principle capable of comparing large numbers of candidates to complicated sets of criteria very quickly. Moreover, they are supposed to be incorruptible and unbiased. However, as the abovementioned ethical guidelines also underline, these expectations are justified only in part. Firstly, as already noted, it has turned out that recruitment algorithms can actually increase racial, sexual and class biases instead of eliminating them. Secondly, facing the algorithm, a person who is excluded from the distributed positions (for good or for bad reasons) really has no way of getting around it: one cannot negotiate with an algorithm; one either fits the case or one does not (O’Neil 2016). Thirdly, an algorithm sees statistical profiles, not persons (Rouvroy and Berns 2010, 92), and hence its verdicts can turn out to be existentially frustrating and even unjust. The second reason for comparing algorithmic governance and ethos is that they organize the social space in the same way. They do not appear as positive laws to be discussed but as prevailing rules that are nonconscious and nonetheless incontestable. Ethos is the “divine law” that is not discussed by Antigone but simply followed by her. The social algorithm is obviously not an ethical law dictated by “gods,” but a program coded by someone following rules formulated by someone else. But as O’Neil and Zuboff in particular have shown, the person whom they select cannot question this rule any more than Antigone could: she is simply sorted and managed by them. In other words, ethos commands life with the force of the unconscious, and social algorithms command it with the force of the unthought, to put it in the words of N. Katherine Hayles (2017). Social algorithms function in this way because, being technical constructs, they function as what Bernard Stiegler has called epiphylogenetic memory (the collective technological memory, as distinct from the epigenetic individual memory and the phylogenetic social memory, see Stiegler 1994, 185–6). An epiphylogenetic memory is deposited in technical objects, where it carries a collective memory of how the natural world functions; it can very well function unconsciously, for the individual members of the collective can know how to use a technical object without understanding why it functions. The idea of an epiphylogenetic memory is a critical development of Heidegger’s idea of Geschick (the destination that articulates the ethos of an epoch) and of Ge-stell (which is the specific destination of the epoch of technique, see Heidegger 1954). But unlike Geschick, which is generally defined as being spiritual and national, and even unlike Ge-stell, which is hardly anything more than its photographic negative, contemporary technics is material, and thanks to this it passes ethnic and ethical boundaries easily.
This is why it functions so well as the material support of globalization. While a “spiritual destination” sketches a unitary sense, contemporary digital technologies are multiple systems that do not operate globalization as the unification of a people but as the dispersion of isolated individuals (at most connected as a community of the confined). Now, digital technologies are epiphylogenetic memories that consist of data stocks and algorithms that remember definite operations. People use them and rely on them without really thinking about them, and this connects them as a supranational technological community that has a lot of common rules but that does not think of itself as anything like a people. Stiegler’s theory of technics is also a critical continuation of Foucault’s theory of power dispositifs, and they both point to the same ambiguity (Foucault 1971, 1975; Stiegler 2008). On the one hand, no human community functions without a technological context, and each technology is also a stimulus to create new works and techniques. On the other hand, both material technics and power dispositifs format people to certain forms of life and by doing so they hide other possibilities of freedom and creativity. Both the power dispositifs and the technical hypomnemata are nonconscious structures that orient human lives mostly in an imperceptible way. Hence, it is difficult to make them visible as such and submit them to direct epistemic or political critique. In this sense, algorithms function in the same way as the immemorial laws of the ethos. Most of the time, they orient life and thought collectively and unconsciously; it is possible to make them visible in part, but this is difficult and can hardly be done completely. Unlike the classical ethos, technical dispositifs do not address human types or classes but individuals; this is why they tend to make collective action impossible. Having examined the analogies between ethos and social algorithms, let us note their differences. The essential difference between them is the possibility of acting against a given situation, and this difference is based on the different temporalities of the two systems. Ethos and algorithmic governance have different relations to the past. Both function as nonconscious memories that organize the social space, but the origins of these memories are different. The ethos has no definite origin, but it is a habit that has “always” been there and that may be justified by a myth; for instance, Antigone attributes it to the nocturnal law of the infernal gods. The ethos is valid simply because people keep reproducing it, but they can also stop doing so. The social algorithm, on the contrary, most definitely has an origin: a society or a company has set its aims, a team of programmers has constructed it, and then it simply realizes the program. Most of the time, the ethos and the social algorithm function without us questioning them. When they are interrogated, this happens differently in the two cases. The ethos, symbolized by “gods,” is open to interpretations, re-interpretations, and rebellion, like Antigone’s rebellion against the city law.
Every religion has its theological debates, every custom can be changed by a new fashion. As hermeneutic philosophers like Heidegger and Gadamer have long demonstrated, the collective past is a plastic dimension that people – every individual – reflect on: it formats them (in Bildung) but it can also be transformed by them (in interpretation). Although an algorithm can be updated or changed for a new one, if the society or the company so decides, the design of the algorithm is not open to the persons that it administers. As illustrated by O’Neil and Zuboff in particular, the algorithms feed on data concerning persons that is sometimes provided by the persons themselves but more and more often harvested by different mining programs unbeknownst to them. The data concerning persons is accumulated automatically, following principles of selection chosen by the companies and not decided by people themselves. The programs are constructed by specialists who may or may not be competent in human evaluation. Both the data and the programs are most often inaccessible black boxes to the persons evaluated and selected by them, which is the main problem pointed out in all the ethical guidelines, which underline that automatic systems should be transparent and auditable to be socially acceptable (Villani 2018, 113) but fail to state how this could be realized. An algorithm cannot be changed by those who are submitted to its power, and it is no use trying to reason with it. In principle, one can always step out of the system (abstain from using Facebook or refuse to use online banking) – but in this way one soon finds oneself isolated from society. Or one can adopt hacker morals and find ways to twist given algorithms – but this is a possibility open to very few; moreover, this approach does not amount to criticizing given systems fundamentally but only to parasitizing them, using them otherwise. In sum, one can revolt, refuse, say no to the ethos but not really to an algorithm (Rouvroy and Berns 2010, 95). These differences are rooted in the different types of future projected by the ethos and the social algorithm. A machine’s temporality is fundamentally different from existential time. By necessity, technical objects function in a linear time, in which past events determine future events causally. This is obvious in the case of a simple object, in which a moral norm is inscribed so that it is automatically realized each time the object is used – this would be the case of a seat belt (Latour 1996) or of metro doors and airplane autopilots (de Mul 2009) – although it would be an exaggeration to claim that they have moral dignity in themselves (Latour 2002, 254). Algorithms are much more complicated since they are not closed objects but operations that can generate choices and decisions. A traditional computer program is a hand-written rule that states what conclusions can be drawn from individual cases. Insofar as the rule has been written by a programmer, it can also be explained by a human being. New machine learning techniques are different since they write their own rules by finding regularities in the available data. This is why they are increasingly capable of what Yuk Hui distinguishes from mechanical repetition by calling it recursivity, which is “characterized by the looping movement of returning to itself in order to determine itself,
while every movement is open to contingency, which in turn determines its singularity” (2019, 4; see more precisely 124–9). In this case, insofar as the rule has been produced by the system itself, it is much more difficult, if not impossible, to explain its operation to a human observer. Explaining the process does not resemble discovering a fixed rule but following the course of a life that is too rich and evolves too quickly to be reproduced by the human observer (which is why the system was established in the first place). This is why the “black box effect” is much greater in the latter case. “Black box is a term used to describe algorithms completely opaque to their users” (Hui 2019, 38): a black box occurs when people see the input and the output but they do not see what happens in between (Villani 2018, 114–18). Although some researchers claim that at least this recursivity makes AI resemble the human mind insofar as both are now capable of new and unexpected results, I would claim that AI nonetheless functions in linear time because it builds on past possibilities. Hence, it is essentially different from existential time, which develops by encountering impossibilities. Existential future opens when something impossible happens and necessitates the invention of a different future (as we shall see more closely below). This kind of unpredictability is characteristic of existential time, which is precisely openness to unexpected chances and the strokes of destiny. This is why, as Frédéric Neyrat says,

The blind imagination of the societies of clairvoyance [arranged by algorithms] consists in not seeing what is impossible, what is irremediably obscure, what is lacking, not seeing the multiple temporalities, rhythms, becomings, the ontological out-of-jointedness out of which worlds are born and die. It is like an imagination contesting itself, an imagination that serves functionality so blindly that it cannot bear the absence that is the origin of images. Poor and sad clairvoyance … (2010, 111, tr. S.L.)

In sum, the difference between ethos and algorithmic governance is rooted in their different temporalities. Now, the difference between the two types of social setting – one constituted by the ethos, the other constituted by algorithms – is a conceptual difference that is meant to help us understand, by contrast, the algorithmic reality that is being built everywhere around us. But this is hardly a real difference, for our ethos is already penetrated and expressed through algorithms; the two already go together. In order to tease out real differences, we should study the kind of action that is possible in a social setting. It seems that, by nature, ethical action cannot be programmed, and this is precisely why it has to be just by itself. Can algorithmic governance be just?
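Before taking up that question, the contrast drawn above between a hand-written rule and a rule that the machine writes for itself can be made concrete with a toy sketch (the admission data and the scenario are invented). With a single feature the learned rule is still legible; with thousands of features and recursive self-adjustment it becomes the “black box” just described: input and output are visible, what happens in between is not.

```python
# Toy contrast between a hand-written rule and a learned one.
# The admission data and threshold are invented for illustration.

def handwritten_rule(grade: float) -> bool:
    # Written by a programmer, so a human can also explain it:
    # "admit anyone whose grade is at least 8.0".
    return grade >= 8.0

def learn_rule(examples):
    # "Write its own rule" by finding a regularity in past data:
    # pick the grade threshold that best reproduces past admissions.
    thresholds = sorted({grade for grade, _ in examples})
    def errors(t):
        return sum((grade >= t) != admitted for grade, admitted in examples)
    return min(thresholds, key=errors)

past = [(5.2, False), (6.9, False), (7.4, True), (8.1, True), (9.0, True)]
print(learn_rule(past))  # 7.4 - a regularity of the archive, not a chosen norm
```

The learned threshold answers to nothing but the archive of past cases it was trained on – the linearity of machine time in miniature.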
Just Machines
Can justice be programmed in an algorithm? Traditionally one would think that justice needs some mechanisms but that it cannot be reduced to a
mechanism: justice is not a machine. The relation between justice and mechanics has been studied in particular by Jacques Derrida in Force de loi (1994), Préjugés – devant la loi (1985), his Death Penalty lectures (2012; 2015), and other texts. Especially in Force de loi he clarifies it by formulating a paradox of justice. On the one hand, the law should function like a machine, because it is a rule to be followed and a consequence to be calculated: it would be just that the decisions of justice followed from the law quasi mechanically, equally and impartially. On the other hand, justice cannot be just a machine, because a machine is incapable of understanding the case, therefore of knowing when to show clemency, severity, or mercy, and therefore of really carrying the weight of the juridical decision and the responsibility that should go with it. Justice itself cannot be a machine, for it is the incalculable singular decision that cannot avoid being an “instance of folly” that the law cannot legitimate. Law needs to be just, because otherwise it is just violence, and justice needs to be lawful, because otherwise it is tyrannical, but justice cannot be reduced to law. Justice is the incalculable, necessarily violent decision to subsume such and such a singular case under the universal law although it is never exactly fitting. For Derrida, this is also why law is constructible and deconstructible while justice is nondeconstructible. Besides, if a blindly mechanical juridical machine appears cruel, is its cruelty limited to the instruments of justice (that inventions like the guillotine can presumably make more humane), or is cruelty intrinsic to the human will that exercises justice?³ Now, can an algorithm overcome the paradox of justice? Thanks to ever-developing programs, it is technically more and more possible to entrust decisions of justice to algorithmic systems. In what follows, I will reflect on algorithmic justice in two concrete cases, not the solemn ones of crime and death, but the everyday situations of admission to higher education and recruitment to jobs. Today, these tasks are increasingly attributed to what, in honor of the Hogwarts Sorting Hat, I call “sorting algorithms”. These are profiling algorithms that resemble those used in advertising but have a different aim, for they do not try to sell things but to find the best candidates for higher education and open positions. They participate in the task of realizing a certain justice (or fairness) in the sense that they aim at distributing the positions in a society in the best possible way. This is why they invite us to rethink the relation between justice and machines. One example is the recruitment algorithms that have been used for a while already by private companies especially in the USA, and that are mordantly described by Cathy O’Neil in Weapons of Math Destruction. Another example is a public service, the Parcoursup algorithm, which has operated admission to higher education in France since 2018⁴ (although similar algorithms are used elsewhere, too).⁵ Both systems are supposed to increase fairness in the job market or in access to higher education, for in principle a machine has no reason to make biased choices. As we have already noted, it has turned out that the recruitment algorithms can actually increase racial,
sexual and social biases instead of diminishing them (O’Neil 2016, Villani 2018, 116–17). Parcoursup has been accused of favoring candidates with privileged backgrounds, too. Although recruitment algorithms are used by private companies and university admission algorithms by public powers, both have a similar structure and both have been criticized for being, precisely, unjust – and not, for instance, inaccurate or inefficient. The reasons for their injustice have been attributed to similar factors. Firstly, the selection algorithms may be based on bad programs, the programmers not being qualified to recognize a good candidate or not being given good enough criteria to do so. Secondly, even a good program may give bad results if it operates on irrelevant or faulty data – which is less the case when candidates smooth their net profiles in order to win over recruiters (as described by Cousserand-Blin and Pinède 2018) and more the case when data concerning persons is harvested by third-party private companies (of the Cambridge Analytica type) and then sold to recruitment companies (Zuboff 2015). Thirdly, the algorithms are inflexible like law, but unlike law, they are not transparent for their users, who cannot point out possible biases, influence the selection criteria or prepare to present themselves in the best light.⁶ The algorithm is a “black box” that sorts people on the basis of a secret, pre-established, nonnegotiable and sometimes questionable set of givens (as explained in Villani 2018, 114–19). These flaws do not result (only) from poor design and implementation of the algorithms. As the Villani report says, they are partly inherent in the black box character of the machine-learning technology itself. But we can also show, in principle, why even the best algorithmic justice could not meet the demands of moral sense: just machines are inevitably also unjust machines because they are just machines. Let us examine three philosophical problems underlying the “sorting algorithms.” Firstly, if recruitment and school admission algorithms hurt precisely the sense of justice of the people they sort, what kind of justice is it? The choice of the best candidate is neither a matter of criminal justice (as in the cases analyzed by Derrida) nor a matter of distributive justice (as in Rawls’s Theory of Justice) but a matter of the kind of justice examined in Plato’s Republic, which aims to assign everybody the role that befits him/her best in the city. As is well known, The Republic is fundamentally a vast thought experiment in which Socrates makes the hypothesis that if a just city were to be realized, it would no doubt be a city in which people were happy and content with their lives. People would be content if they lived the kind of life that suited them best: those whose souls were governed by wisdom would be fit to be guardian rulers, those whose souls were governed by courage would be fit to be guardian warriors, and those whose only virtue was temperance would actually be happy with ordinary working life compensated by some family life and private property. The “sorting algorithm” profiles people in order to fit the right person to the right place. In this sense, it takes on the function of the Platonic educator, who seeks the right
“soul” for the task at hand. In principle, a recruitment algorithm aims mainly at the company’s flourishing while the higher education admission algorithm hopefully also aims at the candidate’s personal flourishing – but both try to fit individuals into communities. When people feel that the algorithm has sorted them unjustly, they generally feel that they have not been recognized for who they really are. This is what the hypothetical Platonic educator was supposed to find out: he or she was to examine the soul of the candidate in order to see if it was governed by wisdom, courage or temperance. An algorithm cannot penetrate a “soul” because it can only collect and organize available digital data (cf. Neyrat 2010, 105; Rouvroy and Berns 2010, 92). Of course, this data can be more or less pertinent. It can reside in school records and in work history. It can also reflect the candidate’s personality in the form of data that comes from her use of social media, entertainment and news, consumption, health records, and myriads of other bits and pieces of information that she has scattered around the Net. Profiling companies use data-mining programs to harvest this and then to sell it on to other companies, such as recruitment companies or political campaigns. They claim to know people better than people know themselves, as if the companies’ programs were mechanical psychoanalysts of the digital era that can see right into our unconscious desires. However, as anybody looking at the ads one receives online knows, the accuracy of such evaluations is questionable. Fundamentally, the sorting algorithms are not bad judges of the soul because of bad programs or insufficient data, but because they can only judge the digital traces of a person and not her “soul.” By the quaint Platonic word “soul” I do not mean anything like a spiritual or a religious substance, but only what a person truly is: it is the answer to the question: who are you? Firstly, who a person is certainly results from her conscious and unconscious past experiences; they do not depend only on her but reflect the world in which she lives and the people whom she has encountered. But secondly, who a person is also constantly changes when she more or less freely imagines what she could make of this field of virtualities. When it comes to the first aspect of a person, her past experiences, an algorithm can follow the digital traces of many events that the person has come across, but it cannot capture how (digital and nondigital) events that have happened to a person have been met by her and transformed into genuine experiences (experience being a response to an event, not just a mechanical reaction to it). They only chart the digital territory in which the person lives, most of which consists of unsolicited advertisements, social media messages, and tasks defined by school and work. They know very little about how these signals are actually experienced (or overlooked) by the person. Furthermore, the digital traces of the person reflect only the first of the two aspects of a personality, not the second one: they represent what she has been but say nothing of her capacity for making something of it. The only way to know at least something of the latter is to face the person and ask her. This is not
This is not because she would know the answer or tell it, but because there really is no other way to relate to a person’s freedom. To put it in the Derridean terms that we started with, making a just decision on somebody requires not only cataloguing her deeds but also hearing her and speaking with her (really, instead of letting her converse with a chatbot).

This brings us to the second philosophical problem underlying the sorting algorithms, namely the ambiguous existential consequences of being sorted by inexorable machines. Getting to study something and obtaining a job are major existential events in life, especially when one is just out of school. Any Sartrean existentialist could explain to a youth in five minutes why, although existential freedom is not limited to the choice of orientation, orientation is a true existential choice. Even if, at 18, we did not know what to do with our lives, it was of paramount importance to ask ourselves what we might want and what not – or what we could bear and what not: could we, or could we not, be soldiers, nurses in elderly people’s homes, factory workers, or perhaps prostitutes? As the sociologist Cécile Van de Velde says, Parcoursup “lets a ‘system’ explicitly administer, classify, and order the dreams of a generation. [ … ] In its very procedure [ … ] it lacks fundamental and explicit respect for everybody’s freedom and potential” (2018, tr. S.L.).

Of course, the sorting algorithm may also give solace to a youth who does not know what to do with her life. She does not need to know who she is because the machine has already analyzed her potential. She does not need to ask what to do because the machine will orient her to a reasonably fitting place; and besides, studying this or that, working here or there, will not take from her the liberty of living the public and private life of her own choosing. Are schools and workplaces not alienating disciplinary spaces anyway, so that it is better to exercise one’s existential freedom at leisure? Still, it is not for nothing that magical sorting hats and other sorting machines are such an important anxiogenic element of contemporary teen films (The Giver, Divergent, Gattaca … ): it is obvious that one will practice one’s existential freedom in the very situation provided by one’s more or less alienating place of study and work.

As suggested above, the sorting algorithms are not fundamentally bad existential counselors because of bad programs and insufficient data but because a machine’s temporality is fundamentally different from existential time. Even the most intelligent algorithms can only function in a linear time, in which past events determine future events along a more or less rigid causality. They can calculate very large amounts of information very quickly, but they can only calculate, and calculation means conserving the same truth through successive operations. Machines are essentially conservative because they can only project a future that is a consequence of the past. They operate on the basis of data that comes from the past – a past that exists for them only in the form of data, that is, of traces of the more or less relevant and more or less conscious choices that the candidate has already made.
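The point can be made tangible with a deliberately naive sketch (all data and scale hypothetical, and far simpler than any deployed system): a predictor of this kind is a deterministic function of recorded traces, so it can only extend past regularities forward, and two candidates with identical traces receive identical projections, whatever each might freely make of her past.

import numpy as np

def projected_score(trace: np.ndarray) -> float:
    """Extrapolate the next value of a record by a least-squares fit on the past."""
    years = np.arange(len(trace))
    slope, intercept = np.polyfit(years, trace, deg=1)
    return slope * len(trace) + intercept

a = np.array([6.0, 6.5, 7.0, 7.5])   # candidate A's recorded yearly grades
b = np.array([6.0, 6.5, 7.0, 7.5])   # candidate B: the same digital traces

# Identical traces, identical forecast: the person's free relation to her own
# past leaves no mark whatsoever on the calculation.
assert projected_score(a) == projected_score(b)
print(round(projected_score(a), 2))  # approximately 8.0, the continued trend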
If the human being were only a machine, her past choices would indeed causally determine her future actions. But existential temporality is different, as Heidegger says in Being and Time. According to him, existential time is the time that comes upon me from the future (Zu-kunft); the future illuminates what has been (Gewesenes) in the double light of my mortality and my world’s historicity, and makes me choose, in this very moment (Augenblick), how to repeat, or not, the past possibilities. The past, including the epiphylogenetic past conserved in hypomnemata (Stiegler 1994), is not a simple given. It is an inheritance that can be known or ignored, taken over or rejected, conserved or destroyed, celebrated or abandoned. Existential time is the freedom to relate to one’s past in many different ways (repeating it, modifying it, forgetting it, rejecting it and starting all over again): the machine can only indicate a possible behavior in the future, not what human liberty will actually do. To put it in other words, the machine cannot invent another future, while (human) liberty can. As Derrida puts it in L’invention de l’autre (Derrida 1987), a machine can predict possible futures, but invention is open to the impossible, that is, to whatever appears impossible now in the light of past experience.

The third philosophical problem underlying the sorting algorithms pertains to the very nature of these algorithms as technological constructs. I have already mentioned some of their essential features. One is the data, which is the true resource – raw matter and fuel – of the sorting algorithms. Profiling algorithms feed on enormous data archives: a person’s own search history, comparisons with other people’s web records, a person’s data history, the tendencies of entire populations discovered through vast samples of data mining. Never has so much information been available. However, a lot of information does not mean sufficient relevant information; it only means the information that happens to be available in that archive. As Derrida has shown more precisely in Mal d’Archive (1995), an archive is never a simple record of the past, but a construct that reflects available recording technologies and prevalent principles of selection. An archive is also a changing entity in which information is sedimented, lost, reorganized, and transformed constantly, and not necessarily knowingly. In order to find a good candidate on the basis of available archives, one ought to have records on relevant domains. For instance, do school reports really show who would be a good police officer or nurse? And does the history of web searches in a given situation of life really tell us what web searches the same person would perform in another situation, and does either of these histories really tell us much about the personal qualities of the person, rather than simply about the educational, social and economic situation she is in? It has been shown that recruitment algorithms tend to place people in socioeconomic situations similar to those they have previously occupied. This is not only because their logic is conservative but also because they ultimately profile the situation rather than the person.

The other essential feature of an algorithm is of course the program itself.
A computer program does not think: it is not really an artificial intelligence but merely a machine-learning system that learns what it has been programmed to learn. It is no more perspicacious than its programmer; it simply executes its task more quickly. One of the most thought-provoking features of the different profiling algorithms used in selection and recruitment is the jealously guarded secrecy of the algorithms themselves. The objects of profiling cannot know how, when, or in what respect they are being profiled. In theory, Parcoursup profiles students mainly on the basis of their previous school reports: this is at least relevant, although it does not appear just if a child’s mediocre performance at school blocks her way to the university, where the same person, grown into an adult, might show high ambitions and unexpected capacities. But certain recruitment programs rely on other factors that are based on traces in social media, entertainment use of the internet, health monitoring device records, shopping, and so on. Why such things would be relevant to a job is hardly more than a programmer’s guess. Besides, seeking such information is not respectful of the candidate’s privacy. It is difficult to see why people should not be asked to prepare themselves for the task, instead of being traced, unawares, by an ominous Big Brother (in Parcoursup, admittedly, the candidate also writes a short motivation letter – though it is guaranteed neither that the letter is really written by the candidate herself nor that it is read by the selection committee).

The profiling algorithms are inevitably technological black boxes because their programs are not known; but especially in the cases we are studying, they are also juridical and even democratic black boxes, because people cannot really know why they have been selected or rejected; they cannot prepare themselves, defend themselves, contest the selection criteria or debate them in public. Thus, they appear contrary to the spirit of the ethical guidelines previously mentioned. For example, the French Data Protection Act of 1978 states that no decision (involving legal consequences) can be taken solely on the basis of automatic processing of personal data, and that the individual has the right to know and to challenge this information and the logic underlying the automatic processing (Villani 2018, 124); both the Villani report and the EU guidelines recommend that the data sets and processes behind decisions concerning persons should be traceable, explainable and auditable, and that end users should not be subject to decisions based solely on automatic processing (Villani 2018; Madiega 2019).

For all these reasons, what was expected to be a just machine is, after all, just a machine. As Bernard Stiegler has shown, all technical systems have their pharmakon effects, which can also debilitate the specific skills that they are expected to supplement. The “sorting machines” are expected to help in the construction of a more just society in which people get the places of study and work that are fit for them. If the machines fall short of this task – and in many cases, on the contrary, increase the feeling of not finding one’s place in society – this is once again not because the algorithms are not sophisticated enough or because they are not directed to the task of finding the right place for everyone.
This happens when “real humans” withdraw behind the sorting algorithms and give them a role that they are essentially incapable of fulfilling. We have seen why an algorithm cannot really evaluate a candidate: it cannot really encounter her. It can compare her background data to that of other candidates, but it cannot encounter the candidate as an instance of becoming and individuation that is imaginative and free. Fundamentally, the algorithms are incapable of justice because they are totally incommensurable with the fundamental requisite of justice, namely public space. This might sound surprising, since these algorithms are expected to connect people with each other and with institutions and to establish a new kind of community, a digital community in which people get in touch with one another more quickly and more extensively than ever before. Surely this is sometimes the case in the private use of social networks. But in the case of the sorting algorithms, the digital connection is a vast space of comparisons established mainly in order to avoid the trouble of building a community. People connect to it as isolated individuals, not as individuals who face each other in the same space and therefore contribute to one another’s individuation. The connection they make is thin because the algorithm itself is secret and not subject to public discussion. Justice needs a public space in which one can speak and be heard in front of others, and therefore be treated justly or unjustly, but always in relation to justice. The algorithm does not present a person as a person but only as a statistical function.

What is the role of algorithmic governance in the realization of an ethical and just society today? Assigning tasks pertaining to ethics and justice to machines is an old fantasy but a very recent reality. Now that machine-learning systems (AI) are becoming capable of collecting enormous amounts of information on persons and of comparing it to given sets of rules and criteria, it is tempting to relieve humans not only of the trouble of getting to know other persons’ thoughts and actions, but especially of the weight of the decision to accept or reject another’s demand (the case we have been studying here), not to speak of the even greater weight of the decision whether or not to punish her (in the case of juridical institutions). Such decisions are always potential sources of doubt and guilt, and this is why the idea of lessening this weight by means of a supposedly impartial algorithm is so tempting. I hope to have shown that algorithms for the evaluation of persons cannot be just, not because of technical flaws but because of their very nature. This is also why I hope to have shown that recourse to these algorithms should be limited and controlled, so that every decision that seeks to be just is always based on encounters and decisions of real humans. The ultimate test of ethics is tragedy; the ultimate test of justice is culpability: whoever tries to hide tragedy and culpability behind an algorithm ends up hiding ethics and justice themselves.
Notes
1 The observation is widely spread; see, for example, Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women”. Business News, October 10, 2018. Miranda Bogen, “All the Ways Hiring Algorithms Can Introduce Bias”. Harvard Business Review, May 6, 2019.
2 The European Parliament Research Service has produced a report, EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation, published in September 2019 and containing links to analogous Finnish, German, and British reports (Madiega 2019). Promoting “human-centric AI that is respectful of European values and principles,” the report lists good principles but underestimates risks and falls short of telling how the good principles could be implemented. The same shortcomings weigh on the G20 “AI principles” (http://k1.caict.ac.cn/yjts/qqzkgz/zksl/201906/P020190610727837364163.pdf). A much better text is the French Parliament’s mission directed by Cédric Villani, For a Meaningful Artificial Intelligence. Towards a French and European Strategy (Villani 2018).
3 The paradox of a just machine can also be read as an answer to the thought experiment of a moral machine. The most famous moral machine is the MIT online survey (http://moralmachine.mit.edu) on the moral reactions caused by different types of accidents that self-driving cars might provoke, for example: if the car had to choose between two accidents, should it prefer killing two law-abiding elderly persons or three jaywalking children? Obviously this test does not prove the morality of the car’s “choices”, but it exemplifies the emotional reactions of human beings to the different kinds of accidents that self-driving cars might run into. What the thought experiment really shows is the hollowness of the trolley problem on which it is based.
4 As Parcoursup is a brand-new system (started in 2018), its effects have not yet been studied scientifically, but academics have commented on it in newspapers such as Le Monde. See, for example, Mattea Battaglia and Camille Stromboni, “’Le vrai coup de stress’, c’est Parcoursup, plus que le bac” Le Monde, June 15, 2019; Camille Stromboni, “Parcoursup: une deuxième année moins chahutée” Le Monde, July 22, 2019. Sociologist Pierre Merle, “Parcoursup constitue un retour en arrière de deux siècles” Le Monde, June 6, 2018. Professor of economics François Legendre and professor of administration Joan Le Goff, “‘N’accablons pas Parcoursup’”, Le Monde, June 6, 2018. Sociologist Cécile Van de Velde, “Parcoursup’ laisse explicitement un ‘système’ administrer, classer, ordonner les rêves d’une génération” Le Monde, May 30, 2018. The personnel’s point of view can be seen, for instance, in “Parcoursup: mode d’emploi critique”, https://obs-selection.rogueesr.fr/parcoursup-mode-demploi-critique/.
5 See, for example, Times Higher Education reports on universities’ interest in using AI in student admission and the risks thereof: Rachel Pells, “Universities Will Use AI to Select Students, Says Alice Gast” Times Higher Education, February 7, 2018; David Matthews, “Ethicist Warns Universities against Using AI in Admissions” Times Higher Education, September 20, 2019. D.J. Pangburn describes the potential pros and cons in a longer article, “Schools are using software to help pick who gets in. What could go wrong?” FastCompany, May 1, 2019. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong.
6 For example, during the COVID-19 epidemic in 2020, when candidates for the matriculation exam in (among others) Finnish International Baccalaureate Organization (IBO) schools could not take their final exams, their final grades were determined by algorithms based not only on the students’ results but also on forecasts of their school’s performance. The algorithm made big mistakes: several candidates were first admitted directly to their preferred study program at the university (without taking entrance exams) and then, when it was too late to take the entrance exams organized by the university, their grades were lowered so that they were denied the place they had already been promised (Helsingin Sanomat, July 30, 2020). The teachers were unable to explain this to their students because they did not know how the algorithm works. Furthermore, the automatic correction programs of the entrance exams to the Finnish faculties of law and economics made mistakes that cost many candidates their deserved place at the university (Helsingin Sanomat, July 11, 2020). These errors are known because they were noticed, and luckily some of them were corrected, but others may have passed unnoticed during the calamitous distance entrance exams of 2020. Many of them should not have taken place in the first place because, as professor of public law Tomi Voutilainen states, automatic decision processes are forbidden by the Finnish data protection act (Helsingin Sanomat, July 10, 2020). As the IBO case also appears contrary to the GDPR (EU General Data Protection Regulation), at least the Norwegian Data Protection Authority has asked the IBO for an explanation of its procedure. But as automatic decision processes are cheaper than human decision processes, it is unfortunately probable that higher education institutions will be tempted to build better automatic systems rather than hire human examiners.
References
Alston, Philip. 2019. Extreme Poverty and Human Rights. Report to the United Nations General Assembly, distr. October 11 (A/74/493). https://undocs.org/A/74/493.
Butler, Judith. 2000. Antigone’s Claim. Kinship between Life & Death. New York: Columbia University Press.
Costa, Luiz. 2016. Virtuality and Capabilities in a World of Ambient Intelligence. New Challenges to Privacy and Data Protection. Law, Governance and Technology Series. Cham, Switzerland: Springer.
Cousserand-Blin, Isabelle and Nathalie Pinède. 2018. “Digitalisation et recrutement.” Presentation of the special edition of Communication et organisation 53.
Derrida, Jacques. 1981. Glas 2. Que reste-t-il du savoir absolu. Paris: Denoël/Gonthier.
Derrida, Jacques. 1985. “Préjugés – devant la loi.” In La faculté de juger. Paris: Minuit.
Derrida, Jacques. 1987. “Invention de l’autre.” In Psyché. Inventions de l’autre. Paris: Galilée.
Derrida, Jacques. 1994. Force de loi. Paris: Galilée.
Derrida, Jacques. 1995. Mal d’Archive. Paris: Galilée.
Derrida, Jacques. 2015 [2012]. Séminaire La peine de mort, vol. I–II. Paris: Galilée.
Foucault, Michel. 1971. L’ordre du discours. Paris: Gallimard.
Foucault, Michel. 1975. Surveiller et Punir. Paris: Gallimard.
George, Theodore. 2006. Tragedies of Spirit. Tracing Finitude in Hegel’s Phenomenology. Albany, NY: SUNY Press.
Harari, Yuval Noah. 2018. 21 Lessons for the 21st Century. New York: Spiegel & Grau.
Hayles, N. Katherine. 1999. How We Became Posthuman. Virtual Bodies in Cybernetics, Literature and Informatics. Chicago and London: University of Chicago Press.
Hayles, N. Katherine. 2017. Unthought. Chicago and London: University of Chicago Press.
Hegel, Georg Wilhelm Friedrich. 1970 [1807]. Phänomenologie des Geistes. Frankfurt am Main: Suhrkamp. Translated by Terry Pinkard, 2018. The Phenomenology of Spirit. Cambridge: Cambridge University Press.
Heidegger, Martin. 1978 [1967]. “Brief über den ‘Humanismus’.” In Wegmarken. Frankfurt am Main: Vittorio Klostermann. Translated by William McNeill, 1998. “Letter on ‘Humanism’.” In Pathmarks. Cambridge: Cambridge University Press.
Heidegger, Martin. 1984 [1927]. Sein und Zeit. Tübingen: Max Niemeyer. Translated by John Macquarrie and Edward Robinson, 1978. Being and Time. Oxford: Basil Blackwell.
Heidegger, Martin. 1994 [1954]. “Die Frage nach der Technik.” In Vorträge und Aufsätze. Stuttgart: Neske. Translated by William Lovitt, “The Question Concerning Technology.” In The Question Concerning Technology and Other Essays. New York: Harper & Row.
Hui, Yuk. 2019. Recursivity and Contingency. London and New York: Rowman and Littlefield.
Kurzweil, Ray. 2005. The Singularity is Near. When Humans Transcend Biology. New York: Penguin.
Lacan, Jacques. 1986. Le séminaire, Tome 7: L’éthique de la psychanalyse (1959–1960). Paris: Le Seuil.
Lacoue-Labarthe, Philippe. 1986. L’imitation des modernes. Typographies II. Paris: Galilée.
Lacoue-Labarthe, Philippe. 2002. Poétique de l’histoire. Paris: Galilée.
Latour, Bruno. 1996. “Les cornéliens dilemmes d’une ceinture de sécurité.” In Petites leçons de sociologie des sciences. Paris: Le Seuil, 25–32.
Latour, Bruno. 2002. “Morality and Technology: The End of the Means.” Theory, Culture & Society 19, no. 5–6: 247–260.
Liu, Ken. 2016. “The Perfect Match.” In The Paper Menagerie and Other Stories. New York: Saga Press.
Madiega, Tambiama. 2019. EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation. European Parliamentary Research Service PE 640.163.
de Mul, Jos. 2009. “Des machines morales.” Cités 39, no. 3: 27–38.
Nancy, Jean-Luc. 2001. “L’éthique originaire de Heidegger.” In La pensée dérobée. Paris: Galilée.
Neyrat, Frédéric. 2010. “Avant-propos sur les sociétés de clairvoyance.” Multitudes 40, no. 1: 104–111.
Neyrat, Frédéric. 2015. Homo Labyrinthus. Humanisme, antihumanisme, posthumanisme. Paris: Éditions Dehors.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Quessada, Dominique. 2010. “De la sousveillance. La surveillance globale, un nouveau mode de gouvernementalité.” Multitudes 40, no. 1: 54–59.
Raffoul, François, and David Pettigrew. 2002. Heidegger and Practical Philosophy. Albany, NY: SUNY Press.
Rieder, Bernhard. 2020. Engines of Order. A Mechanology of Algorithmic Techniques. Amsterdam: Amsterdam University Press.
Rouvroy, Antoinette and Thomas Berns. 2009. “Détecter et prévenir. De la digitalisation des corps et de la docilité des normes.” https://works.bepress.com/antoinette_rouvroy/30.
Rouvroy, Antoinette and Thomas Berns. 2010. “Le nouveau pouvoir statistique. Ou quand le contrôle s’exerce sur un réel normé, docile et sans événement car constitué de corps ‘numériques’ …” Multitudes 40, no. 1: 88–103.
Sadin, Éric. 2009. Surveillance globale. Paris: Climats.
Sadin, Éric. 2013. L’Humanité augmentée: L’administration numérique du monde. Paris: L’échappée.
Sadin, Éric. 2015. La vie algorithmique: Critique de la raison numérique. Paris: L’échappée.
Stiegler, Bernard. 1994. La technique et le temps 1. La Faute d’Épiméthée. Paris: Galilée.
Stiegler, Bernard. 1996. La technique et le temps 2. La désorientation. Paris: Galilée.
Stiegler, Bernard. 2001. La technique et le temps 3. Le temps du cinéma et la question du mal-être. Paris: Galilée.
Stiegler, Bernard. 2008. Prendre soin de la jeunesse et des générations. Paris: Flammarion.
Villani, Cédric, et al. 2018. For a Meaningful Artificial Intelligence. Towards a French and European Strategy. A mission of the French Parliament, September 8, 2017 to March 8, 2018.
Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30: 75–89.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. London: Profile Books.
8
Automation Between Factuality and Normativity
Marc-Antoine Pencolé
The use of automated machines became worrying when they grew complex and powerful enough to impose their own norms on human activity.1 The inflexibility of a factory system certainly strips its workers of most of the control they could have over their labor: its pace, its form, its meaning and destination. Over the last decades, the problem has arisen under a new guise with the development of digital automata and their increasing capacity to calculate, evaluate, organize and anticipate human activity. Should we see this as an inevitable deepening of the dispossession of our prerogatives as subjects in favor of an extending network of cold and impersonal algorithmic administrators?

Our collective existence is regulated through different means. First – and this was indeed long the main topic of classical political philosophy – laws, decrees and rules formally organize social life and shape congruent subjectivities. Then, another powerful regulator consists in the normativity immanent to any social activity: the bundles of common norms and expectations that determine what counts as a successful and respectful interaction. This normativity explains, for instance, why people spontaneously queue in a certain order in front of the bakery without any law to prescribe it. Finally, technology may also act as a regulator, materializing certain norms within fixed processes, architectures and artifacts.

Technological normativity has not always been a problem – beyond the workplace at least: concern arose when the development of computer science extended the scope and depth of technological regulation over domains where rules were traditionally set by law. The power of artifacts rests on their very factuality: the factual properties of cement and stone determine the capacity of a certain architecture to organize visibility and circulation; the diversity and complexity of software, functions, libraries and databases on one side, and the miniaturization of chips and processors on the other, are what grant digital devices the power to make very adequate (“intelligent”) decisions.
However, as soon as it comes to effects that cannot be reduced to physical phenomena but tend to be assimilated to pseudo-decisions – the kind of behaviors manifested by intelligent subjects – the possibility arises that actual subjects may hand the exercise of their defining faculties over to factual objects, and thus end up being dispossessed of some aspects of their subjectivity. Indeed, a subject is traditionally seen as a being that is – at least partially – able to reflect on itself and act freely, and whose most proper domain is thus that of moral and political deliberations and decisions: delegating decisions about the good life and justice to impersonal machines would mean alienating oneself from one’s very autonomy and responsibility as a subject. We would like here to question the inevitability of such a trade-off between the power offered by digital automata and the preservation of human autonomy and responsibility.
The Normativity of Technical Artifacts

First, we must examine different conceptions of the normativity of technical artifacts to specify the precise conditions of the trade-off. The most common one may be the use theory of norms, according to which the norm is extrinsic to the artifact and stands entirely in the diverse ways that subjects choose to use it. Artifacts would then be no more than neutral instruments. This applies quite well to abstract tools: for instance, a screwdriver may be used as a weapon as well as a repair tool, or for any other purpose we can think of that requires a short and rigid metal rod. It also covers the normativity of indeterminate technological fields, understood as a sum of knowledge beyond any determined application, like informatics in general, which can lead to an indefinite variety of actual devices. Yet it fails to account for the specific normativity of concrete technical individuals upon their environment (Simondon 2017, 63).

An opposite theory, which we could call the substantial theory of norms, claims to account for concrete technical individuals by stating that the norm is intrinsic to the artifact. Indeed, certain technical beings tend to channel, facilitate or forbid certain behaviors, hence imposing their own norms on the actors. For instance, the famous Long Island bridges were designed at the beginning of the 20th century in such a way that only low vehicles could drive under their arches; since cars were still rare and owned only by wealthy residents, the Black and Puerto Rican poor, who travelled mainly in tall public buses, were effectively denied the most convenient access to the island (Winner 1980). Such normative imposition later turned out to be highly context-dependent, revealing the extrinsic conditions of “intrinsic” technical norms: driving habits evolved over the course of the century, and today the rich drivers of bulky campers happen to be the ones impeded from enjoying the seashore (Verbeek 2005, 117).

These two theoretical frames are either too partial or partially contradictory, and any combination of the two will also partially carry their flaws. Their shared presupposition, from which these limitations stem, is the idea that the opposition between the subject and the object is static. The modern concept of the subject posited it as absolutely autonomous and transparent to itself, casting the object as its radical other, some pure exteriority or unreflective stillness, an extended and unthinking thing.
The philosophy of technology (mainly promoted by Ihde 1990 and Verbeek 2005) has recently revived this old critique of an overly naive subject–object divide and proposed a mediation theory of technical factuality and norms. In order to overcome binary reasoning about technology, the mediation theory suggests a dynamic frame, in which a technical process or artifact does not simply stand in front of the subject and her norms but participates in their constitution; it does not simply connect the poles that are intermediated but transforms them. To give a trivial example, phones, video-conference devices, etc. are not mere communication instruments, neutral and passive, since they transform the message itself, and ultimately what it means to communicate. Certain emotions or nuances cannot be carried through a flattened voice message or a frontal visual display with no eye contact. More interestingly, Verbeek showed that the very existence of ultrasound scanners alters what is deemed reasonable, cautious or irresponsible during pregnancy, because having missed some serious pathology affecting the child will now be considered the result of a conscious decision, where before it was but blind fate. The parenthood norms, mediated by the factual power of such a modern detection tool, undergo a real transformation: scanners are not neutral instruments, obviously, but neither do they impose some fixed hardcoded norm onto parents.

Surprisingly, this ambitious move toward a more concrete and dynamic theoretical ground in the philosophy of technology boils down to the refusal of any critical discussion of technical phenomena. Indeed, according to Verbeek, the dependency of the content of the evaluation standards on the mediating activity of the very object being evaluated makes it absurd to pretend to assess a technology that has not yet been widely adopted: it would be tantamount to evaluating a phenomenon from a normative standpoint that we already know we will probably disapprove of afterwards.

Let us then try to substantiate our concept of mediation by digging into the Hegelian tradition. Hegel used the concept of mediation to account for, and extend, the idealist intuition according to which the subject participates in the constitution and the position of her objects. It became, in his philosophy, the general form of reflection, the negative moment of thought, where the immediate distinctions set by the understanding – between the thing and its others, what the thing is not – are blurred and overcome, to eventually lead to a more comprehensive apprehension of the thing. Let us narrow the perspective down from logical generalities to the question of technology: here, the mediation of human activity by technology covers all the manners of effectuation, actualization, or degradation and loss of the self and its activity through the objectivity of tools, know-how and procedures. Human beings exist and reflect themselves through the products of their activity: their subjectivity is given and confirmed to them through the objective matter-of-factness of their products.
This does not simply mean that something, once mediated, is not the same as before – this seems to be the logical observation underlying Verbeek’s argument, and it could be deemed rather obvious and slightly vague. The becoming of the mediated subject may in fact be one of objective realization of inner pretensions, of actualization of virtualities, or on the contrary of negation of oneself, of inability to confirm one’s value in objectivity. The mediated being is not a wholly new and mysterious thing, but remains the same in its difference with itself, as for example an impulse of kindness or envy remains itself once it has been manifested outside, despite the new mode of existence it has acquired. Hence, Hegel offers us a more complex frame in which to describe normative phenomena in the technological realm, as well as to criticize irrationalities or discern emancipatory perspectives.
Mediation and the Effectivity of Norms

The most encompassing exploration of the relations between factuality and normativity is proposed by Hegel not in his few lines on technology, but in his study of the mediation of social relations by law, in which we might nevertheless find precious insights into our topic. According to Hegel, our shared norms of morality and justice require the mediation of the legal system, its codes and institutions, to be effectuated and actualized, to pass into the stuff and structure of society. Indeed, norms would remain mere wishes, inner standards with no objective reality, if they did not somehow shape social relations and weigh on them. Thus, norms become more actual, and morality and justice become more adequate to what they are supposed to be, when they pass into the inert factuality of texts, institutions and established procedures.

On the other side – and here is Hegel’s most decisive argument – these norms are in their turn affected by their passing into factuality, and not merely in the sense that they become more actual. Indeed, as facts, laws necessarily bring their share of contingency and rigidity, in what may appear irrational and contrary to the ideal of justice they were supposed to embody. However, in court, a decision must be made, even if it is partly contingent (Hegel 2008, § 213). The worst that could happen is that no judgement is made and justice is denied, be it because of the inability to qualify a specific act or to practically master the intricacies of the texts and procedures. Therefore the law has to be simple enough to handle well, yet also encompassing enough to cover every possible case: it is thus impossible to legally codify the perfectly moral and just answer to every transgression, since nobody would be able to actually learn it and apply it to concrete cases – and, precisely, actualization of the ideal is the essence of the law. We must then acknowledge that a factual legal system, partially inspired by the norms of justice but also partially contingent and arbitrary, is normatively superior to the pure ideals themselves, because of its very factuality. The factuality of actual law is normative. Thus, norms need to become facts to be proper norms, as much as the facts themselves tend to become norms in the process.
Habermas later developed these insights (1996, 114–118). Norms, as mere subjective demands, are weak, and factual systems may bear their own kind of normativity, because it represents an immense demand for the self to stand alone at the origin of ethical decisions about justice or the good life, to align her activity with these norms, and to recognize herself in them. As an actual and empirical thing, every subject is finite. She thus faces subjective, factual limitations in her will to accomplish her ideals, so that the individual subject, the true bearer of these norms and responsible before them, also appears dramatically separated from them because of three determinations of her finitude.

Firstly, realizing the norm – tearing it away from its abstract ideality – means applying its general prescriptions to concrete cases. Now, the universality of the norm opens a rift between itself and the always too particular cases, a rift that demands some intellectual effort to be crossed: realizing the norm in complex situations requires us to face a huge cognitive indeterminacy. Everybody has experienced the toil of exploring, analyzing and evaluating the moral issues of everyday life: to be sure that what we do is moral, outside trivial situations, we need to collect information and put a lot of thought into it, weighing right and wrong, sometimes even deliberating with others, before being convinced that a certain behavior really is the right course of action. The more persons are involved, and the more extended the situation is, the more difficult it will be for the arbiter to cognitively relate the many singular and interconnected facts to the unity of a simple general ideal. Hence a proper law posits more or less clear borders between, for instance, what qualifies as harmful negligence and what is but an unforeseeable accident. All the work of customs and legislators, sedimented in tradition and positive law, relieves present arbiters of the immense burden of crossing the cognitive gap between the particular and the universal. The counterpart of enabling the judge not to be struck by doubt is, of course, a certain level of contingency in determining how the rules apply.

A second burden consists of the motivational uncertainty moral actors may experience. Knowing what is just is one thing, but willing it is another. Situations often arise in which the right course of action happens to be detrimental to the particular interests of the actor. Thus, we can expect that she sometimes lacks the motivation demanded to eventually choose the general interest over her own. Since the legal system is factual, the subject who is in a position to judge and the individual subjected to the law are both strongly encouraged to act according to the norm – because they can expect a reward or at least avoid harsh punishment. The factuality of an effectuated norm may be seen as a promise once made to oneself to later comply with principles whose general validity has been recognized.

Finally, moral action faces the question of organizational (or accountability) indeterminacy. The subject is traditionally defined by her freedom and thus her responsibility, and yet, beside trivial situations, nothing is more indeterminate than the extension of the domain of what can reasonably be imputed to her.
Assisting the person in immediate danger in front of her is obviously her responsibility, but is it still her duty to go and help in the same way when the person lives in a completely different region of the world, though she knows she could have made a difference had she committed herself to it? Besides a procedure of application and a series of incentives, a legal system also consists of a complex system of agreements about where everybody’s responsibility ends, which makes it practically possible to live under the rule of just norms.

Such an unburdening of the demands of moral action onto the factual power of a legal system can be deemed virtuous as long as the community ruled by its law recognizes itself within it. As soon as the system becomes deeply heteronomous, we may speak of a dispossession of autonomy, which ends up in the position of an inert opposite, an oppressive system of unilaterally imposed norms. Could a powerful technical system, then, not take on through its factuality the same role as positive law in effectuating norms of morality and justice?
Digital Automata and the Effectuation of Norms

Complex technical systems, algorithmic regulations among them, usually appear as impersonal mechanisms tearing away actors’ autonomy and forcing extraneous norms upon them, but this is not an essential property of technological mediation as such – in Hegelian terms we could brand them as insufficiently rational, as an oppressive state would be if ruled by a particular class in its own interest. We may identify two examples of relatively valid digital mediations: the BitTorrent protocol and its sharing communities on one side, and the Wikipedia encyclopedia on the other.

The BitTorrent system is a technical assemblage that allows the decentralized sharing of files on the Internet between peers. A key component is the BitTorrent protocol, which has been instantiated in many different pieces of software. It is a simple set of technical rules designed by Bram Cohen to ensure the effectuation of a certain ideal of sharing: altruistic behavior (hosting files for others to download) had to be encouraged, while free-riding (downloading while never uploading in return) was to be avoided as much as possible, but narrow-bandwidth users also had to be able to access a significant share of the downloads even though they could not contribute as much as broadband peers (Cohen 2003). Another crucial part is that of the platforms, composed of a web search engine – similar to some sort of directory or catalogue – and a tracker, the server that informs a given user’s software about the other uploaders who may send her the requested file. Admittedly, many of these platforms are little more than for-profit organizations pushing in the direction of the basest consumerism, but some of the most prominent institutions of torrent-sharing are built as well-ordered communities: chats and forums are dedicated to mutual help; the imperative to keep hosting the file for others once downloaded is repeated ad nauseam by the users themselves; users have to register, and the tracker keeps count of the total amount of data they transfer, blocking downloaders who do not upload their share; and finally every user participates in the evaluation of the safety and quality of each new file they download (Dagiral and Dauphin 2005).
The cognitive and motivational loads weighing on the subjects and relieved by the protocol and the platforms are easily recognizable. Knowing how fast to download, which users to upload to, and which files to avoid sharing – because they are corrupted or infected with malware – in order to respect the general principles of the community would obviously require hundreds of complex calculations every couple of minutes. What the human mind cannot handle, a network of computers can. Besides, the automatic effect of the protocol (the download-over-upload ratio) and the whole esteem economy enforced throughout the system provide a strong incentive to share and fill the needs of other peers, so that everyone has an interest in being a truly altruistic contributor. The third burden needs to be slightly redefined to fit the specificity of technological mediation, since the scope of the regulation does not reach far enough for imputability problems to arise. Habermas related the problem of accountability to the material cost of being moral: if accountability stretches too far, almost nobody will own the means necessary to accomplish any significant moral action. Here, we can see that the normative advantage of a factual algorithmic system is even more obvious than in the legal domain: it would be materially impossible to share so many cultural works by burning disks and dispatching them through the postal network or any other means. Building huge digital libraries, curating collections and gathering millions of daily sharing peers around them would be inconceivable without digital technologies, while here it requires no more than a few servers and a handful of animators and technicians.

The case of peer-to-peer file sharing is not pure – not every platform will qualify as a virtuous mediation – but it illustrates the fact that such communitarian values of sharing would remain wishful thinking were they not taken in charge by the factual power of a certain device. Even though they are far from perfect, they are at least somewhat effective. Yet not every assemblage is that successful, and the conditions of such a rational mediation must not be neglected. There are two risks of dispossession of the users’ autonomy: the verticality of the organization, whose administrators are formally accountable to nobody – even though one might observe various forms of protest emerging from the community – and the pressure of the culture industry’s capital, which has been trying to shut down the platforms through legal means. Here again, the ambivalent factuality of the system helps us understand why it has held out so long against heteronomous constraints: the cheap infrastructure and the low amount of labor needed to make it operational have allowed it to be repaired or rebuilt elsewhere after every serious blow.
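The division of labor between human norms and automatic enforcement can be made concrete with a deliberately simplified sketch of the reciprocity rule Cohen (2003) describes: a client reserves its upload slots for the peers that have recently served it best, plus one randomly chosen newcomer. (The actual choking algorithm adds rolling rate estimates, snubbing and rotation intervals omitted here; all names and numbers below are illustrative.)

import random

def select_unchoked(download_rates: dict[str, float], slots: int = 3) -> set[str]:
    """Pick which peers to serve, given recent bytes/s received from each peer."""
    ranked = sorted(download_rates, key=download_rates.get, reverse=True)
    unchoked = set(ranked[:slots - 1])          # reciprocate the best uploaders
    newcomers = [p for p in ranked if p not in unchoked]
    if newcomers:                               # one "optimistic" slot lets peers
        unchoked.add(random.choice(newcomers))  # with nothing to offer bootstrap
    return unchoked

peers = {"ann": 120_000.0, "bob": 80_000.0, "kim": 45_000.0, "eve": 0.0}
print(select_unchoked(peers))   # {'ann', 'bob'} plus one randomly chosen other

The design point is that altruism is not presupposed but elicited: under such a rule, generosity toward other peers becomes the individually rational strategy.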
Wikipedia is a collaborative encyclopedia launched in 2001, two years before the invention of the BitTorrent protocol, and is based on a few principles: neutrality in the presentation of different points of view, free access to reading as well as to participation, respectful collaboration, and finally the debatability of every rule. Given these quite general norms, instead of resting solely on spontaneous impulses toward the preservation and improvement of common knowledge among its dozens of millions of users per day, the project developed a complex of technical mediations, ensuring that not everyone was required to be selfless and absolutely dedicated to keep it working. We could demonstrate this through how the platform architecture elicits the distribution of tasks and cooperation (Auray 2009), but let us focus on the more strictly technical part of its inner regulation – the bots (Halfaker and Riedl 2012; Geiger 2017).

Wikipedia hums with task-specific software activity: thousands of bots operate constantly (the majority in the English-speaking sphere), the total production of some amounting to millions of edits. Their tasks, among many others, may be to automatically detect vandals and repair their deeds, or to quickly identify copyright issues; they may participate in data structuration to standardize certain elements in articles, or assist in the labelling and distribution of the remaining work to be done by humans; they also often clean broken code and dead links from pages. Without the help of these tiny automata, substantially contributing to Wikipedia could end up being wearing, given the numerous and complex conformity rules among which one must arbitrate. Bots offer assistance by relieving the users from having to keep in mind all the exact details of every procedure, for example, for altering a hotly debated article or calling for a second opinion about a contested deletion. The motivational assistance consists here in punishing deliberate rule transgressions, like patterns of obvious vandalism, which are automatically detected and end in banishment and the lockout of the affected articles; but again, it also implies a strong esteem economy, in which generous contributors are respected and praised. Finally, the organizational burden lies on one side in the tedious effort and numerous hours spent standardizing the text and code of each article, and on the other in the tremendous material cost such worldwide cooperation would imply if conducted through analogical means; both are spared by the efficiency of automated informational procedures.

Beside a certain social homogeneity of its most active contributors, Wikipedia’s sociotechnical system has achieved a remarkably horizontal, open, and yet robust common institution of knowledge. Admittedly, its global infrastructure is terribly expensive, but the software and all the content produced are freely reproducible, and the project has managed to ensure its independence by regularly calling for donations, thus creating a non-commodified bubble that isolates it from the influence of private property and the market.
Yet another significant condition of its success is certainly that the principle of the debatability of every rule has been instantiated in the algorithmic automata themselves: of course, every bot may be subjected to a public discussion and redefined by the community, but even individuals may temporarily yet very quickly contest and interrupt the course of a certain bot by inserting specific labels in the article before it is automatically scanned. Users’ activity may be assisted, controlled and complemented by automata, but one merely needs to type a few letters to regain one’s hold on the processes previously handled by the machine.
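Both traits stressed here – automatic norm enforcement and a cheap in-text switch by which any user can halt the automaton – can be sketched in a few lines. The patrol rules and the marker syntax below are simplified inventions for illustration; real patrol bots are far more sophisticated, and the community’s actual exclusion templates differ in detail.

import re

# Hypothetical in-text marker by which an editor suspends bot activity on a page.
EXCLUSION_MARKER = re.compile(r"\{\{nobots\}\}", re.IGNORECASE)

def should_revert(old_text: str, new_text: str) -> bool:
    """Crude patrol rules: revert blanking and keyboard-mashing, unless halted."""
    if EXCLUSION_MARKER.search(new_text):
        return False                      # a human has switched the bot off here
    if len(new_text) < 0.2 * len(old_text):
        return True                       # large-scale blanking of the page
    if re.search(r"(.)\1{20,}", new_text):
        return True                       # long runs of one repeated character
    return False

article = "A sober paragraph about mediation. " * 20
assert should_revert(article, "lol") is True
assert should_revert(article, "{{nobots}} " + article) is False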
Conclusion

It appears that, once we are equipped with a mediation theory of technical norms, the normative value of factuality in technical systems may be properly assessed: the effectivity of our norms about the good life or justice depends on their passing into the factuality of a technical mediation, given the colossal weight of the cognitive, motivational and organizational demands addressed to the subjects of any community that aims to live by its shared norms of justice. However, the study of actual communities mediated by digital automata – the BitTorrent networks and Wikipedia – reveals that their factual organizational advantage is not merely a normative requirement for the effectuation of the ideals of sharing and open cooperation, but also contributes on another level to their consolidation against the risks of dispossession. Such systems are technically reproducible with no insurmountable obstacles and are thus particularly difficult to shut down. But above all, their design tends to encourage or at least allow collective appropriation. Contrary to cumbersome machines and bureaucratic administrations, algorithmic mediations are characterized by their openness to communitarian appropriation and by the difficulty of establishing a private monopoly over them. So even the delimitation between what is positively and what is negatively normative in technological mediations may be established by their factual determinations.

Having established these logical groundings, claiming that impersonal automata in general are bound to dissociate subjects from their essential prerogatives and duties is as abstract as claiming that, since norms and technology are linked, no consistent critique can be made of the latter. A critical assessment of technological delegation is actually possible, but only on the level of concrete and situated sociotechnical assemblages. Collective autonomy seems to be the general condition for determining whether a given mediation grants effectivity to the norms; certain factors are of course extraneous to the mediation itself (as capital or market pressure are in the abovementioned cases), yet it remains possible to identify, in the very factuality of some systems, a form of flexibility and openness to communitarian appropriation that seems to limit phenomena of dispossession: this is, finally, the only characteristic of machines, considered in abstraction from any concrete social embedding, that would make sense of the idea of a machine’s morality.
Note
1 This chapter reframes and extends the reflections elaborated in a previous work (see Pencolé 2017).
References
Auray, Nicolas. 2009. “De Linux à Wikipedia: Régulation des collectifs de travail massivement distribués.” In L’évolution des usages et des pratiques numériques. Limoges: FYP Editions.
Cohen, Bram. 2003. “Incentives Build Robustness in BitTorrent.” Workshop on Economics of Peer-to-Peer Systems. Berkeley, California. Available at http://bittorrent.org/bittorrentecon.pdf.
Dagiral, Eric and Florian Dauphin. 2005. “P2P: From File Sharing to Meta-Information Pooling.” Communications & Stratégies 59, no. 3: 35–51.
Geiger, R. Stuart. 2017. “Beyond Opening up the Black Box: Investigating the Role of Algorithmic Systems in Wikipedian Organizational Culture.” Big Data & Society 4, no. 2: 1–14. doi:10.1177/2053951717730735.
Habermas, Jürgen. 1996. Between Facts and Norms. Cambridge, MA: MIT Press.
Halfaker, Aaron and John Riedl. 2012. “Bots and Cyborgs: Wikipedia’s Immune System.” Computer 45, no. 3: 79–82. doi:10.1109/MC.2012.82.
Hegel, Georg Wilhelm Friedrich. 2008. Outlines of the Philosophy of Right. Translated by T. M. Knox. Oxford: Oxford University Press.
Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.
Pencolé, Marc-Antoine. 2017. “Nos algorithmes peuvent-ils être plus justes que nous?” Revue Française d’Ethique Appliquée, no. 5: 67–80. doi:10.3917/rfeap.005.0067.
Simondon, Gilbert. 2017. On the Mode of Existence of Technical Objects. Minneapolis: Univocal Publishing.
Verbeek, Peter-Paul. 2005. What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Pennsylvania State University Press.
Winner, Langdon. 1980. “Do Artifacts Have Politics?” Daedalus 109, no. 1: 121–136.
9
How Agents Lost their Cognitive Capacities within the Computational Evolution of Market Competition
Anna Longo
The rules of the game play a large role in determining market distribution – in preventing discrimination, in creating bargaining rights for workers, in curbing monopolies and the powers of CEOs to exploit firms’ other stakeholders and the financial sector to exploit the rest of society. These rules were largely rewritten during the past thirty years in ways which led to more inequality and poorer overall economic performance. (Stiglitz 2016, 147)
Financial markets have been expanding, and today they represent one of the main sources of wealth. Thanks to the development of electronic platforms and applications, anybody can easily engage in financial operations (Martin 2002), choosing among an increasing number of products like bonds, options or futures. Information technology, automated systems and predictive algorithms allow for new strategies, constantly updated predictions and faster transactions. Paradoxically, despite the availability of the most sophisticated predictive models and automated systems for allocation,1 we have never been so exposed to uncertainty: economic crises, market failures, speculative bubbles and other unpredictable events seem more and more likely to happen. The complex and chaotic dynamics of contemporary markets, where agents are driven by “animal spirits”2 and limited by cognitive biases, have required an update of the classical theory of market equilibrium and the creation of new models relying on computational and algorithmic technology. Contrary to orthodox models, where agents are perfectly rational Bayesian maximizers, heterodox evolutionary simulations stage boundedly rational3 agents competing to adapt to changing conditions (where everybody is the unpredictable environment of everybody else). It seems that, rather than facilitating people’s capacity to make efficient economic decisions in their own interest, the development of predictive technology and automated trading systems has led to a sort of counter-evolution of the agents’ cognitive capacities with respect to classical models.

It must be noted that increasing inequality has followed the financialization and automatization of the economy in the most developed countries (Stiglitz 2012; Piketty 2013).
The question I explore in this chapter is the following: is this an effect of agents’ irrationality, of their incapacity to make efficient economic decisions, or is it the effect of a change in the rules of the game such that players are prevented from calculating the risk of their choices? If the strict normativity of the orthodox “fair game”4 has been criticized for modelling idealized decision makers, is not the assumption of an imperfect Darwinian competition among boundedly rational agents itself a norm, one justifying the necessity of adapting to an unfair game?
Classic Games and Rationality Before describing evolutionary economics and the rules of the present algorithmic competition in more detail, I am going to recall the norms of the neoclassical efficient market “fair game” so as to provide a definition of the cognitive capacities which are supposed to characterize the ideal homo economicus. The book in which classical game theory was first presented is von Neumann and Morgenstern’s Theory of Games and Economic Behavior (1944). It provided the mathematical framework for conceiving the strategic interaction among rational players. Classical theory is said to be normative since it prescribes the decisions that agents ought to make to maximize their utility, once the structure of the game is specified and information is complete. Hence, if the structure of the game is common knowledge and the totality of pertinent information is public, then equilibrium (the set of strategies from which nobody can expect to gain by deviating) can be predicted by rational players. As in a game of chess, by observing the disposition of the pieces on the board (actual prices), players are able to choose the best move to achieve the desired situation (utility) by taking into account the probability of the adversary’s moves (future prices). In the same way as the exact sequence of the configurations of the chessmen during a match cannot be predicted, so the series of realized prices is supposed to be stochastic,5 even though, at any stage of the game, the probability of the following states is objective. As a consequence, players can calculate the exact risk of their decisions, which is relative to the number of the adversary’s possible moves or the probability distribution of prices. Such a game is fair since any rational player – in other words, one making the right decisions to maximize expected utility – has the same chances of winning the match. It is important to recall that the condition for the market to be a fair game (cf. Fama 1970) is that prices transmit the totality of relevant information6 and that this information is public and free: a market where prices transmit the information which is needed to make optimal decisions is said to be efficient. The result is that nobody can use private information to make predictions which differ from those of the less informed, so arbitrage is impossible, and prices actually follow a random walk. Moreover, in the same way as speculative behaviors are supposed to be excluded, so irrational deviations from the expected set of moves (suboptimal or bad decisions) are
supposed to have probability = 0. Accordingly, orthodox market equilibrium, like the equilibrium of classic games, is achieved because, given the observation of present prices, rational agents expect the other rational agents to make a certain set of decisions, and they make those decisions because they trust the others’ rationality. This, of course, reminds us of David Lewis’s theory of conventions (1969), where coordinated behaviors are explained in terms of reciprocal rational expectations, which presuppose, as a condition, common knowledge of what it is reasonable to expect as a solution to the problem (equilibrium). As Lewis explains:

We may achieve coordination by acting on our concordant expectations about each other’s actions. And we may acquire those expectations, or correct or corroborate whatever expectations we already have, by putting ourselves in the other fellow’s shoes, to the best of our ability. If I know what you believe about the matters of fact that determine the likely effects of your alternative actions, and if I know your preferences among possible outcomes and I know that you possess a modicum of practical rationality, then I can replicate your practical reasoning to figure out what you will probably do, so that I can act appropriately. (1969, 27)

Market equilibrium can be conceived as the convention toward which rational agents converge by expecting that everybody does the same thing in order to maximize expected utility. Accordingly, classic equilibrium is the effect of the convergence of beliefs concerning the probability distribution of future prices, which can be explained by supposing that players follow Bayesian decision theory within a well-specified problem whose conditions are common knowledge.7
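The logic of equilibrium as convention can be made concrete with a small computation. The following sketch is only an illustration of the idea just described – it is not drawn from Lewis or from von Neumann and Morgenstern, and the payoff numbers and strategy names are hypothetical – but it checks, for a two-player coordination game, which profiles are equilibria in the above sense: profiles from which neither player can expect to gain by deviating.

```python
# A minimal sketch of equilibrium as convention in a 2x2 coordination game.
# Strategy names and payoff numbers are hypothetical, chosen for illustration.

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("meet_at_A", "meet_at_A"): (2, 2),
    ("meet_at_A", "meet_at_B"): (0, 0),
    ("meet_at_B", "meet_at_A"): (0, 0),
    ("meet_at_B", "meet_at_B"): (1, 1),
}
strategies = ["meet_at_A", "meet_at_B"]

def is_equilibrium(row, col):
    """A profile is an equilibrium if no unilateral deviation pays."""
    u_row, u_col = payoffs[(row, col)]
    row_is_best = all(payoffs[(r, col)][0] <= u_row for r in strategies)
    col_is_best = all(payoffs[(row, c)][1] <= u_col for c in strategies)
    return row_is_best and col_is_best

for r in strategies:
    for c in strategies:
        if is_equilibrium(r, c):
            print(f"({r}, {c}) is an equilibrium with payoffs {payoffs[(r, c)]}")
```

Both coordinated profiles pass the test, which is precisely Lewis’s point: rationality alone does not select between them, and which one is actually played is a matter of convention sustained by reciprocal expectations.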
Bayesian Decision Theory and Cognitive Capacities Bayesian decision theory is based on the subjective8 interpretation of probability (Savage 1954; de Finetti 1989; Ramsey 1999), according to which probability refers to the personal degree of belief in a predictive hypothesis. From this perspective, probability does not refer to real stochastic variables but is an epistemic notion measuring the confidence in a predictive hypothesis based on the available information. The subjective degree of belief is measured by the decisions that one is willing to make relying on the predictive hypothesis in order to get a future benefit. For example, the degree of belief in the hypothesis that Jokey the horse will win the race is measured by the amount of money that one is willing to bet on it:9 the more one trusts the forecast, the more one is ready to bet. So, if one has some reliable information about the capacities of the horse, one has more reasons to be confident. However, according to the subjective interpretation, this does not entail that the totality of information which is needed in order to know the future with certainty can be
collected (the totality of the samples supporting a generalization): any prediction is an induction and, as Hume famously stated, predictive hypotheses can only be considered more or less probable. Accordingly, there is no a priori restriction concerning what one has to believe; however, beliefs must be coherent (one shouldn’t rely on a prediction based on Einstein’s relativity for some events and on Aristotelian physics for others) and preferences well-ordered (in aiming at the best possible outcome, one should risk ending up in a less preferred scenario rather than in the worst one). If the axioms of consistency hold, then an agent’s beliefs are such that expectations are justified by the decisions she will make; or, to put it otherwise, no bookmaker will be able to propose a system of bets in which he is certain to win.10 Now, since subjective degrees of belief depend upon the available information, it is reasonable to consider a hypothesis more probable once new information becomes available. For example, the confidence in the inductive hypothesis that all ravens are black is stronger if one has observed 1 million ravens rather than only ten, in the same way as the belief in a scientific theory is deeper after having observed a larger number of experiments confirming the hypothesis’ predictions. The method for updating beliefs with respect to the arrival of new relevant information is provided by Bayes’ theorem. As de Finetti explains:

Inductive logic is reduced in essence to the theorem of compound probabilities or to its slightly more elaborate variant, often called Bayes’ theorem:

P(H|E) = P(E|H)P(H) / P(E)

The fundamental meaning of P(H|E), the conditional probability of H given E, is the probability initially attributed to H conditional on the possible verification of E. According to a criterion of temporal coherency, P(H|E) is also the new probability attributed to H after the person has observed E (and E alone – a proviso to which we shall return). With this interpretation, Bayes’ theorem expresses the transformation from the initial probability P(H) of the hypothesis H to the final probability P(H|E), that is, the behavior of a man who augments or diminishes his credence in a hypothesis in response to new facts that affect the plausibility of the hypothesis. (1972, 150)

It is important to note that, by applying Bayes’ theorem, two individuals concerned with the same problem but starting with different degrees of belief will converge toward the same opinion after having observed the same results. This allows scientists to converge on the same theories by observing the results of the same experiment, so as to agree on what it is reasonable to expect if a specific protocol is followed. As Savage (1951, 62) states, “where effective experimentation is a component of some of the possible actions, practical agreement may well be reached in this way. For unless two opinions are originally utterly incompatible, enough relevant evidence will bring them close together, and even if they are utterly
incompatible the holder of each will feel that he has little to lose in agreeing to a sufficiently extensive fair trial.” As a consequence, a recognized theory constitutes a common prior, the knowledge of which allows agents to expect that, given the same observed situation, anybody will make conforming inferences. However, it is important to keep in mind that, as Savage pointed out, Bayesian agents can make the decisions that are supposed to maximize expected utility only when dealing with well-defined situations, or within “small worlds” (Savage 1954, 82); in other words, when the agent’s problem can be represented using a decision matrix consisting of a given finite state space, a set of consequences, and a set of acts. Hence, in order to achieve the optimal equilibrium and play the best strategy with respect to the others’ strategies, agents must share the knowledge of the conditions of the problem, and they must agree on the evidence which is supposed to condition the probability of one among the possible outcomes (the adversary’s next move). Differences in beliefs can then be reduced to differences in information (observations), and agents can learn to predict what the others will do through repeated interactions, hence sharing information. As Aumann (1976) pointed out, Bayesian agents cannot agree to disagree: if they share common priors (the set of the future states of the world that it is rational to expect), their posteriors (the probability of one among the possible states) will be equal if they exchange relevant information (if they observe the results of interactions). So, within orthodox equilibrium, prices transmit all the information which is needed in order to condition the probability of the other agents’ decisions and, as a consequence, the probability distribution of future prices can be predicted by anybody. As in a fair game, nobody has a better chance than anybody else of getting the expected pay-off, and the observation of the realized stochastic movement of prices, which is determined by rational expectations and consequent decisions, confirms the general belief in the market efficiency hypothesis. Of course, this is possible only if agents share common priors and if the totality of pertinent information is public; otherwise the calculation of the best strategy with respect to the others’ strategies overwhelms actual human capacities. The difficulty is that such conditions do not hold in real markets, which are better represented as large worlds where the cognitive capacities of the agents are insufficient to make optimal decisions. As a consequence, orthodox equilibrium has been criticized for being an idealized situation that fails to be achieved by the capacities of real agents, whose rationality is said to be bounded.
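The updating mechanism, and Savage’s claim that enough shared evidence brings initially different opinions close together, can be illustrated with a short sketch. The priors, likelihoods and number of observations below are hypothetical values chosen only for the illustration, not anything prescribed by the authors discussed here.

```python
# A minimal sketch of Bayesian updating. Two agents start from different
# priors on the same hypothesis H, observe the same evidence, and their
# posteriors converge - Savage's "practical agreement" under a fair trial.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

optimist, skeptic = 0.9, 0.2        # different initial degrees of belief in H
for _ in range(20):                 # both observe the same 20 results, each
    optimist = update(optimist, 0.6, 0.2)   # three times likelier under H
    skeptic = update(skeptic, 0.6, 0.2)     # than under not-H

print(f"optimist: {optimist:.6f}  skeptic: {skeptic:.6f}")
# Both posteriors approach 1: enough relevant evidence brings the two
# initially divergent opinions together, as long as neither prior was 0 or 1.
```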
Evolutionary Algorithmic Models Empirical evidence such as market crashes, excess volatility, and speculative bubbles has proved that the orthodox fair game is more an ideal than a reality. On the one hand, prices do not efficiently transmit the totality of pertinent information, and costly research into further information actually allows for better predictions (Grossman and Stiglitz 1980). On the other
hand, agents are not perfectly rational and, in many cases, they do not make optimal decisions, so their behavior differs from the supposed cognitive norm (e.g. Shiller 1990, 2000), especially when information is incomplete and expensive. The effects of such “irrational” behavior are fluctuations that exceed the volatility predicted by classic models (prices do not follow a random walk) and which provoke further irrational reactions (with respect to rational expectations). Moreover, it has been noted that perfect equilibrium is not compatible with economic growth (Grossman and Stiglitz 1980; Nelson and Winter 1982), since it does not guarantee the conditions for a competition based on innovation and leading to the historical development of knowledge and technology. Joseph Schumpeter (1934) was the first to realize that the atemporal classic model – where any possible event can be anticipated and externalities cause mere perturbations of the everlasting equilibrium – is not suitable for describing the unpredictable dynamics of real market evolution. Schumpeter’s insight concerning the historical process of “creative destruction”11 of economic organizations, as it is determined by endogenous technological innovation (Romer 1990), has been rediscovered and developed by the theoreticians of evolutionary economics (Nelson and Winter 1982). Evolutionary economics aims to describe the real market competition where investments in research and development are justified by the larger benefits provided by the private exploitation of knowledge and technology. Accordingly, it describes the dynamics of an imperfect game in which an innovation offers advantages to innovators before spreading through imitation and losing its efficacy. The diminished efficacy pushes toward new research and competitive technological innovation, leading the whole system to adapt to the unpredictably modified conditions. As we are going to see, evolution sets the rules of the game of real market competition, where the development of technology assumes the role of engine of growth while producing disparities of opportunities and information. Contrary to orthodox models, where agents are homogeneous with respect to capacities and opportunities, in evolutionary models agents are heterogeneous, since their possible strategies (available decisions) depend upon the techniques and the knowledge that they possess. As a consequence, there is no space known by all that contains all the choices that different players can make in response to a given signal, and this is why different players cannot predict each other’s moves. Moreover, since technological innovations introduced through investments in research cannot be predicted, agents cannot calculate the probability of the future states of the system (new moves are introduced that couldn’t be anticipated). As Nelson and Winter put it:

In evolutionary theory, choice sets are not given and the consequences of any choice are unknown. Although some choices may be clearly worse than others, there is no choice that is clearly best ex ante. Given this assumption, one would expect to see a diversity of firm behavior in
real situations. Firms facing the same market signals respond differently, and more so if the signals are relatively novel. (1982, 277)

Different agents are then characterized by specific routines, or schemas of action, which are compared to genetically determined behaviors competing, like different species, in Darwinian evolution. The interaction among species of practices determines the selection of the fittest routines by generating evolutionary stable equilibria characterized by the most reciprocally adapted techniques and practices. Nelson and Winter describe market competition as follows:

The comparative fitness of genotypes (profitability of routines) determines which genotypes (routines) will tend to become predominant over time. However, the fitness (profitability) clearly depends on the characteristics of the environment (market prices) confronting the species (collection of firms with similar routines). The environment (price vector) in turn depends, however, on the genotypes (routines) of all the individual organisms (firms) existing at a time – a dependency discussed in the subdiscipline called ecology (market theory). Therefore, no theory of long-run evolutionary change logically can take the environment of the individual species (collection of firms) as exogenous. Hence, the notion of fitness (profitability) contributes much less to the understanding of the long-run pattern of change than might at first glance appear. What does play a crucial though obscure role is the character of the whole evolving system’s interactions with the truly exogenous features of the environment. (1982, 160–1)

Evolutionary economists are concerned with the interactive dynamics among heterogeneous agents leading to the selection of evolutionary stable strategies (Maynard Smith and Price 1973), or equilibria that cannot be rationally anticipated. In fact, the small-world conditions do not hold, as agents share neither common knowledge nor common priors. Since they cannot rationally choose the optimal strategy by assuming the set of the others’ available decisions, agents are modeled as playing fixed routines, like animal species. For this reason, they are represented by algorithms (in other words, they follow given rules that establish a set of operations as responses to specific situations) and their cognitive capacities are reduced to nil (or they are very unsophisticated). Basically, they cannot speculate about what it is rational to expect from other rational agents (as, for example, in Lewis’s theory of conventions), since they do not have any common knowledge of the norm of rationality; hence their non-strategic decisions are programmed operations which depend on their “species” or type. As no one knows a priori what to expect from the unknown adversary, the result of the
interactions can only be observed as they unfold (rather than being analytically anticipated). In the same way as the trajectory of a system of interacting particles can be computed in order to study the parameters which lead to the emergence of stable patterns (to one among the possible attractors), the equilibrium of evolutionary competition (a set of stable schemas of behavior) can be simulated in order to observe how reciprocally adapted algorithmic strategies are selected. According to the protocol of evolutionary competition, the fittest strategies will reproduce, while those less adapted to the environment (the ensemble of all the given behaviors) will disappear. Equilibria are then achieved once reciprocally adapted strategies are selected through repeated interactions, in such a way that organized patterns of behavior appear as emergent (non-anticipated) organizations. Agents, in fact, just do what they do and do not think of what they ought to do: in this sense, they lack the cognitive capacities that were supposed to be needed in order to achieve an equilibrium. Evolutionary models, then, show that strict Bayesian rationality (consciously aiming at maximization) is not needed to obtain collective organizations: the latter can emerge spontaneously, in the same way as complex wholes emerge in nature through the interaction among the parts. In these models, the approach is said to be descriptive (positive) rather than normative: rather than being analytically deduced from the axioms by fully rational agents, equilibria are observed by letting simulations run. Evolutionary multi-agent simulations stage algorithms competing according to simple rules of interaction that determine the reproduction of the most successful procedures and the disappearance of the less fit practices. For example, “artificial worlds” (Lane 1993) are computational simulations that are used to study the parameters that render the emergence of coordinated and reciprocally adapted behaviors more likely, in the same way as, in complex systems, trajectories leading to different attractors depend upon initial conditions. More sophisticated multi-agent simulations have been developed to create artificial stock markets (Palmer et al. 1999), in order to observe how the interaction among boundedly rational agents leads to phenomena observed in real markets, such as crashes, bubbles and excess volatility (Brock et al. 2005). Here, agents are modeled as algorithms that can learn through simple rules (Weibull 1995; Fudenberg and Levine 1998) and, rather than being forced to play a fixed strategy (as if it were genetically encoded), they can modify it by selecting the operations that have been most successful as responses to the recognized recurrent moves of other players. Of course, this does not mean that agents can think of what they ought to do by supposing a symmetric reflection by the others, but that a reinforcement mechanism allows for selecting the actions that are statistically more likely to succeed. In some models, agents can also imitate other players’ efficient strategies, so that the fittest behaviors spread and replicate as the less adapted ones are abandoned. Moreover, genetic algorithms introduce random variations that, like stochastic mutations, produce novelties in the game while forcing agents to adapt to the changed environmental conditions (Young 1997). Accordingly, equilibria
are studied with respect to their robustness (resistance to perturbations) in order to understand what kind of behavior is more likely to be transmitted to the following generations. Furthermore, this leads to the turbulences that characterize the transitions from one equilibrium to another and thus allows for simulating the perturbations that happen in real markets as an effect of the unpredictable changes that challenge the stability of a given organization. Experiments in simulated evolutionary games show that equilibria emerge spontaneously (Sugden 1989), without being rationally anticipated, even though they can be more or less stable and more or less Pareto efficient.12 Given a set of competing strategies and rules of interaction, multiple equilibria are possible, and even though it can happen that evolutionary stable strategies are equivalent to those that would be analytically predicted by Bayesian decision makers (Nash equilibrium), most of the time players converge on suboptimal solutions – in other words, they cannot maximize – and the patterns of interaction are more favorable to some than to others. This is supposed to explain the diversity of the conventions that are observed in reality and their historical becoming: while equilibria must be judged for their robustness rather than for their optimality, unpredictable innovations, like stochastic mutations, can always force further adaptations and the emergence of new organizations that cannot be said to be better or worse with respect to some ideal. As a consequence, classic equilibrium must be considered as one among the possible conventions that can spontaneously emerge without supposing either the rationality of the agents or common knowledge (thus agents are heterogeneous). From this perspective, equilibrium is not the result of some ideal norm of rationality; rather, rational expectations are the result of the spontaneous emergence of the rules of interaction as behavioral regularities or useful habits. It follows that orthodox equilibrium does not have any rational necessity: it is just a contingent organization that further innovations, such as deviant strategies or new technologies and practices, can make evolve toward a different adaptive solution. As Nelson and Winter explain:

The world seen by evolutionary theory differs from an orthodox world not only in that things always are changing in ways that could not have been fully predicted, and that adjustments always are having to be made to accommodate to or exploit those changes. It differs, as well, in that those adjustments and accommodations, whether private or public, in general do not lead to tightly predictable outcomes. For better or for worse, economic life is an adventure. (1982, 270)
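One standard formalism behind such models is the replicator dynamic, in which strategies reproduce in proportion to their payoff (cf. Weibull 1995; Maynard Smith and Price 1973). The sketch below is a deliberately minimal illustration rather than a reconstruction of any model cited in this chapter: the hawk–dove payoffs, step size and mutation rate are hypothetical, and the population settles on the evolutionary stable mix without any agent anticipating it.

```python
# A minimal sketch of replicator dynamics in a hawk-dove game.
# Payoff values, step size and mutation rate are hypothetical illustrations.

V, C = 2.0, 4.0  # value of the contested resource, cost of escalated conflict

# payoff[i][j]: payoff to a player of type i meeting a player of type j
# types: 0 = hawk, 1 = dove
payoff = [
    [(V - C) / 2, V],      # hawk vs hawk, hawk vs dove
    [0.0, V / 2],          # dove vs hawk, dove vs dove
]

x = 0.1     # initial share of hawks in the population
mu = 0.001  # small mutation rate, playing the role of stochastic variation

for step in range(2000):
    f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
    f_mean = x * f_hawk + (1 - x) * f_dove
    # discrete replicator step: a strategy's share grows with relative fitness
    x = x * (1 + 0.1 * (f_hawk - f_mean))
    # mutation keeps both types present, constantly perturbing the equilibrium
    x = (1 - mu) * x + mu * 0.5

print(f"share of hawks: {x:.3f} (analytic evolutionary stable mix: {V / C:.3f})")
```

No agent in this population calculates anything: the stable mix of V/C hawks simply emerges and, because the mutation term keeps perturbing it, what the run exhibits is precisely the robustness discussed above.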
Evolution of Conventions and Morality Multi-agent algorithmic simulations of evolutionary games have been used to explain the processes of selection that lead to social organizations that
we consider as morally valuable, without considering them as the effect of a priori commonly known rational norms (Sugden 1986; Binmore 1994; Skyrms 1996; Young 1998). For example, Skyrms’s simulations of an evolutionary version of the game of “dividing the cake” – played by heterogeneous populations of algorithms – lead to the following conclusion:

In a finite population, in a finite time, where there is some random element in evolution, some reasonable amount of divisibility of the good and some correlation, we can say that it is likely that something close to share and share alike should evolve in dividing-the-cake situations. This is, perhaps, a beginning of an explanation of the origin of our concept of justice. (Skyrms 1996, 21)

Conventions involving, for example, efficient allocation or mutual help are not chosen through intelligent reflection or planning; they are possible solutions to coordination problems that can be achieved by algorithms endowed with null cognitive capacities. It follows that what we usually consider as morally valuable behaviors is the result of simple rules of interaction that can be computed, in the same way as emergent complex patterns appear in nature without supposing any rational capacity. Accordingly, the rules that we consider as fair are but the effect of equilibrium selection within recurrent coordination problems. As Binmore explains:

We evolved the capacity to entertain fairness norms because they allowed our species a quick and efficient way to solve the coordination problems that inevitably arise when a group is faced with a new situation. For example, how should a novel source of food be shared without fighting or other wasteful conflict? If I am right, then fairness can be seen as evolution’s solution to the equilibrium selection problem that arises in certain games with multiple equilibria. (2010, 246)

Conventions emerge as reciprocally adapted stable strategies that could have been different and, most of the time, they are suboptimal solutions (they would not have been chosen by Bayesian maximizers). Different societies follow different conventions at different historical moments, and no one of them can be said to be better than another, although, according to simulations, egalitarian and altruistic rules are more stable: in other words, they characterize more robust equilibria. Accordingly, it is the feeling of moral obligation that supervenes on the actualized regularities of behavior (Sugden 1989, 95) and turns them into norms, or “oughts.” Moral obligations are then the a posteriori recognition of those conventions that can be computed as solutions to coordination problems by unintelligent interacting algorithms. As Sugden explains:
In this sense, at least, conventions are not the product of our reason. Nor are these patterns of behavior necessarily efficient. They have evolved because they are more successful at replicating themselves than other patterns: if they can be said to have any purpose or function, it is simply replication. They do not serve any overarching social purpose; thus, they cannot, in general, be justified in terms of any system of morality that sees society as having an overall objective or welfare function. The conventions that we follow may, however, have moral force for us. But if they do, that is because our moral beliefs are the products of the same process of evolution. (1989, 97)

From this standpoint, moral beliefs are nothing more than the recognition of the evolutionary stable strategies that emerge spontaneously from a simulated competition among algorithmic procedures. So my question is: does this not imply a justification of the inegalitarian rules of the market and of a competition which is moving ever farther from Pareto optimality? Furthermore, isn’t it the technological innovation of algorithmic prediction that allows those who produce and observe the simulations to take advantage of their private knowledge in order to introduce more advantageous deviant strategies?
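Before turning to that question, it may help to see how little machinery Skyrms’s result requires. The following sketch is a schematic reconstruction in the spirit of his dividing-the-cake simulations, not his actual code: the demand levels, population size and number of generations are hypothetical parameters.

```python
# A schematic sketch in the spirit of Skyrms's dividing-the-cake simulations
# (not his code). Strategies are fixed demands on a cake of size 10; if a
# pair's demands exceed the cake, both get nothing; payoffs drive replication.
import random

DEMANDS = [3, 5, 7]  # modest, fair and greedy claims (hypothetical levels)
population = [random.choice(DEMANDS) for _ in range(300)]

def play(d1, d2):
    """Each player receives their demand only if the claims are compatible."""
    return (d1, d2) if d1 + d2 <= 10 else (0, 0)

for generation in range(200):
    random.shuffle(population)
    scores = []
    for i in range(0, len(population) - 1, 2):
        a, b = population[i], population[i + 1]
        pa, pb = play(a, b)
        scores += [(a, pa), (b, pb)]
    total = sum(p for _, p in scores)
    if total == 0:          # degenerate round: keep the population unchanged
        continue
    # replicator-style resampling: demands reproduce in proportion to payoff
    population = random.choices([d for d, _ in scores],
                                weights=[p for _, p in scores],
                                k=len(population))

for d in DEMANDS:
    print(f"demand {d}: {population.count(d) / len(population):.0%}")
# From roughly equal initial shares, "demand 5" typically takes over:
# share-and-share-alike evolves among algorithms with no concept of fairness.
```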
Second Order Competition and Algorithmic Strategies Orthodox economic theory has been refuted since it presupposes a rational normativity that fails to guarantee the fair game in which everybody’s expectations are legitimate. Heterodox economists show that, given the conditions of the real market, agents cannot make optimal decisions, since the problem is much more complex than classic theory supposed (it is not a small world in the sense of Savage) and relevant information cannot be reduced to scarcity (bid and ask as they are reflected by prices). Basically, the totality of the necessary information is not available and, if it were, then the market would not be competitive and the conditions for economic growth would not be met (Grossman and Stiglitz 1980).13 In order for financial markets to work properly, beliefs and expectations must differ14 (Grossman 1977) (otherwise there is no reason to bet on future prices), and the investments in producing more reliable forecasts, through the development of technology, are justified only if they provide actual arbitrage opportunities15 (Grossman and Stiglitz 1980). Since the problem is to predict what naïve investors will do – investors who make their decisions in a state of incomplete information that they do not always recognize, a situation preventing them from making optimal decisions – multi-agent simulations and evolutionary models staging “irrational” heterogeneous agents are developed to forecast the dynamics of beliefs.16 In fact, in order for rational agents to be prevented from making optimal decisions, it is enough to prevent them from accessing the information concerning other players’ strategies, in particular the costly information produced by privately funded research on which professional speculators base
their more successful decisions. Uninformed economic agents (the ones which are modelled) are forced to behave according to “irrational” procedures, as they make their decisions according to conventions which are different from the conventions used by more informed agents (there is no common knowledge, hence predictions will not converge), in particular those who can afford the most updated computational models in order to predict the behavior of the masses. So, it seems to me that the real innovation in our markets is not the introduction of automated trading (programmed decisions based on the realization of events which are supposed to imply some predictable fluctuation of prices) but the fact that the goal of economic research is to provide tools for collecting, treating, and exploiting the information concerning people’s responses to released information. A new mantra of sorts is the question of how the market will react to this or that piece of news, and it does not matter if the news is true or fake. Rather than producing goods or means of production, research today is meant to develop technological tools that allow for predictions of the dynamics of replication of the behavior of agents, in order to manipulate and exploit them. The real innovation is that computational technology is improved not for the wealth of the whole of society but for providing those who can afford it with information that allows them to make second order bets on the bets of the majority of agents, who are forced into a situation of artificially produced uncertainty. It is because ignorance, as disparity of information, is actively produced that the majority of agents are forced to make their decisions as if they did not have the cognitive capacities which are needed to predict each other’s decisions, whereas this is the effect of disparities in information and related beliefs. For example, high-frequency trading strategies, based on sophisticated predictions of arbitrage opportunities, are profitable only if they are not common knowledge; when their use spreads, they lose their efficacy (cf. Aldridge 2009). This is the reason why such strategies must be exploited as fast as possible in an automated way. However, this does not mean that algorithms make autonomous trading decisions: they just execute programs based on predictive models, and they buy or sell according to the actualized values of parameters that are supposed to increase or decrease the probability of a future event. If algorithmic trading strategies are successful and unpredictable, it is because they are based on second order observations (they forecast the predictions of ignorant agents) that allow for deviant strategies with respect to the set of conventionally recognized strategies within the market ecology. Information technology and computational devices are but tools that, like any technical innovation, are functional to satisfying a specific utility by performing the set of operations they are programmed to execute. Thus, even though they can change their strategy, they do so according to a given rule that enables an action with respect to a specific feedback. And of course, automated strategies must be constantly updated by humans with respect to the unpredictable changes that the use of such automated decisions produces in the market environment. New research and new
computational models produced by humans are constantly needed in order to predict the effects that are determined by the spreading of the deviant strategies. Accordingly, uncertainty is constantly reintroduced into the game: as soon as naïve agents learn more efficient strategies, the rules are modified by the players who predicted the spreading of the behavior. It is evident that what is evolving here is not the wealth of society or knowledge, but the trading strategies from which only a minority can profit. Fed by big data, available only to those who can afford them together with the technology which is needed to treat them, models predict the game played by cognitively null agents: their reactions to released information and their unsophisticated methods of accessing the information they cannot afford, like, for example, reinforcement learning or imitation. Such models, produced by scientists and privately funded research, are knowledge and information which is sold and bought to elaborate more competitive and sophisticated strategies aiming to exploit the computable irrational dynamics of the uninformed agents’ beliefs. This means that the real competition is played at this second order level, where perfectly rational Bayesian agents produce scientific predictions as if they were selling tickets for bets on the bets of the naïve investors. The real competition is not the one which is modelled, but the one among computational models: these provide the actual technological innovations, or deviant strategies with respect to any convention, or rule of the game, that the uninformed agents are playing by. The effect is an increasing systemic risk17 and increased uncertainty that, while justifying the use of computational models and strategies, guarantees the inequality of profits. Should we agree to adapt to such an unfair competition under the assumption that this is the natural law of evolution? It seems to me that, in considering the protocol of imperfect competition as the set of rules allowing for the historical development of social organizations, evolutionary economic theory is no less normative and dogmatic than the orthodox one.
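The asymmetry described in this section can be caricatured in a few lines of code. The sketch below is only a toy illustration, with entirely hypothetical behavioral rules and parameter values, not a model drawn from the sources discussed in this chapter: naive agents follow a simple convention (trend chasing), while a “second order” trader who knows that convention predicts the aggregate flow it will generate and trades ahead of it.

```python
# A toy caricature of second order competition. Naive agents buy after a rise
# and sell after a fall; a trader who knows this rule forecasts their demand
# from the same public signal and positions itself before the flow arrives.
# All rules and parameter values are hypothetical illustrations.
import random

random.seed(1)
price, prev = 100.0, 100.0
pnl = 0.0

for day in range(250):
    trend = price - prev
    # the naive convention: trend chasing plus idiosyncratic noise
    naive_demand = (1.0 if trend > 0 else -1.0) + random.gauss(0.0, 0.5)
    # the second order trader predicts the sign of the coming naive flow
    position = 1.0 if trend > 0 else -1.0
    # the naive flow then moves the price (linear impact plus noise)
    new_price = price + 0.5 * naive_demand + random.gauss(0.0, 0.2)
    pnl += position * (new_price - price)  # entered before the flow arrived
    prev, price = price, new_price

print(f"second order trader's P&L after 250 days: {pnl:+.1f}")
# The profit comes entirely from predicting the predictable part of other
# agents' behavior; if the naive agents changed their rule, it would vanish.
```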
Notes
1 Electronic markets are designed by developing allocation algorithms (Milgrom 2011).
2 To quote Keynes’s famous expression in The General Theory of Employment, Interest and Money (1936, 161–2): “Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as the result of animal spirits – a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.”
3 The notion of “bounded rationality” was introduced by Herbert Simon (1957, 198) against the perfect rationality of the classic Homo economicus. The idea is that real agents in real market conditions cannot perform the calculation which is needed to maximize expected utility.
4 That agents under the condition of market equilibrium are playing a fair game is an idea introduced by Bachelier (1900), who observed that prices move stochastically according to a Markov process: as in a game of dice, nobody can predict the next outcome even though everybody knows the set of possible outcomes and their probability. The idea of the fair game was later developed within Fama’s efficient market hypothesis (1970).
5 The idea that prices move stochastically was first introduced by Louis Bachelier (1900), who proposed a Gaussian probability distribution. During the 1960s Bachelier’s intuition was empirically tested, and the thesis that price movements approximate Brownian motion gained large support. Finally, Samuelson (1965) and Mandelbrot (1966) proposed the martingale model to calculate the probability distribution of future prices.
6 According to Fama’s efficient market hypothesis, “the ideal is a market in which prices provide accurate signals for resource allocation: that is, a market in which firms can make production-investment decisions, and investors can choose among the securities that represent ownership of firms’ activities under the assumption that security prices at any time ‘fully reflect’ all available information. A market in which prices always ‘fully reflect’ available information is called ‘efficient’” (Fama 1970, 383).
7 The notion of common knowledge is fundamental in epistemic logic and game theory. It was first introduced by David Lewis (1969) and mathematically defined by Robert Aumann (1976). Roughly speaking: “Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on” (ibid., 1236).
8 The other main interpretations of probability are the objective interpretation, based on observed frequencies, and the logical interpretation developed by Keynes (1921) and Carnap (1950), according to whom probability measures the implication between a proposition about empirical evidence and a predictive hypothesis. From the logical perspective, probability does not refer to real chance but to the degree of confirmation of a scientific theory.
9 The method of bets, according to which the more I judge a hypothesis to be reliable, the more I am willing to bet on it, was supported by de Finetti: “The probability P(E) = p that you give to E is the betting rate (or insurance rate) for E that you consider fair” (de Finetti 1970, 132).
10 This is the Dutch book argument by which Bayesians prove the consistency of subjective beliefs, which guarantees the possibility of making reasonable decisions involving the realization of future events. “Admissibility (or consistency, or coherence) means in this case: prevent the opponent from finding an opportunity for a Dutch Book” (de Finetti 1970, 132).
11 “The opening up of new markets, foreign or domestic, and the organizational development from the craft shop and factory to such concerns as U.S. Steel illustrate the process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one. This process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in. [Capitalism requires] the perennial gale of Creative Destruction” (Schumpeter 1994, 83).
12 Pareto efficiency or Pareto optimality is a situation where no individual can be made better off without making at least one individual worse off.
13 In their model of a competitive economy where information is incomplete and expensive, Grossman and Stiglitz (1980, 393) show that “there is an equilibrium degree of disequilibrium: prices reflect the information of informed individuals (arbitrageurs) but only partially, so that those who expend resources to obtain information do receive compensation. How informative the price system is, depends on the number of individuals who are informed; but the number of individuals who are informed is itself an endogenous variable in the model.”
14 As Grossman (1977, 431) noted: “When the spot price reveals all of the informed traders’ information, both types of traders have the same beliefs about next period’s price. In this case there will be no incentive to trade based upon differences in beliefs about next period’s price.”
15 As Grossman and Stiglitz (1980, 393) explain, “If competitive equilibrium is defined as a situation in which prices are such that all arbitrage profits are eliminated, is it possible that a competitive economy always be in equilibrium? Clearly not, for then those who arbitrage make no (private) return from their (privately) costly activity. Hence the assumptions that all markets, including that for information, are always in equilibrium and always perfectly arbitraged are inconsistent when arbitrage is costly.”
16 Since the agents’ beliefs are heterogeneous, being based on different information, and since they are updated according to different evidence, they will not converge as Bayesian decision theory supposes. As a consequence, the agents’ decisions cannot be analytically deduced and must be computed.
17 “A systemic risk is the risk of a phase transition from one equilibrium to another, much less optimal equilibrium, characterized by multiple self-reinforcing feedback mechanisms making it difficult to reverse” (Hendricks 2009).
References
Akerlof, George and Robert Shiller. 2009. Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism. Princeton, NJ: Princeton University Press.
Aldridge, Irene. 2009. High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems. Hoboken, NJ: John Wiley & Sons.
Aumann, Robert. 1976. “Agreeing to Disagree.” The Annals of Statistics 4, no. 6: 1236–1239. https://projecteuclid.org/euclid.aos/1176343654.
Axelrod, Robert. 1984. The Evolution of Cooperation. New York: Basic Books.
Bachelier, Louis. 1900. Theory of Speculation. A thesis presented to the Faculty of Sciences of the Academy of Paris on March 29. Originally published in Annales de l’École Normale Supérieure 17: 21–86.
Binmore, Kenneth. 1994. Game Theory and the Social Contract. Cambridge, MA: MIT Press.
Binmore, Kenneth. 2010. “Game Theory and Institutions.” Journal of Comparative Economics 38: 245–252. https://doi.org/10.1016/j.jce.2010.07.003.
Brock, William, Cars Hommes, and Florian Wagener. 2005. “Evolutionary Dynamics in Markets with Many Trader Types.” Journal of Mathematical Economics 41, nos. 1–2: 7–42. https://doi.org/10.1016/j.jmateco.2004.02.002.
Brown, George. 1951. “Iterative Solutions of Games by Fictitious Play.” In Activity Analysis of Production and Allocation, edited by T.C. Koopmans, 374–376. New York: Wiley.
Carnap, Rudolf. 1950. Logical Foundations of Probability. Chicago: University of Chicago Press.
de Finetti, Bruno. 1970. “Logical Foundations and Measurement of Subjective Probability.” Acta Psychologica 34: 129–145. https://doi.org/10.1016/0001-6918(70)90012-0.
de Finetti, Bruno. 1972 [1959]. “Probability, Statistics and Induction: Their Relationship According to the Various Points of View.” In Probability, Induction and Statistics: The Art of Guessing, 141–228. Aberdeen: Wiley & Sons.
de Finetti, Bruno. 1989 [1931]. “Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science.” Erkenntnis 31, nos. 2–3 (September): 169–223. https://www.jstor.org/stable/20012237.
Fama, Eugene. 1970. “Efficient Capital Markets: A Review of Theory and Empirical Work.” The Journal of Finance 25, no. 2: 383–417. Papers and Proceedings of the Twenty-Eighth Annual Meeting of the American Finance Association, New York, December 28–30, 1969. https://doi.org/10.1111/j.1540-6261.1970.tb00518.x.
Fudenberg, Drew and David Levine. 1998. The Theory of Learning in Games. Cambridge, MA: MIT Press.
Grossman, Sanford. 1977. “The Existence of Futures Markets, Noisy Rational Expectations and Informational Externalities.” The Review of Economic Studies 44, no. 3: 431–449. https://doi.org/10.2307/2296900.
Grossman, Sanford and Joseph Stiglitz. 1980. “On the Impossibility of Informationally Efficient Markets.” The American Economic Review 70, no. 3: 393–408. http://www.jstor.org/stable/1805228.
Harsanyi, John. 1967. “Games with Incomplete Information Played by ‘Bayesian’ Players, Part 1.” Management Science 14, no. 3: 159–182. https://doi.org/10.1287/mnsc.14.3.159.
Hendricks, Darryll. 2009. “Defining Systemic Risk.” The Pew Financial Reform Project. https://www.pewtrusts.org/en/research-and-analysis/reports/2009/07/08/defining-systemic-risk.
Hume, David. 1975 [1739]. A Treatise of Human Nature. Book III, Part I, Section I. Oxford: Clarendon Press.
Keynes, John Maynard. 1921. A Treatise on Probability. London: Macmillan & Co.
Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. London: Macmillan.
Lane, David. 1993. “Artificial Worlds and Economics.” Journal of Evolutionary Economics 3: 89–107, 177–197. https://doi.org/10.1007/BF01213828.
Lewis, David. 1969. Convention. Cambridge, MA: Harvard University Press.
MacKenzie, Donald. 2006. An Engine, Not a Camera: How Financial Models Shape Markets. Cambridge, MA: MIT Press.
Mandelbrot, Benoit. 1966. “Forecast of Future Prices, Unbiased Markets, and Martingale Models.” Journal of Business 39: 242–255.
Martin, Randy. 2002. Financialization of Daily Life. Philadelphia, PA: Temple University Press.
Maynard Smith, John. 1982. Evolution and the Theory of Games. Cambridge: Cambridge University Press.
Maynard Smith, John and George Price. 1973. “The Logic of Animal Conflict.” Nature 246: 15–18. https://doi.org/10.1038/246015a0.
Milgrom, Paul. 2011. “Critical Issues in the Practice of Market Design.” Economic Inquiry 49, no. 2: 311–320. https://doi.org/10.1111/j.1465-7295.2010.00357.x.
Muth, John. 1961. “Rational Expectations and the Theory of Price Movements.” Econometrica 29, no. 3: 315–335. https://doi.org/10.2307/1909635.
Nelson, Richard and Sidney Winter. 1982. An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press.
Palmer, Richard, W. Brian Arthur, John Holland, and Blake LeBaron. 1999. “An Artificial Stock Market.” Artificial Life and Robotics 3: 27–31. https://doi.org/10.1007/BF02481484.
Piketty, Thomas. 2013. Capital in the Twenty-First Century. Cambridge, MA: Harvard University Press.
Ramsey, Frank. 1999 [1926]. “Truth and Probability.” In The Foundations of Mathematics and Other Logical Essays, edited by R.B. Braithwaite, with a preface by G.E. Moore. London: Kegan Paul, Trench, Trubner & Co.
Romer, Paul. 1990. “Endogenous Technological Change.” Journal of Political Economy 98, no. 5: S71–S102. https://doi.org/10.1086/261725.
Samuelson, Larry. 1997. Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press.
Samuelson, Paul. 1965. “Proof that Properly Anticipated Prices Fluctuate Randomly.” Industrial Management Review 6: 41–49. https://doi.org/10.1142/9789814566926_0002.
Savage, Leonard. 1951. “The Theory of Statistical Decision.” Journal of the American Statistical Association 46, no. 253: 55–67. https://doi.org/10.1080/01621459.1951.10500768.
Savage, Leonard. 1954. The Foundations of Statistics. New York: Dover.
Schumpeter, Joseph. 1934. The Theory of Economic Development: An Inquiry into Profits, Capital, Credit, Interest, and the Business Cycle. New Brunswick, NJ: Transaction Books.
Schumpeter, Joseph. 1994 [1942]. Capitalism, Socialism and Democracy. London: Routledge.
Shiller, Robert. 1990. Market Volatility. Cambridge, MA: MIT Press.
Shiller, Robert. 2000. Irrational Exuberance. Princeton, NJ: Princeton University Press.
Simon, Herbert. 1957. Models of Man. New York: John Wiley.
Skyrms, Brian. 1996. The Evolution of the Social Contract. Cambridge: Cambridge University Press.
Skyrms, Brian. 2004. The Stag Hunt and the Evolution of Social Structure. Cambridge: Cambridge University Press.
Stiglitz, Joseph. 2012. The Price of Inequality: How Today’s Divided Society Endangers Our Future. New York: Norton & Company.
Stiglitz, Joseph. 2016. “Inequality and Economic Growth.” The Political Quarterly 86, no. 1: 134–155. https://doi.org/10.1111/1467-923X.12237.
Sugden, Robert. 1986. The Economics of Rights, Cooperation, and Welfare. New York: Blackwell.
Sugden, Robert. 1989. “Spontaneous Order.” The Journal of Economic Perspectives 3, no. 4: 85–97. https://doi.org/10.1257/jep.3.4.85.
Von Neumann, John and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Weibull, Jörgen. 1995. Evolutionary Game Theory. Cambridge, MA: MIT Press.
Young, Peyton. 1997. “The Economics of Convention.” The Journal of Economic Perspectives 10, no. 2: 105–122. https://doi.org/10.1257/jep.10.2.105.
Young, Peyton. 1998. Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton, NJ: Princeton University Press.
10 Thinking about Google Search As #DigitalColonialism Joshua Adams
For many Americans, using search engines has become a ubiquitous part of digital life. According to a 2012 Pew Research Center study, 54 percent of Americans report using a search engine at least once a day (Brenner et al. 2012); 91 percent of search engine users say they always or most of the time find the information they are seeking when they use search engines. And of the several search engines available online, there isn’t one as popular as Google Search – 83 percent of users named Google Search as their preferred search engine (ibid.). If you use the internet, it is likely that you have experience with the usefulness of “googling it.” Most people perceive search engines as vast, ever-expanding digital libraries where they can find the most relevant and useful information in an instant. Search engines dramatically lower the barrier to retrieving information and make the virtually infinite world wide web more compartmentalized and accessible. However, to see Google Search as a public resource that helps us find important, relevant, credible, popular, and accurate information obscures Google’s place as a multinational ad broker with tremendous power in defining these concepts. Because of the contemporary digital ad-centered business model that links profit to engagement through data and metrics, the design and outcome of tech privileges dominant narratives about the world. And in a world where dominant narratives can – and often do – have oppressive effects on marginalized groups, digital tools like search engines can reinforce current and historical inequalities. Digital technology provides new ways in which powerful individuals and corporations from elite populations can essentially “own” culture (in other words, the digital representation of a specific culture or its individual aspects) through the design of search engines and the practice of search engine optimization. It is with these ideas in mind that we need to think about how Google’s search engine can unknowingly promote colonial ways of ownership, or what scholars at the intersection of human rights, data science and technology call “digital colonialism.”
Digital Colonialism through Search Engine Optimization Avila defines digital colonialism as “the new deployment of a quasi-imperial power over a vast number of people, without their explicit consent, manifested in rules, designs, languages, cultures and belief systems by a vastly dominant power” (Avila 2018). Artist Morehshin Allahyari uses the term to describe the tendency for information technologies to be deployed in ways that reproduce colonial power relations (News Museum, n.d.). Nick Couldry and Ulises Mejias’s (2019) concept of “data colonialism” – the colonial process of transmuting every aspect of life into data that is useful for creating profit and eliminating barriers to extraction – is useful to think about as well. However, while data colonialism points to the extraction and commodification of data, digital colonialism is a more useful framework for this chapter, as it describes the processes in which individuals or corporations can use digital tools to colonize culture and then appropriate it for their own economic ends. To illustrate this point, let’s look at the Google search results for “ubuntu.” Ubuntu, the philosophy, is a Zulu word from the expression Umuntu ngumuntu ngabantu, roughly translated as “a person is a person because of/by/through other people” (Ngcoya 2015, 253). Mogobe Ramose (1999, 53) argues that ubuntu places the human being at the beginning, center and end of all ethical considerations. Mvuselelo Ngcoya writes that ubuntu:

stresses the importance of community, altruism, solidarity, sharing and caring. This worldview advocates a profound sense of interdependence and emphasizes that our true human potential can only be realized in partnership with others. It censures the obscenity of greed and materialism and the insanity of the idea of a rugged, sovereign individual. Instead, ubuntu advocates respect, reciprocity, hospitality, and connectedness as providing the ethical foundation of a just society. (2015, 253)

After clearing the search engine history and cookies from my browser, I did a Google search for “Ubuntu.” The top result is for Ubuntu, an “open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.” It is a product of Canonical, a software company whose founder and CEO is South African entrepreneur Mark Shuttleworth (Canonical n.d.a). On Ubuntu’s site, the company describes ubuntu’s “story” as:

Ubuntu is an ancient African word meaning “humanity to others.” It is often described as reminding us that “I am what I am because of who we all are.” We bring the spirit of Ubuntu to the world of computers
and software. The Ubuntu distribution represents the best of what the world’s software community has shared with the world. (Canonical n.d.b)
The result for the Wikipedia page on “ubuntu” lists it as “a Linux distribution based on Debian mostly composed of free and open-source software” (Wikipedia 2019). On this Wikipedia page, only one of the 234 references listed links to the original meaning of the word, and that single reference is to the aforementioned “The story of Ubuntu.” On page four of the search results, Urban Dictionary defines the term as “an ancient african word, meaning ‘I can’t configure Debian’” (Urban Dictionary, n.d.), and a reference to the original meaning of the word does not appear until page eight, with an article in the Guardian entitled “What does ubuntu really mean?” (Ifejika 2006). When I googled the term “ubuntu” on September 19, 2019, Google Search’s autosuggestions rendered long-tail phrases such as “Ubuntu download”, “Ubuntu server”, “Ubuntu 18.04”, “Ubuntu 17.10”, and “Ubuntu Linux.” The long-tail phrase “Ubuntu meaning” was the sixth suggestion. If we accept prevailing ideas of Google Search as a fair, democratic, objective, neutral, and apolitical platform that gives users the most important, relevant, credible, popular, and accurate information, we would also have to accept the idea that the most important, relevant, credible, and accurate thing about the word “ubuntu” is that it is the name of an operating system, and that the origin and meaning of ubuntu is approximately the sixth most important, relevant, credible, and accurate thing about it. Regardless of how you assess Canonical and Ubuntu’s social and economic mission, the fact remains that this tech company ostensibly controls the digital representation of a concept fundamental to many South African, African and African diasporic philosophical and spiritual traditions. And because search optimization is a key component of any tech company’s marketing strategy, it is in its economic interest to do just that.
Ubuntu as “Terra Nullius” We do not have to ascribe nefarious intent to Canonical to acknowledge that guiding users to its site and products (as opposed to resources from African historians, writers, et cetera explaining the concept of ubuntu) is in its business interest. Between 71 percent and 92 percent of search traffic clicks on Google come from the first page of results (Shelton 2017), and 72 percent of global advertising spending passes through Google and Facebook (Waters 2016). Steering, curating, and controlling “ubuntu” is therefore important to the company’s bottom line. This is why it is necessary to think about how search engines reinforce colonial ideas about ownership. Through a colonial gaze, resources, either literal or cultural, can be claimed. Search engine optimization promotes extractive processes where a cultural item can be appropriated as a
marketing tool and reframed in the digital sphere without input from or investment in the culture from which it came. In this process, “ubuntu” is taken and redefined as Canonical’s product, decontextualized from its African meaning – the philosophy of a reciprocal process of mutual recognition where equality, fairness and justice precede rights (Ngcoya 2015, 254). Ubuntu exists outside of the digital realm, but online, “ubuntu” exists as a terra nullius – an uninhabited space, a resource waiting to be claimed. This reflects a key aspect of colonial logic: it renders the ethical questions that arise from “taking” obsolete once the colonizer reaches the point of “owning.” While “taking” is unethical, once you “own,” it is your right to own. The colonizer makes the unethical behavior of taking invisible, only to make ethics visible once they have moved into ownership. To paraphrase, it says: “Maybe I was wrong to take it, but I’m here now, so it would be wrong to take it from me.” Lewis Gordon writes that:

A peculiar development in the modern world, however, is the emergence of guests who transform themselves into settlers – guests who not only stay, but also assert a right to the future of the land. In effect, such guests affect belonging by rendering the hosts homeless, paradoxically, calling it home. (2014, 63)

As a business and product, Canonical and Ubuntu operate with a capitalistic and colonial epistemology and ontology – where economic rewards nudge the individual to place more value on rights and personal gain than on ethics and the collective good, and at bare minimum, to weigh these things in a cost–benefit analysis where rights are benefits and ethics are a cost (in its most pernicious form, capitalism understands ethics as an externality). Shuttleworth and Canonical have the “right” to use ubuntu as the name of their product without needing the consent (even though, in the context of capitalism, consent is still embedded in and manifests itself through the concept of rights versus ethics) of the community from which the philosophy arises. Where ubuntu starts with people and goes forward, neoliberalism starts with profit and works backward. Sharing is inherent in ubuntu; owning is vital for Canonical and Ubuntu. Where Google Search insists on vertical hierarchy, ubuntu sees existence as horizontal. Though Canonical understands its operating system as a digital expression of ubuntu, its business interest in controlling the digital representation of ubuntu runs counter to that philosophy. When arguments over claims of cultural appropriation arise, the most common rebuttals are questions about “who owns culture?” Without being skeptical of the intent behind this discourse, these calls to acknowledge the complexity of debates on cultural appropriation hide the fact that we ostensibly have decided who gets to own culture – corporations. The fact that a tech company has more power to shape the common user’s relationship to “ubuntu” than the South African, African, and Afro-descendant people who practice it is
not a static phenomenon, but the product of both contemporary and historical belief and value systems that make it so.
Who “Owns” Ubuntu?

If Google Search is ultimately a democratic tool, who from the African diaspora can petition to control their cultural image on digital platforms? If they have “legal” standing to make this petition, where would it be directed? To Google or Canonical? Can they make it in their own language, or would the petition need to be filed in English, the dominant language of the internet? Can only those who “own” the copyright to the word “ubuntu” make this petition? How does “who owns culture” discourse arise in relation to the cultures of people of color and indigenous peoples, particularly Afro-descendant people, in ways it does not for white, Western, and European culture? What if typing “Celtic” into Google rendered first results relating to a new app, or clothing line, or alcoholic beverage (which could unintentionally be an implicit or explicit allusion to the negative stereotype of the Irish drunkard)? How can the libertarian impulses of the tech world continue a form of digital colonialism? Are tech companies Africa’s new colonizers (Pilling 2019)? And how does our increasingly open and virtually endless digital sphere muddy this discourse? Wrestling with these questions can help us think critically about how the digital representation of marginalized groups is increasingly being dictated by powerful and influential companies. In her book Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble discusses how society’s biases about women and people of color were reproduced in search engine results. For example, in 2011, “Sugary Black Pussy” was the top hit when you searched “black girls.” This reflects both the porn industry’s imperative to influence search results associated with women and girls, and historical narratives painting Black women as hyper-sexual objects. Ramesh Srinivasan’s Whose Global Village? (2017) discusses how the prevailing idea of the internet as a utopian, global democracy blinds us to the ways it marginalizes indigenous communities (but also how these communities use tech to promote their culture and combat their marginalization). Though seen as ubiquitous in places like the US, global internet connectivity is far from equal (Harrison 2019). Most people in the world do not have smartphones and do not use the internet every day. So how do prevailing ideas about the democratic potential of the internet and search engines blind us to the ways these tools privilege the values, beliefs, ideologies, and ontologies of the Western world? These are serious topics that deserve thoughtful study and research; they show how digital tools can reinforce oppressive social structures, and the danger of indigenous epistemologies being marshaled for colonizing ends (Ngcoya 2015, 258). Noble argues that we must first trouble the notion of Google as a public resource by making more visible its existence as a private information enclosure where information is a commodity, and by understanding the
commercial interests that overdetermine what we can find online (Noble 2018, 50). Rumman Chowdhury says that “the key to preventing a recolonization of the global south is citizen empowerment, good governance, and horizontal systems of power” (Chowdhury 2019). Ngcoya argues that many multinational corporations have employed ubuntu to improve their bottom lines. A radical and emancipatory reading of ubuntu would not only disavow such appropriations but also go further, requiring an examination of South Africa’s colonial and apartheid foundations “that turned Africans into strangers to one another and strangers in their own land” and a strong critique of exploitative neoliberal economic relations, both with concern for the interests of others at their core (Ngcoya 2015, 258). To be clear, Google Search is not “bad.” It is a helpful tool and an integral part of many internet users’ digital experience, and aside from university and local library catalog searches, Google Search was the primary search engine used in the research for this chapter. And though ubuntu can be invoked within anti-capitalist critiques, as a philosophy, it is not inherently anti-competition. Vuyisile Theophilus Msila writes:

In the African village, the people celebrated the best wrestlers, the best stick fighters, the best cooks, the best runners and so on. Yet, even these excellent villagers were celebrated within the context of their village. Their excellence was brilliance ascribed to the village. Therefore, ubuntu is not opposed to competition when competition promotes the community values and excellence. The victory may be an individual’s but the glory is shared by the entire community. (2015, 4)

However, where Ubuntu diverges from ubuntu is not in any failure to promote community values, but in Canonical’s economic incentive to control the digital representation of ubuntu. When we think of how this incentive is reified and facilitated through search engine optimization, we can see how Google Search is not merely an easy-to-use digital tool for accessing information on the internet, but a digital means of colonizing. Google Search – its design as a search engine, its immense computational power, its preeminence and its cultural capital in appearing to be a grand public service – incentivizes a process of not merely responding to culture, but absorbing and reshaping culture to reach optimal monetization and revenue generation (Vaidhyanathan 2011, 203). Its colonial logic inspires rigid forms of individualism (as it pertains to ownership), competition and seeing others in terms of production (Msila 2015, 2) while using collectivity (a sense of interconnectedness) for the purposes of “collectivity” (the active processes of collecting and digitizing in order to commodify). We should also consider how digital colonialism intersects with and augments processes of technological redlining (Noble 2018), automated inequality (Eubanks 2017), surveillance capitalism (Zuboff 2019), data colonialism, and the abstracting force of the commodity –
transforming life processes into things of value – that is a fundamental characteristic of capitalism (Postone 1993). From here, we can see that thinking critically about the search results for “Ubuntu” reveals how free market beliefs and values are encoded into search engines, and how users are, often unknowingly, accepting these value systems, both corporate and colonial. It is an example of how Western political, economic, and ideological visions are encoded into the architecture of the internet. McLuhan wrote that “environments are not passive wrappings, but are, rather, active processes which are invisible” (1967, 68). Though this chapter would problematize McLuhan’s concept of the “global village,” his insight into how design influences thinking helps us understand how we can engage in colonial processes without knowing it.
References
Avila, Renata. 2018. “Resisting Digital Colonialism.” Internet Health Report 2018, April. https://internethealthreport.org/2018/resisting-digital-colonialism.
Banks, James and Cherry McGee Banks (eds). 1989. Multicultural Education: Issues and Perspectives. Needham Heights, MA: Allyn & Bacon.
Brenner, Joanna, Kristen Purcell, and Lee Rainie. 2012. “Search Engine Use 2012.” Pew Research Center. https://www.pewinternet.org/2012/03/09/search-engine-use-2012.
Canonical Ltd. n.d.a. Canonical’s “About” page. https://canonical.com/about.
Canonical Ltd. n.d.b. “The Story of Ubuntu.” https://ubuntu.com/about.
Chowdhury, Rumman. 2019. Transcript of a talk on “Algorithmic Colonialism” with the IntersectTO community on August 24, 2019, in Toronto.
Couldry, Nick and Ulises Mejias. 2019. The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Stanford, CA: Stanford University Press.
Eubanks, Virginia. 2017. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
Gordon, Lewis. 2014. “Justice Otherwise: Thoughts on Ubuntu.” In Ubuntu: Curating the Archive, edited by Leonard Praeg and Siphokazi Magadla, 10–26. Scottsville: University of KwaZulu-Natal Press.
Harrison, Chris. 2019. “Internet Maps: World Connection Density.” http://www.chrisharrison.net/index.php/Visualizations/InternetMap.
Ifejika, Nkem. 2006. “What Does Ubuntu Really Mean?” Guardian, 29 September. https://www.theguardian.com/theguardian/2006/sep/29/features11.g2.
Lederach, John. 1995. Preparing for Peace: Conflict Transformation across Cultures. Syracuse: Syracuse University Press.
McLuhan, Marshall. 1967. The Medium is the Massage. Illustrated by Quentin Fiore. New York: Bantam Books.
Msila, Vuyisile Theophilus. 2015. Ubuntu: Shaping the Current Workplace with (African) Wisdom. Randburg: KR Publishing.
New Museum. n.d. “Morehshin Allahyari: Physical Tactics for Digital Colonialism.” https://www.newmuseum.org/exhibitions/view/morehshin-allahyari-physical-tactics-for-digital-colonialism.
Ngcoya, Mvuselelo. 2015. “Ubuntu: Towards an Emancipatory Cosmopolitanism?” International Political Sociology 9: 248–262.
Noble, Safiya. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Pilling, David. 2019. “Are Tech Companies Africa’s New Colonialists?” Financial Times, July 4. https://www.ft.com/content/4625d9b8-9c16-11e9-b8ce-8b459ed04726.
Postone, Moishe. 1993. Time, Labor, and Social Domination: A Reinterpretation of Marx’s Critical Theory. Cambridge: Cambridge University Press.
Ramose, Mogobe B. 1999. African Philosophy through Ubuntu. Harare: Mond Books.
Shelton, Kelly. 2017. “The Value of Search Results Rankings.” Forbes, October 30. https://www.forbes.com/sites/forbesagencycouncil/2017/10/30/the-value-of-search-results-rankings/#70e55d6844d3.
Srinivasan, Ramesh. 2017. Whose Global Village? Rethinking How Technology Shapes Our World. New York: New York University Press.
Urban Dictionary. n.d. Listed as “Top Definition.” Post written by “oSuperDaveo.” https://www.urbandictionary.com/define.php?term=ubuntu.
Vaidhyanathan, Siva. 2011. The Googlization of Everything (And Why We Should Worry). Berkeley and Los Angeles: University of California Press.
Waters, Richard. 2016. “Four Days That Shook the Digital Ad World.” Financial Times, July 27. https://www.ft.com/content/a7b36494-5546-11e6-9664-e0bdc13c3bef.
Wikipedia. 2019. “Ubuntu.” https://en.wikipedia.org/wiki/Ubuntu.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.
Part 4
The Ethos: Thinking, Computing, and Ethics
11 The Light of Morality and the Light of the Machine François-David Sebbah Translated by Aengus Daly
The Heideggerian approach to technology (see especially Heidegger 1977) is the subject of a great deal of commentary. It has been severely criticized and there are many who have long since thought that we should “move on to something else.” That being said, it seems to me important not to forget Heidegger’s main proposition, namely, that “technology” does not so much designate a region of being that we can specify (such types of entities1 that would be considered “technological” due to their having specific characteristics while others, for example, would be considered “natural”) but is rather a mode of revealing what is: basically, every being is technological when it appears in a certain light. Furthermore, technological is the name for a “light,” a mode of appearance that “makes” that which shows itself to us “appear” in a particular way and that thus constrains and informs how we relate to what appears. We can certainly dispute how Heidegger characterizes this way or mode of revelation (this light). We can ask, for example, if he does justice to all the objects and devices that are described as technological in ordinary language, or only to some of them; or to none of them; or to an aspect of what they are but not to all of what they are; or to nothing about them; or, further still, to the entirety of what they are. All of these questions are legitimate. Let us nonetheless decide that calling technological a way of letting what is appear is not without justification. What way? A “making available” (in the precise sense of “having at one’s disposal”). In the light of technology, every “thing” only appears insofar as it is made available in the sense of being at one’s disposal (for). But its being at our disposal is itself, as a mode of appearance, not something that is subject to our decision or initiative. It is not just that what appears in the light of technology is “serviceable for,” but we too are at the disposal of technology in this sense – even if human beings themselves have produced the technological devices at issue. I do not think we are justified in believing that this Heideggerian decision captures all that technology is. But I do think that such a description does take sufficient account of a significant aspect of our experience of what we call technology today. Accordingly, it should be accepted as a relevant approach: technology always makes appear by having always already commuted “presence” to “presence as available for [Bestand] …,” for us who
nonetheless do not decide on this way of “coming to presence” in the so-called era of the Internet of Things, AI, Big Data, intelligent machines and increasingly “autonomous” robots or even “social” robots (to refer far too superficially but not falsely to a highly visible aspect of our contemporary situation). This characterization is undoubtedly very general, but it holds in general of an average human life today; it is the generality that belongs to the experience of technology in human life. My body is made available by the medicine of the “quantified self,” information is made available to me by numerous digital tools, the presence of the Other2 is made available to me remotely by telephone or by Skype, and of course by so many social networks. Academia.edu – which I did not want to give money to – makes available what I thought was “intimate,” my readings of documents, and it informs me of those of others (and promises to divulge other “secrets” if I give it money), and, generally speaking, my “brain time” is made available by the information, communication and entertainment industries. When I describe my life very briefly from this perspective, I also describe a very average and ordinary life today. We must undoubtedly also remember that algorithms are written by human participants, that some communities are devoted to the production of open tools that give participants the means to intervene and participate in technological invention as the invention of sense (I open up and intend new potentials, new possibilities of experience, for example interactive reading, ways of forming communities in social networks, et cetera) – on various levels (from intervening in the code to the simple enlightened use of digital tools). And, of course, some human participants are more active and inventive than others in the deploying of technology, technology thus perceived as a place of meaning and invention – and encouraging this trend is undoubtedly useful and positive, encouraging from this point of view, as is often said today, the “empowerment of all human subjects.” But I am not sure if such considerations are on the same plane and impact the truth, albeit partial, of the Heideggerian description of technology (at least as it is presented here). If it is consistent, this latter wants to give access to a disclosure that underlies and is presupposed by the scene on which an autonomous desiring subject is established, and it questions this subject’s capacity to get a grip on things, to “make sense” and to be confronted with choices (or maybe it thinks it sees that decisions are actually made behind its back, that it is transformed into a “cog” of a mega-machine, et cetera). We can even say that technology is one of the occasions on which we perceive that we are not primarily such a “subject” (even if we are perhaps it as well or are justified in wanting to be; let us bracket such questions … ). However, it turns out that an important, or at any rate very visible, part of what are today described as “ethical” reflections on technology is chiefly concerned with the possible loss of the autonomy of the subjects that we are with regard to our moral decisions, precisely because we could delegate our subjective capacities (for initiative, for discernment, especially for choice) to
machines. From such a perspective, it is “best” that we take an interest in those new so-called “autonomous agents” that some machines are, and in the question of how to relate to them and to their “autonomy” (they are going to be able to make “decisions” in our place, as in the well-known example of self-driving cars). Yet they are nevertheless (and undoubtedly in contradiction to this) thought of as regulable and as being regulated a priori. Within this framework, it is a matter of deciding on good algorithms that will have to be implemented in order that these devices act in an “ethical way” in the different kinds of anticipated situations. I mentioned that the Heideggerian conception of technology is a conception whose radicality consists in its suggestion that we read technology as a mode of revealing, as a light, and not as a region of beings that appear in a supposedly neutral and unique light. I will deliberately use the same distinction, or at any rate a similar distinction, with regard to “morality.” The dominant currents in debate in the English-speaking context are essentially concerned with “moral judgments” and they use thought experiments to study the formation of judgments in limit cases of sacrificial choice. These currents accordingly have, right from the outset, restricted the horizon of reflection on technology and ethics to the alternative “deontology”/“utilitarianism” (or between “principlism” and “consequentialism” – from which, strictly speaking, they must be distinguished). And, truly speaking, even if the great variety of life situations obviously does not raise such dilemmas, I have nothing against the idea that on a certain level of description morality concerns a subject who makes judgments about situations, about their own actions or those of others and, in so doing, evaluates what is “right or wrong,” “just or unjust.” This all implies taking subjects, if not as “free” then at least as in some way or other “responsible” (for their actions) in the sense of being accountable for what they could have done otherwise. I also do not have any objection to the idea that thought experiments in the form of “dilemmas” are useful for elucidating these judgments. I can even specify that on this level of description I am a fervent adherent of the “minimal ethics” that Ruwen Ogien (see especially 2007) developed in France from the quarry of Anglo-Saxon ethics. This minimal ethics rightly invites us to purge our moral evaluations of any “substantial content” that is not rationally founded. Yes, rationally, I have nothing against the idea that the harm done to another is alone morally wrong and that no substantial moral corpus can claim, beyond this sole limit, to govern my life or other human lives (certainly not, for example, to save my soul from the perspective of one or another religion that morally condemns suicide, and not even “for my own good”: if, for example, I endanger my health through engaging in sado-masochistic practices or if, to a lesser degree, I debase my “human dignity” by indulging in some practice or another that does not directly harm others). Of course, in such contexts, the problem lies precisely in identifying where the alleged harm done to others begins. This is sometimes obvious, sometimes far from being so …
The Light of Levinas

In any case, to put it briefly, directly and far too casually: it turns out that the “modern liberal individual” that I am is also and from the outset “Levinasian,” at least in the sense that I read in Levinas the fairest, most pertinent and most robust description of moral and ethical experience, at least in its core – even in its aspects that are excessive and maybe “unbearable.” According to this description, ethics is, first of all, an experience and even an ordeal (épreuve) that we can try to characterize, prior to its being a set of principles or of rules to be applied (and to which it is undoubtedly never really reducible). Some may see this as a painful contradiction. Levinasian morality elects the ordeal of the face of the Other and designates the “subject” of such an ordeal as guilty “before everyone and for everyone, and… more than the others.”3 When summarized so succinctly and abstracted from the rigorous phenomenological descriptions offered in Levinas’s work and from the context of the philosophical discussions of which it is a part, such an ethics may appear arbitrary and heavily “substantial,” at worst as a vague substitute for Judeo-Christian morality that blames the subject and overturns the affirmative force of life by holding the subject to be always guilty. In short, it seems to be the prime example of what Ogien’s “minimal morality” invites us to expurgate. But even if Levinas’s ethics, like many others, is sometimes reduced to the status of a buzzword, an ideology, a pedestal for values rigidified in prejudice, in reality it has nothing in common with this caricature. Raised to the height where it demands to be, it claims to describe the experience that reveals access to ethical signification as such, that is, the experience that makes the word “ethics” have a sense, a precise sense, for us. It calls ethics a mode of revelation, a way of letting appear. Moreover, it is a very vertiginous experience because it is a mode of appearance that suspends the mode of appearance of “common law” where all things appear in the light of the World, that is to say, are re-presented before us. But the “face” of the other can never be enclosed in its form or in the light of the World and, in the same movement, the attempt to persevere in my individuality – an individuality that is itself part of the World – is suspended, put into question in its shameless indifference by the Other, the Other who gives himself as nothing other than this putting into question itself. Epistemologically, the ethical experience – the ethical ordeal – overflows any mastery by an act of knowledge and, ontologically, it is the ordeal in which my conatus is put into question by the irruption of the Other. This putting into question has, in a sense, always already taken place, if I am only born as human subjectivity through this ordeal (if this ordeal alone makes me “subjective” and “human”). The description of this ordeal, the justification of the description of this ordeal and its “decisive” status for a human existence, the richness of its variations and its implications – Levinas depicts all this on thousands of pages4 – it is undoubtedly impossible to
access this without yourself having gone some of the distance with him, something I cannot do with you here today. In any case, the core of this ordeal can well be briefly summarized as follows: the breaking-up of the mastery of the visible from within the visible and as the Other (the face as “counter-phenomenon”) by the putting into question of my attempt to persevere in my being; a traumatic destabilization that is nonetheless the source of all signification in the strong sense of the term for the “totality” of human actions (and undoubtedly for the totality of our experience of the world). This is not the place – even though this is the most important issue – for proposing a rigorous justification of the choice of the Levinasian description of the ethical experience over other possible “candidates” for the role. But it is already important and it is already decisive to sense that this reflection is situated on a different level. The entire arena in which the consequentialist and the principlist positions clash presupposes a decision already made; it presupposes that this arena is established in a certain light: the human being appears as a subject who is endowed with internal states, emotions and cognitions and who makes evaluative judgments based on principles, all of which presupposes that these subjects have the margin of initiative and of freedom necessary for certain actions to be morally preferable to others. For what Levinas calls “ethics” is an experience which, in a sense, reveals all things otherwise; it is, in a certain sense, a mode of revelation, the only one that opens the dimension of sense and that is both presupposed and hidden by the position of a subject endowed with faculties (sensibility, rationality, “freedom” of choice) who relates to the world by, for example, making judgments about it. At the level of description envisaged here, it is simply meaningless to view ethics as the place of evaluative judgements, be they consequentialist or principlist, that are susceptible of being arranged in a hierarchy from a moral perspective on action (be the actions mine or those of others). Without any doubt, a good code of practice for humanity cannot be deduced from a description of ethics such as that provided by Levinas, and it is not enough to apply such a description so as to know what to do in a given situation. For example, we could hardly use it to deduce in an unequivocal way a battery of algorithms to be implemented in self-driving cars! That last demand, even if it is socially important and commands floods of financing for the applied ethics of robotics, is simply senseless when the question of ethics is raised on the most “radical” level that I am indicating here – and in asking it on this level, I hope that I can convince you that it is not without lessons of another kind for living today and also for living with machines.
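To fix ideas about what such a “battery of algorithms” amounts to in practice, a minimal sketch may help; it is purely illustrative, and every rule, weight, and scenario in it is invented here rather than drawn from any actual system or from Levinas. It shows the form such a priori moral regulation takes – a fixed calculation over anticipated outcomes – which is precisely the form that, on the level of description at issue in this chapter, cannot capture the ethical ordeal:

from dataclasses import dataclass

@dataclass
class Outcome:
    """One anticipated consequence of a possible maneuver."""
    description: str
    pedestrians_harmed: int
    passengers_harmed: int

def harm_score(outcome: Outcome) -> int:
    # The "moral" weighting is fixed in advance by the programmers;
    # the numbers here are arbitrary placeholders, not a real standard.
    return 2 * outcome.pedestrians_harmed + outcome.passengers_harmed

def decide(outcomes: list) -> Outcome:
    # The machine's "decision" is exhausted by a calculation: it simply
    # selects the anticipated outcome with the lowest harm score.
    return min(outcomes, key=harm_score)

swerve = Outcome("swerve into the barrier", pedestrians_harmed=0, passengers_harmed=1)
brake = Outcome("brake in lane", pedestrians_harmed=1, passengers_harmed=0)
print(decide([swerve, brake]).description)

Whatever refinements such a program receives, it remains a making available by calculation; the argument of this chapter is that the ethical ordeal in Levinas’s sense is not of this order at all.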
The Light of Technics

It might seem as if this chapter started with a very long detour… And yet in this way we have arrived at the heart of the matter itself. I do not want to ask if there are “moral machines,” if “machines can be moral” and, if so,
“how to make them moral,” or if only human beings are capable of being moral, “machines” being neutral, and so on. I want to pose a yet more anterior question. If we call “technology” this “light,” this mode of disclosure, and “ethics” this other mode of disclosure (that paradoxically ruptures all light, let us say all lights in the World), then what is the relationship between these two lights? Is it one of absolute distance or of anxious and inseparable proximity? Or of mutual indifference, of unavoidable and merciless war … or of a possible solidarity? It must be remembered, even though such a statement is undoubtedly too perfunctory, that calculating means ordering effects, making them available, even if we can integrate the random and therefore the unpredictable into calculations and their effects. The calculative power produces effects ordered by the premises and operations implemented: and today, this power of aggregating simple operations is unprecedented. When these calculations are incorporated into the “physical world,” they produce effects – such as when so-called autonomous robots are produced (there are drones and social robots and many other machines – I am speaking very generally here and neglecting for the moment these differences) – and the digital age makes available an unprecedented amount of data. Without a doubt, it will always be correct to emphasize what is caricatural in the mere denunciation of mastery and domination, of which the human would very often find itself the victim instead of the beneficiary. In addition to the obvious increase in well-being that this making available also produces through calculation – and the legitimate concern to accompany it with an empowerment of human participants – we must never forget that there can be, among the effects of experience thus produced, for example, an “aesthetic disinterest.” I am not just my slave’s slave when an Amazon drone delivers a consumer product to my door, I am also the one who is capable of enjoying an unprecedented aesthetic experience that is produced thanks to the new digital tools (for instance, interactivity or augmented reading), thanks to a specific device (a device perhaps delivered by the drone). All this is true, but does not call into question the making available by calculation. Or rather, it should be noted that all these effects that produce experiences – and which are not experiences of domination but of disinterest and/or unpredictability, et cetera – are themselves produced starting from the “making available by calculation.” This is the paradox of a disinterested effect produced by a making available. This cannot be forgotten or effaced, or rather, this must not be (because we constantly tend to forget this since the technological artifact effaces itself in the effect it makes available). Thus, even if “making available” is not necessarily and exclusively the domination of one being by another – even if it can be the scene of a free accord between entities, for example, aesthetically – it nonetheless only constitutes such a possible scene by always already constituting it from the precondition of a making available of being as such and of all entities.
However, in many respects, the ethical perspective as described above does not propose intervening starting from a precondition or presupposition of such a scene in order to direct the relations between entities otherwise; rather it does not enter this scene at all: it constitutes the trembling of this entire scene, its being put into question. Levinas’s rare comments on technology (see especially 1990), which I will not detail here, are largely oriented towards giving a positive connotation to technology precisely because it de-sacralizes the entity (l’étant), bringing it back to the condition of being available and serviceable and being nothing but this condition. Technology, according to Levinas, makes the entity appear as nothing more than present as available, thus attesting that “significance,” if there be such, cannot have its source in it. Basically, if technology is a mode of revealing, the mode which reveals being as effaced in the entity and the entity as nothing but the “serviceable for,” then from a certain point of view the abovementioned ethics as described by Levinas is in no way inscribed in being, in the light that manifests the truth of being. One should even be grateful to technology for attesting the truth of being with unavoidable evidence. Ethics is somewhere other than in technology because it is played out somewhere other than in being, the being of which technology is the truth. Technology “de-sacralizes” being, it sobers us up from any temptation to see the entity as being anything more than its being “serviceable for.” I only mention by way of allusion and in passing that the divergence between Heidegger and Levinas is played out concerning these questions (and perhaps even in a fundamental rather than a regional way). Heidegger’s entire endeavour is, in a sense, that of disclosing (déceler) another light that lets the truth of being appear as irreducible to constant presence. This latter is itself accomplished in the “entity as serviceable for,” in the entity as enveloped by technology and deployed as technology. This way of demanding an ontological difference that, among other things, preserves the being of the entity, preserves it as the “being of the entity,” that is, “against the entity” in the sense of protecting against its forgetting in and as entity, seems from the Levinasian perspective to be a way of absolutizing ontology, of not allowing it any exteriority. And if, in a sense, ethics is this exteriority itself – because being can only be put into question from exteriority and by the exteriority – then, according to Levinas, being – both in its truth of “being serviceable” as well as in the Heideggerian endeavour to preserve it from this reduction to “being serviceable” after having disclosed it – is the aethical itself (the “neutral,” Levinas says). In many respects Heidegger’s understanding of the God of the Jews and the Christians5 confirms this reading: God is accomplished as a “super entity,” the cause of all entities, the eminent constant presence and, in the same movement, as the great calculator, the instance (to put it in mechanistic terms!) that reckons up good and bad deeds and produces salvation or its refusal as the result. This interpretation of the word “God,” which is in many ways a misunderstanding of the meaning of this word, at least as
regards the Jewish tradition, is quite significant: it makes God the culmination of ontology and its closure on itself, ontology as ontotheology, that is to say, as the absolute of constant presence whose truth is being serviceable, is “technology” as a way of making available by calculation. For Levinas, the thinking of being that is irreducible to ontology (ontotheology), whatever its refinements, will always be a false exit and will persist in ontology; and to present “God” as the point of closure of ontotheology is the most radical misunderstanding if “God” is precisely a name for this exteriority to all entities and all being, a name for this “gap” in being, for the incalculable, which is irreducible to all descriptions in terms of the causality and calculation that are the very horizon of being. “God” is the name by which I experience and undergo the ordeal that being and its law do not constitute the absolute without exteriority but are opened up by a beyond. The word “God” means that God is not – and that there is nothing to know and nothing to believe in this ordeal (if, from a certain perspective, belief is still a mode of knowing, a way of adhering to, of positing an existence). And if this putting into question of ontotheology as expanded to a putting in question of all ontology, of all being, is, for Levinas, ethics itself, then this is because it always already resonates “in me” as the putting in question of my being in its attempt to persevere in its being. This putting into question occurs through the vulnerability of the face of the other before death, before their death, which is always already a call for help, an appeal that constitutes me as an impossible response (I cannot save them from death as such), that is to say, the indistinction of responsibility and guilt. According to Levinas, it is this appeal and its “impossible” response, if there be such, that implies that being insofar as it gives itself in terms of causes and effects or even as calculable is in a sense not the “totality,” or that it is the totality but it is always already broken. (And “there is such” as long as we have the experience of responsibility as guilt, as long as certain violent human gestures are at times suspended by the “without force” – the without physical force – of a face. But nothing guarantees that this ethical ordeal could be assured for all time: significance can collapse.)
Ethics and Machines

Some, perhaps many, will be irritated or distressed: does not such a description, even if it is introduced in a critical context, simply renew the classic Heideggerian description of technology, a description which is often considered to be abstract, ignorant of the diversity of techniques and technologies and “diabolizing”? Still others may think that these kinds of considerations are decidedly useless when it comes to deciding the real questions, such as what algorithms should be introduced into so-called “autonomous” or “self-driving” cars. But let us continue on this path. Ethics thus conceived implies the “detotalization” of being, and particularly a rupture of the regime of
calculation, but it does not at all imply that it is unrelated to totalization and calculation. While totally other to these, it could well be that it is only played out in them and nowhere else. Precisely because the ethical ordeal so understood does not delimit a particular ontological domain or regime (a being beyond the entity or even the regime of the “thing in itself” as distinct from the “phenomena”), it is, in a sense, not to be sought anywhere other than in the here and now or in the incognito of the beyond in the here and now. It cannot be stated often enough that, following Levinas, it is in being nothing but an entity that the face “perforates” being in the movement of its appearance – perforates “phenomenology.” It should also be recalled that it is not membership of the human species that guarantees that a being “faces,” is the event of a face (which would be to derive the ethical ordeal from an ontic or ontological characteristic) but, inversely, it is the fact of “facing,” of the event of a face (and especially of being sensible to questioning by a face) that renders “human” (in a completely renewed sense)! The Levinasian idea of ethics proposed here is compatible with those different approaches that posit that the human is technologically constituted and never free of its prostheses. The ethical intrigue in Levinas’s sense is played out between questioning vulnerability and the questioned, put into question, conatus – and this simultaneously and without contradiction. This intrigue is determined in no way within or by being – for example, by the laws of the entity, causality and calculability – and it is not measured by anything belonging to it (by any of its characteristics), and nonetheless it is only played out within being, in the incarnate actions of feeding, helping, and so forth, between the entities – entity among entities, being “all” entity and being nothing of being – and thanks to entities. Because we are in the machine age, technological devices too – machines in particular and especially algorithms – cannot in any way cause anything or measure ethical action as ethical, but are the site of ethics, its medium – they are its only possible site and medium – because in truth ethics has never been tied to any ontological domain that “morality” is concerned to preserve, to safeguard and that would be by its nature uncontaminated by technology (such as “life,” “human life,” “human dignity,” the “freedom of the human subject,” et cetera). Ethics, the ordeal of the beyond being, is played out in all entities, without exclusion and in the contaminating entanglement that does not safeguard any “pure nature.” These reflections have sought to (1) show that technology and ethics can be considered as two kinds of “light” in the sense of two ways of “letting appear,” (2) argue that the Heideggerian description of technology as a mode of revelation and the Levinasian description of ethics as a mode of revelation are two robust candidates for describing these two lights, and (3) show that the relationship of one to the other is neither one of identity nor of simple antagonism, that it simultaneously implies a reciprocal exclusion and the irreducible incognito of the second (ethics) in the first (technology). Based on these reflections, it is tempting to pause for a moment to consider
the question of social robots, which are very often anthropomorphic, a number of which even have a “face,” more or less. Let us try to make Levinas speak, a little cavalierly, about this. What would he say? Or rather, what can we say about this issue based on his descriptions? Since it is not the human as an a priori ontological domain (like a class of being or of entities such as genera and species) that is decisive in ethics, nothing in Levinas prohibits us from thinking that entities – regardless of how they have been made and the materials that constitute them – can face, that is, be the event of a face. We know about Levinas’s embarrassment when he is asked about the “face of the animal” (Levinas 1988) but, truly speaking, nothing can rightly a priori exclude any being from the possibility of “facing,” of being the event of a face – neither animals, nor cyborgs, nor any robot. For the human subject who encounters one or the other of these beings either undergoes the ethical ordeal or does not. It will be rightly objected that if, for Levinas, the human face “faces,” is the event of a face, it is (1) because it is immediately the call for help of a life that can suffer and that wants to persevere in its attempt to live before unavoidable death, and (2) because we have the experience that the human face, which expresses emotions (particularly, but not only, in the gaze), constitutes, in an exemplary fashion, such an appeal or call. And the ethical ordeal and the proof of the ethical is the ordeal and experience of the call by a living being such as it is, that is, “ultimately” powerless before death. But nothing in Levinas invites us to decide ontologically and a priori what kind of being can produce this appeal: certainly, the human face (in the sense of what belongs to the biological human species) is the event of a face in an exemplary fashion – and Levinas describes this ordeal – but nothing, a priori and ontologically, prevents the presentation one day of the ordeal of other faces than the human. Certainly, on a certain level of description, we can understand the objection to the use of some types of social robots in retirement homes or hospitals: some of these robots encourage social interaction or even emotional investment and allow diminished human subjects to find themselves thus stimulated on these different planes – and isn’t this a way of lying to these human subjects by offering them simulations, semblances of the face? Is it “ethical,” even for the good of a human subject, to deceive it by means of a machine that simulates an “Other” (a machine that seems to “have presence” otherwise than as a machine, seems to experience, to feel emotions, and eventually “to address itself to…”)?6 Again, on a certain level of description, these questions are well founded – as long as we speak in terms of autonomous subjects, ends and means (technological means) and truth (as adequation of representation to that which is). But we have moved to a totally different level of description. Certainly the social robots referred to here are means intentionally designed for the aforementioned ends and we are rather inclined to judge, very reasonably, that they are extremely elaborate machines that mimic sensations, feelings, emotions and gestures (gestures that are not really so because they are not really addressed to… but
are programmed, even if they “emerge” and adapt themselves to situations and to the environment). But it should not be forgotten that, very precisely, the ordeal and proof of the Other, the very one that we have with the Levinasian description, as such and for what it is, absolutely exceeds knowledge, is situated on a totally different level. Thus Daniel Dennett could easily demonstrate in his remarks on zombies (see Dennett 1992) that we can never be sure we are dealing with the Other (in Dennett, this is implicitly understood as another living being that is referred to as “human” and resembles me), for it could be that, outside of the internally experiencing and feeling me, there are only semblances of human beings. But in a way Descartes answered him long ago when, leaning out his window and unsure whether he was dealing with men or automata wearing coats and hats, he nonetheless concluded: “Yet I judge them to be men” (cf. Descartes, the end of the “Second Meditation”). For here, the “I judge” must be heard as “I wager,” “I have faith,” “I have confidence.” Although always doubtful from the perspective of truth as the adequation of the representation to the being and the knowing that seeks this, the ordeal of the Other both befalls me and is a “yes,” an “assent” that is anterior to all knowledge: this ordeal is immediately ethical and does not make a detour by way of knowledge; the Other is never given as a knowledge. The counterfeiting and mimicking of this experience in order to increasingly ensnare and deceive is completely possible, thanks to ever more elaborate machines; it is even possible that “I” live in a vast machine that mechanically produces “zombies,” and I can never be absolutely certain of the contrary – but moral and ethical experience nonetheless remains, as it is completely and entirely the experience of the Other as we have described it, and this is played out on a totally different level and remains untouched as long as we do not claim to measure it by criteria that are absolutely heterogeneous to it and that do not overlap with it, namely those of being and knowledge. This Levinasian meditation on “machines” is decidedly unhelpful when it comes to knowing which algorithms should be installed in self-driving cars, and it also does not help in deciding if it is morally justifiable to deceive human beings “for their own good” with so-called “autonomous” machines, for example, with “anthropomorphs.” However, it would allow us to examine our contemporary relationship with machines totally otherwise. Ethics is nothing mechanical, if the machine “is” the “causality” and “calculability” that makes available and places at our disposal by fixing a “presence” in an exemplary fashion. Ethics, however, absolutely does not consist in keeping “safe (sauf),” keeping “intact,” untouched and uncontaminated by technology – a terrible misunderstanding! In effect, ethics, as it has been described here (the ordeal of what exceeds being and its law, of what exceeds the regime of knowledge, of what thus constitutes the source of all signification, anterior to both cultural and “natural” significations), therefore, ethics opening beyond being is, however, nowhere else than in being and by it. In it and thanks to it, as technology, ethics reverses the laws of being: thanks to mechanistic causality and
calculability, it arouses within the World that which suspends and puts in question the being that proceeds in its interests and in view of itself. Any machine that makes the light that is technology the occasion or the site of this totally other light – totally other to any light of knowledge, that is, the ethical experience in Levinas’s sense – any machine and every technological environment that fosters this second light through the weapons of its contrary, would be “moral.” Nothing in the machine and of the machine prevents it from being the site of ethical experience, and this latter even inevitably needs new machines and prostheses to renew and intensify itself, just as I need my body, and all possible tools, to respond to the Other. Ethics is nothing of the order of machines, but it is never anywhere other than in machines, since ethical machines are those that open themselves to an excess or a surplus over their “being only machine” (thanks to their being only machine). The real question is therefore: how to work with contemporary machines such that they foster, even if only a little, or even intensify, the experiencing of “facing,” of the event of the face? Which machines, which technological environments, intensify the ordeal of the “face,” foster in each of us this suspension of the effort to be and foster this openness beyond being that disarms and allows the emergence of sense? That is really the question that, it seems to me, should be constantly asked by engineers and designers. But this suggestion, without a doubt, unfolds on a totally different level than that, for example, of the ambiguous murmur of “ethics by design.”
Notes
1 In this text the philosophical term “être” (Sein) is translated by “being,” “étant” (Seiende) by “entity.”
2 In this text the philosophical terms “Autre” and “Autrui” are translated by “Other.”
3 In Dostoyevsky’s words, often cited by Levinas. Citation here from Levinas 2004.
4 I am referring especially to the two major works: Totality and Infinity: An Essay on Exteriority (2002) and Otherwise than Being or Beyond Essence (2004).
5 See especially Heidegger 2016 and Didier Franck’s (2017) commentary on this.
6 For more on this point, see Dumouchel and Damiano 2016.
References
Dennett, Daniel. 1992. Consciousness Explained. New York: Little, Brown & Co.
Dumouchel, Paul and Luisa Damiano. 2016. Vivre avec les robots. Essai sur l’empathie artificielle. Paris: Seuil.
Franck, Didier. 2017. Le nom et la chose. Langage et vérité chez Heidegger. Paris: Vrin.
Heidegger, Martin. 1977 [1954]. “The Question Concerning Technology.” In The Question Concerning Technology and Other Essays, trans. William Lovitt, 3–35. New York: Harper & Row.
Heidegger, Martin. 2016. Ponderings II–VI: The Black Notebooks 1931–1938, trans. Richard Rojcewicz. Bloomington: Indiana University Press.
Levinas, Emmanuel. 1988. “The Paradox of Morality: An Interview with Levinas.” In The Provocation of Levinas: Rethinking the Other, edited by Robert Bernasconi and David Wood, 168–180. London: Routledge. Levinas, Emmanuel. 1990 [1963]. “Heidegger, Gagarin and Us.” In Difficult Freedom: Essays on Judaism, trans. Séan Hand, 231–234. Baltimore, MD: Johns Hopkins University Press. Levinas, Emmanuel. 2002 [1961]. Totality and Infinity: An Essay on Exteriority, trans. Alphonso Lingis. Pittsburgh: Duquesne University Press. Levinas, Emmanuel. 2004 [1974]. Otherwise than Being or Beyond Essence, trans. Alphonso Lingis. Pittsburgh: Duquesne University Press. Ogien, Ruwen. 2007. L’Éthique aujourd’hui: maximalistes et minimalistes. Paris: Gallimard.
12 What Do We Call “Thinking” in the Age of Artificial Intelligence and Moral Machines? Anne Alombert
Introduction

In a text entitled “Machine et organisme,” published in 1952 in La connaissance de la vie (Canguilhem 1992a), the French philosopher Georges Canguilhem criticized the analogy between machine and organism. According to him, such an analogy emerged at a certain point in the evolution of life, with the appearance of automatons: Canguilhem insisted on the paradoxical thesis according to which a living organism could be considered as a machine, whereas the machine was itself a product of the vital activity of the organisms it was supposed to explain (ibid., 135, 154). Almost 30 years later, in a lecture entitled “Le cerveau et la pensée,” Canguilhem questioned the analogy between computer and thought: as before, he situated this analogy in the history of life and of technical evolution, and noted that it appeared with the emergence of logical machines capable of handling data according to instructions (Canguilhem 1992b). This time, he insisted on the paradoxical thesis according to which thought and brain could be considered through the computer model, whereas the computer is itself a product of the cerebral or thinking activity that it is supposed to explain (ibid., 19). While the analogy between machine and organism rests on a mechanical conception of living beings, the analogy between computer and thought rests on a logical or computing conception of thought (ibid., 19). The notions of “conscious machine” or “artificial intelligence” thus appeared to Canguilhem as misleading metaphors or “irrelevant expressions”: even as Canguilhem acknowledged that they had been heuristic models for scientific research, he affirmed that at the industrial stage of computer science and information technology, these notions had become clichés for advertisement or elements of ideological propaganda, aimed at hiding the decision-making processes behind anonymous machines and at making people accept the automatic regulation of their everyday life and social relations (ibid., 21). Indeed, as Canguilhem puts it, “How could we criticize computers if our brain is itself a computer? A computer at home? Why not, if a computer is inside each of us!” (ibid., 21). Forty years after Canguilhem’s lecture, in the age of big data and computational capitalism, the problem is not to get computers into the
private spaces of homes, but to get autonomous vehicles into the public spaces of so-called “smart cities.” But the ideological role of misleading metaphors does not seem to have changed that much. Indeed, why speak about “moral machines” and “smart cities,” if not to hide those who program the machine’s morality or who control the city’s “smartness,” and thus to make people accept the “functional sovereignty” (Pasquale 2017) or the “smartness mandate” (Halpern et al. 2017) of digital platforms and giant tech companies? Reformulating Canguilhem’s questions 40 years later, we could ask, “How can we criticize Google Cars if they can be programmed in a moral or ethical way? Autonomous vehicles in our smart cities? Why not, if we ourselves are moral and intelligent machines!” But are we really moral machines, thinking machines, or living machines? And, if not, what are the unthought prejudices or presumptions that enable transhumanist ideologies to use these misleading metaphors so efficiently? I suggest that the notions of artificial intelligence or moral machines rest on a problematic conception of technology (which tends to confuse technology with mere means or mythical robots) and a problematic conception of thought (which tends to confuse intelligence, thought, cognition and calculation). I shall try to show that Gilbert Simondon’s reflections on technical objects and Bernard Stiegler’s reflections on technical externalization enable us to go beyond those problematic conceptions, by considering the transductive relations between culture and technology (Simondon) or knowledge and artifacts (Stiegler). While Simondon invites us to understand “thinking machines” as the crystallization of human activity, Stiegler invites us to understand the constitutive role of artificial prostheses in thought and knowledge. For both philosophers, the question is not to know whether machines can become intelligent or moral, but to question the cultural and technical conditions of ethical and political life, when industrial milieus seem undecipherable, and when algorithmic calculations seem to disrupt collective deliberations.
Gilbert Simondon: The Deconstruction of the Myth of the Robot and the Development of a Technical Culture

The Distinction between "Automaton" and "Robot"

The notion of artificial intelligence at the center of contemporary transhumanist discourses is not new: it appeared in the 1950s, with the emergence of cybernetics research and after Alan Turing's work on the imitation game. In this test, the mathematician tried to simulate human linguistic behavior with a computer: even if Turing himself did not affirm such a thesis, this capacity for simulation was often interpreted as a criterion for attributing intelligence or thought to a machine. Three years later, the French philosopher Gilbert Simondon, who was one of Georges Canguilhem's students, dedicated two articles to the epistemology of cybernetics (Simondon 1989),
in which he questioned the notions of imitation and simulation. Simondon thus revealed the ambiguity of such notions and insisted on the distinction between what he called "the robot" and what he called "the automaton." Indeed, according to Simondon, these two technical objects must be distinguished. A robot is an artificial object that simulates or imitates human behavior, and that needs a deluded spectator (ibid., 45). In this sense, a statue is already a robot, and this imitation process has taken different forms during the history of art and technology, up to android machines. However, according to Simondon, the automaton does not aim at imitating human beings and at deluding human spectators: the aim of an automaton is to accomplish a task or a function that human beings traditionally accomplish, but according to very different methods and processes that correspond to its specific structure, which can be a mechanical or an electronic, but not an organic, structure (ibid., 46). Simondon takes the example of the computing machine, which functions according to a binary numbering system that human beings cannot use, whereas the decimal numbering system used by human beings (which corresponds to the organic structure of their own body – the ten fingers of their hands) would be paralyzing for a computing machine (ibid., 46). Likewise, in his 1958 thesis, translated as On the Mode of Existence of Technical Objects (Simondon 2016), Simondon distinguishes the machine's memory from living or human memory: whereas the machine's memory conserves very complex and precise data in massive quantities (accumulation), human memory selects among present data on the basis of past experiences (selection and interpretation) (ibid.). In other words, even if some automata reach the same results as human beings in some of their functions, this does not mean that they proceed in the same way or that they execute the same operations. On the contrary, according to Simondon, the apparent equivalence of the results often hides different structures, different operations and different methods. Thus, a machine or an automaton is not an artificial being imitating a human being, but a technological device likely to replace human beings by accomplishing their functions through other structures and operations. This is why, according to Simondon, cybernetic literature uses a misleading analogy when it affirms that machines perceive, think, speak or remember (1989): such an analogy reveals a mythical conception of the robot. Simondon underlines the fact that cultivated people would never allow themselves to describe a statue or a painting as a person provided with an interiority, a soul, emotions or volition, but they nevertheless speak about threatening machines as if they attributed a separate soul, an autonomous existence, feelings and intentions to these technical objects (ibid., 11).
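Simondon's example of numbering systems can be made concrete with a small illustration. The following sketch is not found in Simondon; it is a hypothetical gloss written in Python, in which the same sum is computed twice – once digit by digit in base ten, roughly as a human calculates with pen and paper, and once through purely binary (bitwise) operations, as a machine does. The results are equivalent; the structures and operations that produce them are not.

```python
# Illustrative sketch (not from Simondon): the same result reached
# through structurally different operations.

def add_decimal(a: int, b: int) -> int:
    """Add digit by digit in base ten, carrying as a human calculator does."""
    result, place, carry = 0, 1, 0
    while a or b or carry:
        d = (a % 10) + (b % 10) + carry
        result += (d % 10) * place
        carry = d // 10
        a, b, place = a // 10, b // 10, place * 10
    return result

def add_bitwise(a: int, b: int) -> int:
    """Add using only binary operations: XOR sums the bits,
    AND shifted left propagates the carries."""
    while b:
        carry = a & b        # positions where a carry is produced
        a = a ^ b            # bitwise sum, ignoring carries
        b = carry << 1       # carries move one binary place to the left
    return a

assert add_decimal(758, 467) == add_bitwise(758, 467) == 1225
```

Both procedures yield 1225, but one moves through base-ten digits and carries while the other moves through XOR, AND and shifts: an apparent equivalence of results concealing different structures and operations, which is exactly Simondon's point.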
Technical Objects Understood as "Crystallization" of Human Activity

According to Simondon, such a mythical conception rests on an ignorance of the machines' functioning, which is itself due to a traditional
opposition between culture and technology: technical objects have been excluded from the world of sense and values for a very long time. Far from investigating technical objects, the cultural realm constitutes a defensive system against technology, as if technical objects were external, foreign, or unknown realities that human beings could not possibly understand, adapt, or interpret. Such a rejection of technical objects outside of the cultural and meaningful sphere necessarily leads to two inseparable but contradictory attitudes: either culture considers technical objects as pure material and useful means without any internal meaning, or it considers them as potential dangers threatening humanity with oppressive intentions (Simondon 1989, 11). In short, either technical objects are meaningless means subordinated to human ends, or they are inhuman robots subordinating human beings. In both cases, such conceptions ignore the human reality that is contained or "crystallized" inside them, and cannot possibly understand their internal signification (ibid., 13, 328, 333). Indeed, for Simondon, technical objects are the crystallization and fixation of human gestures into functioning structures: artifacts are objectifications and materializations of an inventive act, crystallizing a thought and an operation that have resolved a problem (ibid., 335). Insofar as they materialize or objectify a set of operational schemes, they possess internal functioning norms and carry information. Technical objects are not pure means serving users, but rather supports of information which have to be deciphered and interpreted by individuals, who can thus relate to the past human activities crystallized in them. They are intermediaries between epochs and civilizations; they enable human beings to communicate through their inventions (ibid., 334–336). If human users understand the information contained in the objects, they will be able to transform the internal norms into new norms, and thus to make the objects evolve and to continue the inventive act that was at their origin. So, for Simondon, the question is not to understand what machines think, want or desire, but to understand the conditions under which they can become meaningful for human beings. In this respect, Simondon studies the process of industrialization and remarks that industrial production, which implies complex assembling processes, makes technical objects more and more difficult for their human users to understand. Indeed, in industrialized and standardized objects, it is no longer possible to read the constructive operation in the object: the object cannot be understood as the result of an operation of construction, because the technical process is hidden. This is why human individuals living in industrialized societies are troubled: they are faced with objects that are not immediately clear to them; their technical milieu seems as impenetrable as a foreign language (Simondon 2014c, 65–6). Even though they constitute the everyday environments of human beings, technical objects still seem foreign to them, because their propagation into human societies was not accompanied by the transmission of the knowledge necessary to understand their functioning (ibid., 28).
The Need for a Technical Culture to Avoid Alienation

Indeed, according to Simondon, for an exchange of information between technical object and human subject to become possible, the human subject must have integrated "technical forms" that enable him or her to understand the internal functioning of the objects and not to consider them as mere utensils. The individual subject must thus acquire and possess a technical culture, a set of subjective forms likely to encounter the forms contained in the machines and to produce signification (1989, 335, 342). But if such a technical culture is lacking, and if technical objects are produced only to be sold to "ignorant users" (ibid., 339), those users will not be able to participate in the evolution of their technical milieu: on the contrary, they will be alienated from that milieu and forced to adapt to the machines' injunctions – human action will be unable to reach the world. Besides, according to Simondon, this separation between cultural contents and technical realities, which maintains users in a state of ignorance and powerlessness, carries a political risk: the cultural contents such as institutions, laws, languages or customs, which are supposed to govern societies, rest on traditional schemes adapted to old-style technologies (handcraft or agricultural technologies), whereas the technical world has become an industrial one. As Simondon puts it, laws and codes are adapted to a society of men working with tools, whereas the current reality to be governed consists of humans and machines, of technical systems and technical ensembles, where the technical individuals are the machines and no longer the men (1989, 16, 207). Such an unadjusted culture can no longer be effective: cultural contents retain only a symbolic value and enter a process of degradation (Simondon 2014c, 35–6 and 2014b, 321). Culture thus becomes archaic, whereas technology is left to external forces and disorder (2014c, 35–6). This is why Simondon insists on the need to develop a technical culture, in order to bridge the gap between the slow rhythm of cultural evolution and the accelerated rhythm of technical transformation (2014c, 35–6), which is happening on a planetary scale, breaking through frontiers and ethnic groups (2014b, 319). Such a technical culture should be integrated into educational and academic programs, alongside literary and scientific culture, through the teaching of fundamental technical schemes or through initiation into the functioning of current technologies (1989, 15). This extension of culture to technical realities has a philosophical function, because it will enable us to go beyond myths and stereotypes about technologies and machines – such as the myth of the robot endowed with threatening intentions or the myth of perfect automatons serving humanity. It also has a social and political role, because it is a way to fight against users' ignorance and alienation, by giving them the power to apprehend their existence and their situation and to act in their technical environments (ibid., 16). So, for Simondon, the problem is not to produce artificial intelligence or moral machines, but to transform cultural contents and moral rules according to the new technological milieu.
Bernard Stiegler: The Artificial Form of Life and the Production of Collective Intelligence

The Noetic Life of Artificial Living Beings

Indeed, since Simondon's epoch, technical milieus have changed greatly: they are characterized not only by industrial machines and logical or computing machines, but also by digital platforms that reticulate billions of individuals through their connected devices and gather massive amounts of data. In this new technical epoch, the French philosopher Bernard Stiegler inherits Canguilhem's and Simondon's considerations, and raises new questions about intelligence and ethics in digital milieus. Like Simondon's, Stiegler's reflections go beyond the instrumental conception of technics that had already been criticized by Heidegger (in 1949). Indeed, Stiegler maintains that artifacts do not constitute means subjugated to human intentions, but artificial organs that affect biological organisms in return and support collective memory. Following Bergson, Freud, Canguilhem and Leroi-Gourhan, Stiegler insists on the fact that what we call "human beings" are nothing without their technical artifacts. In other words, and in contrast with what Simondon seemed to suggest, there is no human reality or human interiority before technical artificiality or technical externalization. Indeed, in the third chapter of Technics and Time, through a commentary on Leroi-Gourhan, Stiegler reveals the illusion that consists in presuming a psychological interiority at the origin of the process of technical invention (1994): if such a process can be described as a technical externalization, we have to keep in mind that there was no interiority before such an externalization, because it is only during this process of technical externalization that such a psychological interiority could form itself (ibid., 152, 162). In short, it is through this process of technical externalization that the so-called "human" living being develops his psychological or mental capacities, which are constitutive of what we usually call his interiority. This is why, in the second volume of De la misère symbolique (Symbolic Misery; 2005), Stiegler maintains that noetic life is a technical life, that is to say, that the possibility of noesis or thought is intrinsically linked with the possibility of artifactuality. In this sense, we could say that thinking is an always already artificial activity: thought is always already artificial because it is the interiorization of artifactual automatisms that must be de-automatized; it is the reactivation of an always already externalized memory that must be revived and transformed through new interpretations. In other words, if intelligence characterizes all forms of life, noesis or thought needs a technical form of life, a form of life always already enhanced by its artificial organs. Indeed, Stiegler maintains that every form of life can be considered intelligent, in the sense that a living being is never only mechanical or automatized, but has an oriented behavior: the living being is oriented towards some goals, which are not necessarily conscious or
represented. Intelligence in this sense is linked to the movement or animation of a living being, which the biologist Jakob von Uexküll (1934) described as a "sensorimotor loop," in order to explain the circular exchanges between the reception of sensory information and the organism's reactions. Stiegler maintains that the technical externalization of the living being implies a transformation of this loop: the production of artificial organs enables the organism to vary its responses to external stimuli, to hold back its reactions and to de-adapt its instincts, which thus become drives, and can be transformed into desires (2015, §20). Through the process of externalization, the physiological response is differentiated and thus becomes a psychological action. It thus seems that artificial intelligence or artificial life can be described as a technical form of life that is characterized not by reactions (which respond to stimuli coming from the environment) but by actions and technical productions (which transform this technical environment through the production of new artifacts), not by instincts (adapted to an object) but by drives (which can change their objects and become desires), and not only by genetic memory (which is transmitted through heredity) but also by a collective externalized memory (which is transmitted through education and can be interpreted in unpredictable ways).

The Transformation of Noetic Functions through Technical Evolutions

Through this process of technical externalization, which began in the prehistoric age, noetic or mental functions are externalized into material supports. This means that what we usually consider mental or psychological functions, such as perception, memory, intuition and imagination, evolve through time and alongside the evolution of material supports and artificial organs: for example, visual perception evolves with the transformation of scientific instruments such as the microscope or the telescope; memory evolves with the transformation of recording technologies such as writing, phonographic recording, photography and cinema – which modify our access to the past and our anticipation of the future; imagination evolves with the transformation of artistic technologies such as cave painting, painting, printing, audiovisual media, digital media, etc. And as the American theoretician N. Katherine Hayles (2007) has shown, attention itself is transformed through technical evolution: the passage from written and printed technologies to digital and reticulated technologies implies a passage from "deep attention" (focusing on a single object of thought over a long time) to "hyper attention" (switching rapidly among multiple tasks and information streams, with a low tolerance for boredom). In short, as their technologies evolve, the artificial living beings that we call "humans" externalize their mental functions into prosthetic organs. According to Stiegler, who revives an idea already highlighted by Socrates in the Phaedrus and by Freud in Civilization and Its Discontents, this process of
externalization is ambivalent: it is an enhancement of human capacities thanks to prosthetic organs (it is possible to remember much more information thanks to the technique of writing, and it is possible to move more rapidly and over longer distances thanks to engines and cars), but it is also a loss of the same capacities delegated to those prosthetic organs (as soon as I can write down a piece of information, I no longer have to train my memory to remember it; as soon as an algorithm can guide my car, I no longer have to learn how to drive). Every enhancement implies a risk of dependence and addiction because of the loss of the capacity that is supposed to be enhanced. The process of progressive externalization of functions into technical supports always implies the loss of capacities, but it can also go along with the production of new social practices through which technical living beings relate to each other from generation to generation. Indeed, according to Stiegler, such a loss can be counterbalanced only if new knowledge is invented, that is to say, only if artificial living beings do not adapt to their technical prostheses but relate to each other in order to invent new rules and to share new practices. For example, the delegation of memory to writing technologies can be counterbalanced by the invention of grammar rules and by the practice of linguistics or philosophy; the delegation of the function of movement to cars and engines can be counterbalanced by the development of driving rules and the practice of driving; etc. Such forms of knowledge (theoretical knowledge, practical know-how, ethical knowledge of how to live) constitute the social practices that enable living beings to take care of their artificial organs and to live together in their technical milieu. For this precise reason, they cannot be externalized in those same technical organs or in that same technical milieu. Such forms of knowledge are always collective and social practices: they can consist of practical knowledge (know-how), cultural knowledge (knowing how to live) or theoretical knowledge. In any case, knowledge is not understood as an internal operation happening inside an individual mind or brain, and cannot be confused with cognition.

Pharmacological Effects of Digital Technologies: How Do We Produce Collective Intelligence in Automatic Societies?

In this theoretical context, what we call artificial intelligence must be understood as a new stage of the process of technical externalization, through which new (mental or psychological) functions are externalized. Consequently, the question is not to know what moral machines think but to understand which functions are externalized into digital technologies, and what new forms of knowledge must be invented in order to counterbalance the inevitable loss produced through this algorithmic enhancement. This is why, in the first volume of Automatic Society (2015) and in the postscript to Technics and Time entitled "The new conflict of faculties and functions" (2018), Stiegler studies the deep transformations of the functions of
intuition, understanding, and imagination in the era of contemporary artificial intelligence. Contemporary artificial intelligence is no longer the artificial intelligence of logical machines; it is now characterized by the algorithmic processing of massive amounts of data, based on the technological reticulation of billions of individuals through their connected devices, controlled by the planetary platforms of giant technological companies. What are the retroactive effects of such technical transformations on noetic functions (on perception, understanding, and imagination)? Unlike sensory data, which according to Kant were the givens of intuition, digital data are already informed through their collection by interfaces and their calculation by algorithms. According to Stiegler, these algorithms, which function at a higher speed than the synaptic connections of our brains, can be described as a delegation of the functions of understanding to automatic technologies, which outpace the function of reason, that is, the function of deciding according to a singular interpretation. The function of imagination is outpaced too, through the systems of correlationist calculation that suggest to psychic individuals the objects they are supposed to desire, according to their statistically generated profiles. Thus, according to Stiegler, these devices lead to a hypertrophy of automatic calculation that short-circuits the noetic functions of reason and imagination and introduces a risk of "de-noetization," that is, of the disappearance of noetic activity. Indeed, as the theoretician Jonathan Crary (2013) has shown in his book on contemporary capitalism, the functioning of algorithms in the service of the data economy aims at provoking automatic reactions in order to accelerate consumption and financial flows, by suppressing the time of reflexive suspension that separates stimulation from reaction. Psychic individuals are ordered to react immediately to the injunctions of their connected environments, and do not have enough time to interpret the data, which have already been informed and calculated by algorithms. According to Stiegler, the annihilation of this period between stimulation and reaction bypasses the time through which drives defer their satisfaction and through which automatic reflexes become reflexive actions. Thus, noetic activity disappears and behavior becomes a new sort of sensorimotor loop, no longer between a living organism and its surroundings, but through retroactive loops relating psychic individuals' profiles and their connected environments. The individuals thus become functions of the technical system, which exploits their libidinal energies. This is why, according to Stiegler, the transmission of a technical culture proposed by Simondon cannot be sufficient: in order to enhance noetic functions and collective intelligence, the technical systems themselves must be transformed. The digital technologies of control that exploit libidinal energies must become digital technologies of spirit, which support new kinds of knowledge and enable such a cultural transmission. Such a perspective implies that we must implement new functionalities in algorithmic technologies and design new digital platforms, such as
categorization and annotation systems, which enable psychic individuals to categorize and interpret the data they receive, or contributive platforms and deliberative social networks (Stiegler 2015, §70), which enable psychic individuals to express and share their points of view (political arguments, aesthetic judgments, scientific theses), to confront them, to interpret them and to make collective decisions – that is, to practice their reason and their imagination, and to renew what we could call the noetic life of their society and the noetic diversity characteristic of the technical form of life. Such a technical transformation should enable data, algorithms and social networks to become not only means of measurement and calculation but also media of collective and artificial intelligence.
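By way of contrast with such contributive platforms, the "correlationist calculation" described above can be glossed with a deliberately crude sketch – a hypothetical illustration, not the actual algorithm of any platform, with invented profile data and names. It recommends to a user whatever statistically similar profiles have already consumed, so that the "suggested" object of desire is derived entirely from correlations over past behavior.

```python
# Hypothetical sketch of "correlationist" recommendation: suggest what
# statistically similar profiles have already consumed. Data and names invented.
from math import sqrt

profiles = {                         # user -> {item: interaction count}
    "u1": {"a": 3, "b": 1, "c": 4},
    "u2": {"a": 2, "c": 5, "d": 1},
    "u3": {"b": 4, "e": 2},
}

def similarity(p: dict, q: dict) -> float:
    """Cosine similarity between two interaction profiles."""
    dot = sum(p[i] * q[i] for i in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def recommend(user: str, k: int = 1) -> list:
    """Rank items the user has not seen by the similarity-weighted
    consumption of all other users."""
    me, scores = profiles[user], {}
    for other, history in profiles.items():
        if other == user:
            continue
        w = similarity(me, history)
        for item, count in history.items():
            if item not in me:
                scores[item] = scores.get(item, 0.0) + w * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # ['d']: u1 is steered toward what the most similar profile consumed
```

Nothing in such a loop solicits the individual's own interpretation or imagination: the "desire" attributed to u1 is computed as a statistical average of other profiles, which is precisely what the annotation and deliberation systems just described would reintroduce.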
Conclusion

Simondon's reflections enable us to go beyond the problematic alternative between an instrumental conception of technology (which considers technical objects as pure means to which human subjects can assign ends) and a mythical conception of the robot (which considers machines as mysterious entities possessing thoughts or intentions): Simondon invites us to consider technical objects as crystallizations of human activities or inventions, which carry information, and through which human individuals can develop transindividual relations. When these technical objects are produced in an industrial manner, and when their social diffusion is not accompanied by the transmission of an adapted technical culture, they tend to become indecipherable to human users, who thus become alienated from their technical environments. Hence Simondon insists on the need to develop a technical culture and to invent new legal rules and new ethical practices, in order to adjust cultural contents and human societies to the industrial technical milieu. Stiegler's reflections enable us to go beyond the problematic alternative between an internal conception of thought (which considers thought as an internal process that happens in individual minds or brains) and an external conception of thought (which reduces thought to external and observable behavior): Stiegler invites us to consider thought or noesis as a process of externalization of biological and psychological functions into technical artifacts, which inevitably leads to the loss of capacities, but which can be compensated for by the development of new social practices or collective knowledge. When the development of such knowledge is bypassed or disrupted by the constant transformations of digital technologies and by the speed of algorithmic calculation, individuals no longer have the time to develop their noetic capacities. Hence Stiegler insists on the need to transform the functioning of digital technologies, to make these devices become supports of knowledge, controversies, deliberations, and collective decisions. According to such considerations, the problem raised by "autonomous vehicles" or other "smart environments" is not to know whether such devices can become "moral machines," but to invent new ways of living with
them – that is, new institutions, new ethical rules, new social practices. For such an invention to become possible, it seems necessary to understand the internal functioning of such automatons, and to decipher the algorithms according to which they work. It also seems necessary to understand the functions externalized in such systems – their impact on the psychic capacities of memory, perception, and anticipation, and on the collective capacities of deliberation and decision. In short, rather than wondering whether a machine can be moral, we should ask whether the future users of such machines will possess a sufficient technical culture to avoid alienation, and whether they will have the time and the space for collective deliberation, in order to avoid delegating their capacities of decision to algorithms calculating massive amounts of data. Morality and ethics are not a matter of statistical probability or digital computation, but a constantly renewed problem of living collectively in a technical milieu that is simultaneously liberating and alienating, and that never stops transforming. We cannot know where and when this process of technical externalization will stop, but as long as machines are not confronted with the political necessity of inventing new collective rules in order to adopt the transformations of their technical milieu, the notion of "moral machine" will remain a misleading metaphor.
References

Canguilhem, Georges. 1992a [1952]. La connaissance de la vie. Paris: Vrin.
Canguilhem, Georges. 1992b [1980]. "Le cerveau et la pensée." In Canguilhem, philosophe, historien des sciences, Actes du colloque des 6–8 décembre 1990, 11–33. Paris: Albin Michel.
Crary, Jonathan. 2013. 24/7: Late Capitalism and the Ends of Sleep. London and New York: Verso.
Halpern, Orit, Robert Mitchell, and Bernard Dionysius Geoghegan. 2017. "The Smartness Mandate: Notes towards a Critique." Grey Room 68 (Summer 2017): 106–129. doi:10.1162/GREY_a_00221.
Hayles, N. Katherine. 2007. "Hyper and Deep Attention: The Generational Divide in Cognitive Modes." Profession I–II: 187–199.
Heidegger, Martin. 1993 [1949]. "La question de la technique." In Essais et conférences. Paris: Gallimard.
Pasquale, Frank. 2017. "From Territorial to Functional Sovereignty: The Case of Amazon." LPE Project, December 6. https://lpeblog.org/2017/12/06/from-territorial-to-functional-sovereignty-the-case-of-amazon.
Simondon, Gilbert. 1989 [1958]. Du mode d'existence des objets techniques. Paris: Aubier.
Simondon, Gilbert. 2005 [1964]. L'individuation à la lumière des notions de forme et d'information. Paris: Jérôme Millon.
Simondon, Gilbert. 2014a [1965]. "Culture et technique." In Sur la technique. Paris: Presses Universitaires de France.
Simondon, Gilbert. 2014b [1965–6]. Imagination et invention. Paris: Presses Universitaires de France.
Simondon, Gilbert. 2014c [1960–1]. "Psychosociologie de la technicité." In Sur la technique. Paris: Presses Universitaires de France.
Simondon, Gilbert. 2016a [1953]. "Cybernétique et philosophie." In Sur la philosophie. Paris: Presses Universitaires de France.
Simondon, Gilbert. 2016b [1953]. "Epistémologie de la cybernétique." In Sur la philosophie. Paris: Presses Universitaires de France.
Simondon, Gilbert. 2016c [1958]. On the Mode of Existence of Technical Objects, trans. Cecile Malaspina and John Rogove. Minneapolis: University of Minnesota Press, Univocal Publishing.
Stiegler, Bernard. 1994. La technique et le temps, t. 1: La faute d'Épiméthée. Paris: Galilée.
Stiegler, Bernard. 1998. Technics and Time, 1: The Fault of Epimetheus, trans. Richard Beardsworth and George Collins. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2013 [2004–5]. De la misère symbolique. 2 vols. Paris: Flammarion.
Stiegler, Bernard. 2015. La société automatique, t. 1: L'avenir du travail. Paris: Fayard.
Stiegler, Bernard. 2016. Automatic Society, Volume 1: The Future of Work, trans. Daniel Ross. Cambridge: Polity Press.
Stiegler, Bernard. 2018. Qu'appelle-t-on panser? t. 1: L'immense régression. Paris: Les liens qui libèrent.
Stiegler, Bernard. 2018. The Neganthropocene, edited, trans., and intro. by Daniel Ross. London: Open Humanities Press.
von Uexküll, Jakob. 1934. Mondes animaux et monde humain, suivi de La théorie de la signification. Paris: Denoël.
13 Can a Machine Have a Soul?

Daniel Ross
Introduction

What follows is not an attempt to analyze the implications of the development of artificial intelligence, but rather an attempt to specify the terms in which it is possible to think what this phrase, "artificial intelligence," means or could mean – which, it will be suggested, involves asking the prior question of the difference between different kinds of souls. It arose from an email from Anne Alombert to Michał Krzykawski, in which she suggested that the question of whether AI is or could become really able to think (and care), or the question about "moral machines," is really a kind of ideological trap. I completely agree with this. What follows from asserting the ideological character of such questions is the necessity of undertaking an analysis of the real issues underlying the ideology, and, on the basis of this analysis, of elaborating a critique of this ideology. The problem in this case is that the analysis is less easy to pin down than it might appear at first glance, and a too-hasty assumption that one has done so can lead those who consider themselves to have a critical view of digital technology to engage in forms of discourse that are symptomatic of this difficulty, and that in fact reflect the underlying assumption of this ideology: that "real" AI is simply a matter of reaching a certain threshold of intelligence, which is really just a matter of reaching a certain threshold of complexity, and that the refusal of "philosophers" to accept this simply shows their adherence to old-fashioned (metaphysical) notions about "mind." In some sense, this assumption is true, but only if there is a sufficiently complex idea about what complexity means, or about what mind means. There is, of course, a long tradition, going back at least to Descartes, of considering the animal body as nothing more than a very complicated mechanical apparatus, and an equally long tradition of seeing the human brain as nothing more than a further development of the animal brain. But I propose that untangling these issues involves asking the following question: what (if anything) is truly distinct about the noetic soul (in Aristotle's sense)? And that means: what is distinct about the noetic soul in relation to
(1) the sensitive (animal) soul, and (2) the intelligent (computational) machine? I argue that the question about AI cannot really be answered unless both of these distinctions are addressed. If this is indeed the case, then the issue at stake concerns three kinds of beings:

1 the non-noetic organic being (more specifically, the organic being possessing a nervous system, that is, the animal);
2 the noetic (and prosthetic) organic being (more specifically, the human being, or the non-inhuman being);
3 the organized inorganic being (more specifically, the computational machine).
These three kinds of beings (each of which is not a single being but a large class of beings whose members may possess a variety of characteristics) all seem to be describable in similar terms, in that they all seem to be engaged in various versions of a similar set of three-staged operations (a minimal code sketch of which is given at the end of this introduction):

1 receiving information or data, impressions or givens, from outside its boundaries (from the environment), via one or another kind of sensory equipment;
2 processing this information or these impressions via one or another kind of information-processing equipment;
3 on the basis of this processing, executing actions and responses (behavior).
The issue is to say whether and why a human being – more properly, a noetic soul – operates in a way that is fundamentally different from how an animal or a robot behaves, or, to put it another way, whether what a noetic soul does can truly be described in terms of this kind of three-staged operation. If I am right to frame the problem in these terms, then I can indeed say: the question is not the threshold of AI but the distinctness of the noetic soul. I further propose that answering this question involves taking into consideration Bernard Stiegler's approach to the relationship between endosomatization and exosomatization, which is also to say that it involves consideration of the work of André Leroi-Gourhan (1945, 1993) and Alfred Lotka (1945). What is truly decisive is the real philosophical content of Stiegler's position, which actually tries to answer this question in a new way, by asking about the relationship between phenomenological considerations and technological considerations within an evolutionary context that is no longer just biological. This is what I will try to explain below.
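Before proceeding, it may be worth fixing the three-staged schema in its most literal, machinic form. The following sketch is only an illustration under stated assumptions – the stub names read_sensor, process and act are invented for this purpose, and nothing here is drawn from Stiegler or Leroi-Gourhan – but it shows how naturally the schema can be written down for the third kind of being, the computational machine:

```python
# A minimal sketch of the three-staged operation as a machine realizes it.
# The stubs (read_sensor, process, act) are invented placeholders.
import random

def read_sensor() -> float:
    """Stage 1: receive a datum from outside the system's boundary."""
    return random.uniform(0.0, 1.0)   # stand-in for any sensory equipment

def process(datum: float, threshold: float = 0.5) -> str:
    """Stage 2: process the received information."""
    return "approach" if datum > threshold else "withdraw"

def act(decision: str) -> None:
    """Stage 3: execute an action or response (behavior)."""
    print(f"executing: {decision}")

for _ in range(3):                    # receive, process, execute – and repeat
    act(process(read_sensor()))
```

Note that each pass of this loop begins from a discrete, fully determinate datum and anticipates nothing about what it will receive; the discussion of "timestamps" and of hearkening below turns on exactly this feature. The question pursued in what follows is whether the noetic soul can truly be captured by a loop of this shape.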
From Umwelt to Welt

The three-staged operation I have just described (receiving impressions; processing what is received; on the basis of this processing, executing action)
obviously corresponds in a very general and no doubt superficial way to Immanuel Kant's critique of reason: on the basis of the data of intuition, concepts are mobilized in order to produce an analytical understanding; on this basis, it is possible to project a synthesis of reason that exceeds the understanding and opens the possibility of judgment, that is, decision, or in other words, action. The question becomes: if the noetic soul is truly distinct in the sense that it can project a synthesis of reason, then in what sense can we say that the animal cannot reason or decide in this Kantian sense, and/or that the machine cannot reason or decide in this same sense, and does the same explanation apply in each case? It should be added at this point that, from a Stieglerian perspective, it is a matter of thinking this question without accepting the idealist basis of Kant's critique, but neither is it a matter of reducing the Kantian account to a materialist basis that would ultimately evacuate "reason" of its projective capacities altogether and reduce thought to one or another species of mechanism: this amounts to the problem of knowing what Stiegler means by "hypermaterialism." This tripartite division (receiving through intuition, processing through analysis, executing through the synthesis of reason) is what Stiegler complicates in Technics and Time, 3 (2010), where he shows that a "fourth synthesis" is necessary, or in other words that reason in Kant's sense necessarily has technical conditions. But this insight then has to be fed back into the analysis undertaken in Technics and Time, 1 (1998) and Technics and Time, 2 (2008), which means: not just into the question of interpreting Leroi-Gourhan, but of interpreting Leroi-Gourhan in relation to Edmund Husserl and vice versa. In other words, it is a matter not just of reason but of the difference that tertiary retention makes to secondary retention and to primary retention, and therefore to protention. Husserl is the real key here. The animal, too, is involved in a circuit of retention and protention, at least if we take Jacques Derrida at his word, since in Of Grammatology he extends the question of retention and protention back to the beginnings of life and forward to the electronic future of cybernetic programs, a movement that "goes far beyond the possibilities of the 'intentional consciousness'" (1998, 84). For Derrida, this extension of the question of retention and protention is a matter of thinking the trace in general, that is, the trace of life in general, beyond the distinction between the animal and the human – beyond anthropocentrism. But from a Stieglerian perspective the exit from a metaphysical opposition between animal and human does not imply an exit from the question of the distinctness of the noetic soul, that is, the question of the trace that distinguishes "technical life" in Canguilhem's sense (1991, 200–1). This in turn means that we can and must reflect on what it means to talk about the retentional and protentional character of the pre-noetic or non-noetic life of the sensitive soul, so that we can have any chance of knowing what it means to say whether and how the noetic soul can be distinguished from the sensitive soul. To recollect, for Aristotle in De Anima, there are three kinds of souls – the vegetative, the sensitive and the noetic soul – where a soul is defined as
that which has the principle of its movement contained within itself (see Ross 2009). What does it mean, then, to say that the sensitive soul is already involved in a circuit of retention and protention? It means that the “data” it receives – or rather, what is “given” – from its sensory apparatus is already conditioned, or in other words is already a selection at the very moment of its being received, but the basis of this conditioning or this selection of the given lies in the functional characteristics of the species. This is the meaning of Jakob von Uexküll’s (2010) analysis of the sensorimotor loop that relates reception and effection: the specificity of such loops means that the tick perceives and reacts to its milieu in a tick-way, the gazelle perceives and reacts to its milieu in a gazelle-way, and the lion perceives and reacts to its milieu in a lion-way. Individual organisms belonging to these species are capable of learning lessons through the encounter with the contingencies of the environment (hence they can be trained), but, unless these lessons are directed by a noetic soul (a human trainer), this learning unfolds within the prevailing conditions of the organism and the milieu in a noncumulative way. This is why, in 1929, Martin Heidegger praises von Uexküll for drawing attention to the complexity of the “relational structure between the animal and its environment,” where this relational structure means that the “organism is not something independent in its own right [in relation to the environment]”; rather, “the organism adapts a particular environment into it in each case,” and can do so “only insofar as openness for… belongs to its essence” (Heidegger 1995, 263–4). And the effect of this “openness for…” is that “a certain leeway is created within which whatever is encountered can be encountered in such and such a way” (ibid., 264). To put this in a different (Simondonian) vocabulary, what it means to say that the impression received by the animal is already a kind of selection is that the perceptual and nervous organs of that species are the outcome of a process of vital individuation: it is the process of biological evolution that has formed the criteria for this perceptual selection, and there is no way of intruding into this close relationship between organism and milieu except through some highly artificial means. Hence the protentional characteristics that condition the behavior of individuals of that species are not some kind of “objective” (or we could say, “computationally optimal or efficient”) calculation of objective data about its environment by which the organism struggles to survive and reproduce (the organism and the environment are not “present at hand,” in Heidegger’s terminology), but an interaction arising from the specific way that the species exists in tension with its milieu and the specific way it copes with that tension and responds to it through a sensorimotor circuit (that is, a set of recursive loops) – which may nevertheless turn out to be, thanks to the effects of evolutionary pressures, quite efficient. Heidegger praises von Uexküll for this insight, but he also draws attention to the point at which it becomes “philosophically problematic,” which occurs when “we proceed to talk about the human world in the same
manner" (1995, 263). It is at this point that Heidegger insists that "it is not simply a question of a qualitative otherness of the animal world as compared with the human world," not simply a matter of "quantitative distinction in range, depth, and breadth," not just about "whether or how the animal takes what is given to it in a different way": the real question of the distinction between these two broad categories of soul, for Heidegger, lies in "whether the animal can apprehend something as something, something as a being, at all" (ibid., 264, emphases original). And this is why he argues that, beyond von Uexküll, the question is not just whether there is a tick-way and a gazelle-way of apprehending the environment, but whether we can truly say that there is a tick-world and a gazelle-world into which we can conduct any kind of exploratory foray whatsoever, "or whether [on the contrary] we do not have to determine that which the animal stands in relation to in another way" (ibid.). Hence Heidegger argues that this question can be resolved only "if we take the concept of world as our guiding thread" (1995, 264). But from Stiegler's "organological" perspective, the passage from Umwelt to Welt cannot be conceived in precisely the same way as it is by Heidegger, since it must also go through the analyses of Leroi-Gourhan and Lotka, as well as those of Donald Winnicott (1971), for whom the enchanted world of transitional space is opened up via the transitional object, that is, via what Stiegler, after Husserl's distinction between primary retention and secondary retention, calls tertiary retention. It is the transitional object that in fact opens up the possibility of a world in which the "as" character of things ek-sists. With the concepts of transitional object and transitional space, it becomes possible to say, with Stiegler, that the relational structure between the noetic soul and its environment differs from the relational structure between the animal soul and its environment because, for the noetic soul, the play between primary retention and secondary retention has always already been conditioned by tertiary retention, or in other words, because there is no living perception that is not already conditioned by what is dead, that is, by technical memory. It may no doubt be true that in some sense any animal with a developed central nervous system is involved with something like primary retention and secondary retention, insofar as when the animal is engaged in the flux of present experience, that experience can be influenced by the memories of past experience it has retained. Nevertheless, it does not experience primary retention as such or secondary retention as such, because the distinction and play between these two types of Husserlian retention is opened up on the basis of the tertiary retentional artifact that is also the transitional object. What dawns with the advent of the tool is the existence of a thing in the world that shows us and reminds us of our past, the past we have inherited, the gestures of the hands that fashioned it: this is why Stiegler describes a tool as a mirror, through which alone it becomes possible to see "we ourselves" as coming from a past that is nevertheless not our present, and
which we must learn to adopt. It is through this dawning awareness that the noetic soul is brought to the point of becoming conscious of a distinction between the past of its secondary retentions and the present that it experiences through primary retentions: an awareness thus made possible through its relationship to tertiary retentions. The determinacy of this access, opening up the possibility of repetition, means that, for the noetic soul, it is not just that all reception is a selection but that all reception is always already, in a way, an interpretation: what is perceived is to this extent interpreted before it is received, even if this is not an interpretation in the full sense of an attempt to seek new differences of meaning, which is made possible by the deliberate frequentation of a tertiary retention. Furthermore, the circuit that runs from a reception to an effection, that is, from an impression to an action, as von Uexküll describes it for the animal soul, has a further complication in the noetic soul: the noetic impression

does not remain within an endosomatic circuit between the receptors and effectors that are the sensorimotor organs [but also] gives an expression, and this expression is exteriorized via fabricated objects – of which words and all transindividual ex-pressions are layers – and it is noetic only on this condition. (Stiegler 2020, 278, emphases original)

This "expression" extruded by the noetic soul as things and words then forms the very transitional objects that condition primary and secondary retention. In other words, what matters is the precise way in which we describe this relational structure and processual relation between the primary, the secondary, and the tertiary. It is not at all a unidirectional linear process that runs from reception to effection, or from primary to secondary retention, and then to tertiary retention, but a constant recursive looping going in both directions, both circling deeper and opening out, so that it becomes no longer a loop but a set of spirals, where smaller spirals are nested within larger spirals. Unlike the case of the animal, these looping nested spirals involve not just the organism and the milieu, but the retentional artifact, which means they loop into that "third area" that is the transitional space opened up by the transitional object (and opening up the very possibility of the transitional object). This is why Stiegler insists on the "orthothetic" character of writing: it is by repeatedly "coming back" to the poem or the law or the philosophical text, which itself does not change, sometimes across millennia of reading, that new possibilities open up (a future opens up, in other words), which means new interpretations by new noetic souls making possible new decisions. Futurity in the noetic sense is largely a question of the perpetual possibility of accumulations of interpretation, and this is what is not possible for a sensitive soul, because it does not have access to orthothetic and lasting tertiary retentions. This "repeated coming back" shows that what is
at stake are no longer recursive loops between organism and milieu but unfolding spirals that are no longer just in the brain, or just between the brain and the milieu via the sense organs, but in that "third area" that is the space of transindividuation, the space of knowledge and significance. Or, in other words, they pass through the transitional space that is "enchanted" by what Winnicott (1971) calls the good-enough mother, which also means that, through these spaces, these spirals also loop their way through other noetic souls, to the point that knowledge and significance become collective – precisely, the transindividual. For the noetic soul, the criteria by which primary retentions are selected are, therefore, no longer determined by the character of the species, but by the accumulation of secondary retentions that are themselves conditioned by tertiary retentions giving access to experiences that the individual has not themselves lived (tertiary retentions that are shared not at the level of the species but within the idiomatic locality that Leroi-Gourhan calls the "ethnic," now subject to globalized deterritorialization processes). It is this "relational structure" between primary, secondary and tertiary retention that opens up the potential infinitude of protention, that is, the unending possibility of new interpretations and new knowledge carried out on the basis of what does not exist but consists – on the basis of "the ideas" (see Stiegler 2011a, 89–93). In the case of the noetic soul, this recursivity operates not just within the organism, or between the organism and the milieu, but in a space that does not exist but consists: this is why we can say that it is a (localized) cosmos (or a world), and not just a universe.
From Recursive Loops to Procursive Spirals

Nevertheless, it may be possible to imagine an ideologist of AI who could understand all or at least most of this, more or less, and still insist that it is really only a question of the complexity of the loops between various retentional systems, and that once technological retentional systems reach a certain level of complexity, meaning a sufficiently high number of loops operating at sufficiently high speed, then new protentional possibilities in turn open up. For example, they could ask: "When AlphaGo taught itself to play Go, the way it played was new, both unlike and superior to the way any human player had ever played, so isn't that a bifurcation and the opening of a new future simply through the capacities of machine learning to receive data, process it, and on that basis execute unprecedented actions?" They could add: "Can we really say for sure that AlphaGo is not producing this new way of playing on the basis of what does not exist but consists: some
new idea of what it means to play Go? Or, if you won't admit this possibility, can you not at least admit the future possibility that with increased complexity, this threshold will be reached for some kinds of human activity, and eventually perhaps for all kinds?" And if we said in reply: "Yes, but AlphaGo or any future AlphaGos can still really only process pre-existing data through complex processing methods, and this does not really count as true reason, that is, true decision (exceeding analysis and understanding) on the basis of ideas," then the ideologists of AI would likely reply: "Now your metaphysics are showing: because the critic who repeatedly goes back to a poem and discovers new interpretations is really only involved in the same kind of recursive process as the machine, where tertiary retentional data (that is, inorganic but organized matter) is subjected to highly complex algorithmic loops (looping into brains that are themselves only organized matter, albeit organic), and the only difference is that the human critic has the illusion that something qualitatively different is going on, the illusion that the ideas are anything other than the ephemeral but ultimately always (at least in principle) calculable outcome of vastly complex recursive loops." The issue for us is to say why this argument is wrong, and not just to fall back too quickly and easily on the notion that what does not exist (ideas), or what is infinite, is by definition incalculable (even though this is also a crucial argument, if not the crucial argument). The way to do this, it seems to me, is by again going back to Husserl, and to the phenomenology of time-consciousness (Husserl 1991). The difference between the noetic soul and the machine is not the complexity of the algorithm but the fact that the impression – "data," the given – is of a fundamentally different character in the two cases, and in a very specific way: it involves a different relationship to time. No matter how complex the machine-learning algorithms may be, the data received by, for example, a video camera or a microphone is and can only be composed of a succession of now-moments. The data obtained by a computer can never itself be subject to a phenomenological account for that computer itself: all data has a definite character, marked, for example, by a particular "timestamp." In the case of the noetic soul, however, at least according to Stiegler's reading of Husserl, there is never really a now-moment at all: the experience of the "present" is always already a post-produced retentional experience, which is to say that it is always already edited and at least minimally
interpreted. This is also the case for the sensitive soul of the animal, but in a limited sense, as we saw: the sensory field is conditioned by the functional characteristics of the species, which perceives according to the singular characteristics of its species-way of selecting, arising from the contingencies of the vital individuation process, usually referred to as biological evolution and called by Stiegler endosomatization. But for the human being, that is, the product of exosomatization, this sensory field is conditioned according to the singular characteristics of its idiomatic way of perceiving, where the "idiom" can be both collective and individual, arising from the contingencies of the psychic, collective and technical individuation process. It is only because the now-moment is always already attached to the moments that preceded it and that will succeed it that "present perception" can function (noetically, rather than just biologically) as a selection and a "production" of perception. If this character of being always already connected to past and future were not the case, then the selecting and the production of that moment would always come too late, in the next moment, after the perception "itself." In other words, the "data" of noetic perception is, once again, and in some way, interpreted before it is received. It is not that it is impossible to affix a "timestamp" to the flow of one's own primary retentions as they unfold in the play of primary, secondary and tertiary retentions, but that such a possibility arises because exosomatized perception has always already been artificialized through a relationship to exosomatic organs that always have something about them of the character of a clock – tools always in some way involve stamping time into matter, for instance in the form of the gestures of the hand that fashions a stone tool. This spiraling entanglement of primary retention with its retentional past and protentional future, both near and far, is also related to what Heidegger referred to in Being and Time as "hearkening," as that which comes before hearing and as that which already understands:

On the basis of this existentially primary potentiality for hearing, something like hearkening becomes possible. Hearkening is itself phenomenally more primordial than what the psychologist "initially" defines as hearing, the sensing of tones and the perception of sounds. Hearkening, too, has the mode of being of a hearing that understands. "Initially" we never hear noises and complexes of sound, but the creaking wagon, the motorcycle. We hear the column on the march, the north wind, the woodpecker tapping, the crackling fire. (2010, 163, emphasis original)

None of this phenomenology of time-consciousness applies in any way to the machine or the computer. A computer that has been programmed or has machinically "learned" to recognize certain patterns (as representing a visual image of a cat or the sound of a meow), for example, nonetheless first
receives data impressions and then analyzes what it receives to see whether it conforms to the pre-established pattern, but it does not hearken before it hears. In other words, it does not project itself towards what it will receive, where the eventual perception of a cat or a meow amounts to the resolution of a kind of tension, and where this hearkening projection means, in fact, that what is heard can prove, for this reason, to be a misperception. What matters is less the character of retention that it implies than the character of this protention that always already accompanies retention: what is involved with the weave of looping spirals between primary, secondary and tertiary retention, operating in both directions, is really a matter of producing the possibility of decision that is the noetic way of executing the imperative to act, a decision that must always involve questions of affection and desire:

What is it “to want”? Is it the sequence of electro-chemical micro-processes that mechanically and automatically follow and respond to a stimulus, like, for example, the sensorimotor loop that Jakob von Uexküll described in the case of the tick? Or is it not rather, precisely, and truly, the subject of a choice that can be called such only because it presumes a decision through which a psychic individual is divided between two choices insofar as the individual is not just a brain – or insofar as their brain is not just organic but, precisely, organological, made of spirals, and not simple loops, and, as such, social, that is, having the possibility of being attentive and caring? (Stiegler 2020, 220, emphases original)

In some way, as always already an interpretation, noetic perception is already a decision, but a decision that no “intentional consciousness” decides upon, even though it is the essential characteristic of the noetic psyche. But the functional character of this exosomatic process is to protentionally produce behavior, which is to say, to execute action noetically, not according to the function of the species but rather according to the function of reason in Whitehead’s sense (Whitehead 1929). This kind of protention, enabling behavior based on making decisions, can arise only for an organism whose perceptual functions operate according to these kinds of phenomenological principles, where each “moment” is itself always already susceptible to production and interpretation, because there is no moment as such but only the functional apprehension of the flow of time as the space of existence and decision. This is the whole meaning of the “deconstruction of presence” that says that there is never anything but difference-in-repetition.

Isn’t this the real argument about why AI is not and cannot be noetic? In other words, AI is not noetic, not just because machine learning does not operate in relation to that “third area” (because it itself occupies that third area as the hypermaterial component of an exosomatic circuit for noetic souls, and as long as it is “enchanted” by noetic souls), but also because it is
not “temporal” in the phenomenological sense of time-consciousness or of hearkening. Hearkening has nothing to do with how the computer “senses” its milieu, and it is only with great effort that the noetic soul can “sense” in any way like a machine, as Heidegger points out, because, again, what the soul of Dasein does is “hearken towards” in the sense of initially and always already understanding and interpreting that which it is “receiving”:

It requires a very artificial and complicated attitude in order to “hear” a “pure noise”. The fact that we initially hear motorcycles and wagons is, however, the phenomenal proof that Da-sein, as being-in-the-world, always already maintains itself together with innerworldly things at hand and initially not at all with “sensations” whose chaos would first have to be formed to provide the springboard from which the subject jumps off finally to land in a “world”. Essentially understanding, Dasein is initially together with what is understood. (Heidegger 2010, 164, emphases original)

As Stiegler points out, this worldly character, which is not that of the animal or that of the machine, is nevertheless, beyond Heidegger and Husserl, what is opened up by the technicity of the noetic soul’s intentionality:

The hylè of intentionality is always already intentional. By maintaining that listening only takes place on the basis of the originary proximity of the ready-to-hand, thereby criticizing the form/matter opposition, Heidegger allows us to introduce the question of this “to-hand”. It is a memory that is neither primary nor secondary; it is completely ignored in Heidegger’s analyses, as it was in those of Husserl, and yet it is immediately there in the tool; indeed it is the very meaning of the tool. A tool is, before anything else, memory: if this were not the case, it could never function as a reference of significance. It is on the basis of the system of references and as a reference itself that I hear the “tool” that is “the creaking coach”. The tool refers in principle to an already-there, to a fore-having of something that the who has not itself necessarily lived, but which comes under it [qui lui sous-vient] in its concern. (Stiegler 1998, 254–5, emphases original)

Now, one could imagine an objection by a cognitivist: “But even if what you say about the phenomenological character of human perceptual function is true, what is this nonexistence of the ‘now’ that is always already involved with the just past and the just coming, what is it, if not just an illusion that is maintained by unconscious recursive loops that may not be digital but that are ultimately and essentially of the very same kind as algorithmic computation? What ultimately is consciousness of any kind but such an illusion?”
And they could well add: “Because if it’s not just some vastly complex algorithmic process of that kind (in the sense of being recursive and, ultimately, biochemical and therefore physical, that is, always in the end a question of extraordinarily complex molecular processes involving nothing more ‘enchanted’ or ‘magical’ or ‘soulful’ than changes in the arrangements of atoms), then doesn’t it really presume a kind of time travel, where the past, the present and the future all magically exist at the same moment?”

What could we reply to this objection? Perhaps we would have to respond by saying: “Yes, this phenomenology is an illusion, this selective and interpretive ‘production’ (in a cinematic sense) of primary retention and primary protention is a kind of magic trick produced by the organism, and in particular by the noetic organism interacting with tertiary retentional artifacts and with other noetic organisms through those artifacts, and everything must have a timestamp in some sense: the light hits my eye at a certain definite moment and travels along the optic nerve to the brain over the course of an equally definite amount of time, just as the light hits a movie camera at a certain definite moment and travels along a cable to a computer.”1

But then we would have to add: “But the process by which this illusion occurs, the process by which the ‘present’ is always already post-produced on the basis of selection criteria deriving from secondary retention and tertiary retention (where this ‘already’ means that this post-production has already occurred before the light that hits my optic nerve becomes my visual experience of sight), this process is not only vastly more complex than current computational mechanisms, but it is also of a fundamentally different character, and we cannot even begin to conceive what it would mean to introduce this kind of spiraling recursivity into computation.” What would have to become technologically possible, for example, for a computer to experience the surprise (the sur-prehension) of seeing a “bistable percept” (such as Wittgenstein’s “duck-rabbit”; Wittgenstein 1953) “instantaneously” (that is, at what Stiegler calls “the infinite speed of desire”) change its receptive character as a new aspect dawns, yet with seemingly no change in what is given or com-prehended? (see Ross 2019).

Information never exists “independently”: it is always, in every case, down to the roots of its very possibility, a phenomenon associated with a
very particular localized counter-entropic system functionally and recursively maintaining a resonance between its component elements and a tension with its exterior. If DNA is “information,” if the impulses transmitted along the optic nerve contain information, this is informational only within the highly specific context of the functioning of genes or the nervous system, each of which is the singular product of a singular history, the product of an individuation process from which this “information” can never be divorced (except in being transformed through the use of instruments into another kind of information, relying on other kinds of “supports”). Even though a computer must have “preformatted” information in order to recognize it, this consists only in a pre-setting, pre-arranged by a noetic soul, of the way in which the universe can be broken into detectable and measurable and therefore calculable elements. That this “information” is always “pre-formatted” means that it does involve a kind of selection, but it does not arise from a process of individuation, except insofar as it involves a technical individuation process that operates in an exorganological circuit with noetic souls, that is, with simple exorganisms involved in psychosocial individuation processes.

No particular data received by a computer can, again, surprise it in such a way that it transforms the entire way in which the computer receives, whereas, for example, such a transformation is precisely what may be accomplished by an encounter with a Cézanne, and what had to be accomplished in order for such a work of art to be able to function as the opening of a new epoch of art. More than that, Cézanne had to paint Mont Sainte-Victoire, repeatedly, in order for what it already was for him to be able to appear to him for the first time, his eye being trained by his brush-equipped hand – and only on this basis could his work subsequently train the eye of the museum-equipped spectator, who in this way becomes “modern” (Stiegler 2011b, 228–9).

In other words, computation always involves the interaction between data processing and program execution, but it never involves the functional “production” of data in this phenomenological and exorganological sense, and therefore never involves the protentional elaboration of motives, expectations and desires, on the basis of the phenomenology of retention and starting with the very possibility and experience of primary retention and protention. A computer cannot reinvent the very character of data, whereas, through the accumulation of knowledge opened up through their intimate entanglement with instruments of every kind, noetic souls have been involved in numerous such reinventions, of which CRISPR gene editing is one of the most recent. If this kind of reinvention of the very character of information and its supports could be autonomously produced by a machine, then yes, perhaps we would have to talk about “real AI,” but how this could be accomplished is beyond any current understanding of what computation is, or even what it could become, outside the wildest and vaguest kinds of science fictional speculation – which for us becomes absolutely indistinguishable from magic. What is at stake in the question of “real
AI” is not just the complexity of recursive loops but the techno-phenomenologically constituted possibility that these loops can open up not just extremely complex recursivity but something else, which I would propose to call the spirals of genuine procursivity.
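To fix the terms of this conclusion, the contrast the chapter has drawn can be caricatured in a few lines of code – a minimal sketch in Python, with every name and structure invented for illustration rather than taken from any actual machine-learning system. It shows the shape the argument attributes to machine “perception”: discretely timestamped now-moments, received first and analyzed only afterwards against pre-established patterns, in loops that remain recursive.

    import time

    def classify(frame, patterns):
        """Pure calculation: score a received frame against pre-set patterns."""
        best_label, best_score = None, 0.0
        for label, score_fn in patterns.items():
            score = score_fn(frame)
            if score > best_score:
                best_label, best_score = label, score
        return best_label, best_score

    def perceive(sensor_frames, patterns):
        """Receive first, analyze afterwards: a succession of timestamped now-moments."""
        percepts = []
        for frame in sensor_frames:
            stamp = time.time()                       # every datum has a definite timestamp
            label, score = classify(frame, patterns)  # analysis comes strictly after reception
            percepts.append((stamp, label, score))
        # No input, however surprising, alters the pattern set through which
        # the system receives: the loop is recursive, never procursive.
        return percepts

    # Hypothetical usage: patterns maps labels to scoring functions supplied in advance,
    # for example perceive(camera_frames, {"cat": cat_score, "meow": meow_score}).

Nothing in such a loop hearkens before it hears, and no datum it receives can transform the very way in which it receives – which is precisely the sense in which its recursivity never becomes procursivity.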
From Moral Machines to Procursive Justice

One of the commonly discussed examples of the way that advanced automation leads to dilemmas of ethical programming is the question of how to build self-driving cars that may have to decide between two distinct potentially fatal outcomes. Of course, it is understood that this is not a question of some future “real AI” but rather a very real and concrete conundrum for automobile manufacturers and regulators. Nevertheless, not only can it be generalized to all questions about programming computers to make “social decisions,” but it also exposes what it really means to say that machines are stuck in a recursivity that can never become procursive.

In such situations, the question is how to translate “human laws” into “machine language,” and the problem is quite literal, given that the risk to manufacturers comes from the potential of being held legally as well as commercially accountable for the negative impacts of their automation systems – which is also the case for Boeing in the 737 MAX crashes, without this being a question of making an “ethical” choice between alternative outcomes in the same sense we are discussing here. To be held accountable for the consequences of this automation is to be responsible for the inadequacy of such a translation.

The discussion about how to program cars to make choices between outcomes is really about the grammatization of law: how to break it down into discrete and reproducible elements that can become the object of calculations by machines. Now, law itself was already a kind of grammatization process, grammatizing unwritten custom in a discrete and reproducible way via hypomnesic tertiary retentions on the basis of the production of a literate population. The premise underlying written law, however, is that the relationship between law and a particular situation (a case, consisting of particular circumstances) can never be automatic and always requires an element of interpretation making possible a decision. With digital automation, it is a question of grammatizing law on the basis of a different kind of analysis, turning legal questions into algorithmic and probabilistic programs. In the event of negative outcomes, it will be the character of the algorithms and the probabilities with which they operate that will be the subject of legal dispute, and the goal of such programming is to produce the least disputable versions of such algorithms.

Even if written legislation is already a form of grammatization, therefore, there is still a significant difference between the operation of legal judgment and the application of computational automation to the situation of a potentially fatal car crash where there is a choice between two negative
outcomes. What kind of knowledge is it that is being grammatized in the latter case? It is situational, collectively-held knowledge about how to make decisions within the spatial and temporal finitude of particular situations, that is, a particular locality – an ethos, precisely. It is no doubt possible and in fact necessary to make calculations so as to be able to effect good decisions concerning specific situations: without calculation, bad decisions will inevitably abound. Calculation is necessary but it can never be sufficient, and this is because calculation can never in and of itself produce a decision, and because every situation requiring decision thus amounts to a moment of potential bifurcation in a process – that is, to a process in which the decisive element that must necessarily be produced is one that did not belong to the set of elements of which the process was hitherto composed, and it is why, for example, common law is an evolving history.

For Derrida, in “Force of Law” (2002), this amounts to an aporia that he names the “epokhē of the rule.” A law is not enough: there must also be a judge, who decides the way in which the law is to be applied to the situation, which is to say, the way that the judge interprets the relationship between the general law and the specific circumstances, where this interpretation necessarily exceeds calculability, even if the judgment may involve, for example, the calculation of damages – which is always an artifice designed precisely to compensate for the incalculability of justice. Now, in law, the right of the judge to produce a new decisive element is explicitly limited so as to prevent arbitrariness, but it is not nonexistent, and this is why in higher courts this judgment must be written – so that it can be retrospectively interpreted as performatively legitimating that element of juridical invention. It is this character of situational decision as localized and temporal (that is, finite), but which must at the same time be in excess of calculability (that is, infinite), that makes it agonistic and tragic. This is ultimately because, wherever there is grammatization, there is the potential for (if not the inevitability of) proletarianization, where decision fails to exceed calculability, or fails to be good. Hence it is that when interpretation is eliminated from judgment via the automation of decision-making, the judge becomes, as Derrida says, a “calculating machine,” which is to say, no longer a judge (2002, 251–2).

But this is precisely what happens with the algorithmic and probabilistic automation of situations such as imminent collisions between self-driving cars and pedestrians. Such automation strives to “optimize” outcomes according to some utilitarian conception determined in advance and translated into a computer program, and ultimately to eliminate the necessity for decision, interpretation, and judgment – the ultimate fantasy is a world where traffic laws will become obsolete because all decisions made by self-driving vehicles will be the best possible, or at least so far superior to human judgment as to amount, from the merely human standpoint, to a form of infallibility.
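What such a translation of law into “machine language” amounts to can be suggested by a minimal sketch in Python – the class, the harm scores and the probabilities below are all invented for illustration and drawn from no actual manufacturer’s system. It makes concrete the point argued above: everything decisive has happened before the function runs, in the pre-setting of the metric, so that the “choice” is exhausted by calculation and no element of interpretation remains.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        expected_harm: float  # harm score fixed in advance by the programmed "utilitarian conception"
        probability: float    # estimated likelihood that the harm occurs

    def choose_maneuver(outcomes):
        """Return the outcome with the lowest expected cost.

        Pure calculation: the decisive elements (the metric, the weights,
        the probabilities) were grammatized in advance, so nothing here can
        produce a new decisive element in the way a judge must.
        """
        return min(outcomes, key=lambda o: o.expected_harm * o.probability)

    # The grammatized dilemma, reduced to a comparison of two numbers:
    swerve = Outcome("swerve toward the barrier", expected_harm=0.9, probability=0.5)
    brake = Outcome("brake in lane", expected_harm=1.0, probability=0.6)
    print(choose_maneuver([swerve, brake]).description)  # -> "swerve toward the barrier"

A judge, on the account given above, can produce a decisive element that did not belong to the pre-given set; choose_maneuver, by construction, cannot.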
The attempt to transform the car – or the entire traffic system, or society as a whole, which is perhaps then no longer a society – so that the car (or the traffic system, or the non-society) itself becomes a non-judge as calculating machine is a symptom of transhumanism as ideology. It is founded on an attempt to deny the necessity of what Derrida called the “mystical foundation of authority”: mystical, that is, performative, futural, on the basis of ghosts of the past that have been retained and keep coming back (in loops and spirals). More specifically, it is founded on the denial that the knowledge grammatized in such processes truly is situational knowledge: phronesis.

But it is not just a question of pointing to the epokhē of the rule, which necessitates calculation while always, at some point, suspending calculation in order to open the space for decision. It is also a matter of knowing on what basis that decision exceeding the calculable is made. Or in other words: on the basis of what criteria can the judgment required for such a decision be undertaken? It is at this point that Stiegler’s distinction between existence and consistence becomes crucial: the judge judges in the name of justice, which does not exist but consists. But what does that mean? It means that justice is an infinite protention, a protention that arises from and can only arise from very long circuits of transindividuation, that is, very long circuits of the accumulation of secondary retentions that become collective via tertiary retentions. Or in other words: by projecting expectations onto transitional objects, worlds open up that are always worlds of an ethos – as long as these worlds last (and as long as they are not completely proletarianized by automation).

Stiegler’s point is that we have access to what does not exist but consists only on the basis of what does exist: tertiary retentions, hypermaterial inscriptions stamping past time so that it remains in the present. The judge interprets the law, which does exist, on the basis of what does not exist, justice. Justice has never existed and never will exist, but it is essentially futural in the sense that it has this character of infinite protention, opened up and accessed on the basis of tertiary retentions – which persistently remind us of all the ghosts of the unjust past, in this sense facing backwards like Benjamin’s angel of history (Benjamin 1969). On the basis of that opening up and that access, and when new situations arise, laws themselves can be judged unjust, and hence laws can be revoked and replaced with new ones, as can entire legal systems. In other words, there are constant spirals operating between what exists (laws and archives of all kinds) and what does not exist (such as the idea of justice), but these are not just retentional but protentional: it is for this reason that I propose to call them procursive. The interpretation involved in any judgment involves not just feedback loops but procursive spirals, spirals that loop back from the future insofar as interpretation is always made on the basis of what tertiary retentionally exists, but what, protentionally, only consists.

I have argued that the sensitive soul is indeed involved with recursive loops between primary retention and secondary retention, but that this soul
does not know primary retention as such, nor secondary retention as such. Nor, therefore, does it know protention as such, and therefore it cannot know what only protentionally consists. And this is ultimately because the possibility of the noetic soul is opened up by the technical artifact, because it is the spiraling of tertiary retention with primary and secondary retention that introduces the possibility of exceeding the endosomatic relationship between organism and milieu and thus the possibility of acting on the basis of what one knows, of that in which one believes, or has faith, or even “wants” (in Stiegler’s sense), and that allows one to judge, and therefore decide. But this is also a possibility that can be closed off by this technical artifact, when the automation that the acquisition of knowledge always involves (automation is necessarily involved, for example, in learning scales on the piano, through which one achieves the autonomy of the pianist or the composer) ends up serving the autonomization of the machine at the expense of the noetic soul, so that the latter finds itself automated in a regressive process that consists in inducing a sleep of reason, which in turn all too often produces monsters.

The machine, too, lacks access to primary and secondary retention as such. Most fundamental of all, however, is the question of protention, which after all receives far less attention in Husserl than does retention. What Stiegler is arguing, it seems to me, is that the ideas, whatever they are (justice, the triangle, the number four, the French language), never exist, and if they do consist, it is only because we project this consistence protentionally, where this projective possibility depends firstly on primary and secondary protention, and secondly on the tertiary retentional systems from out of which such incalculable ideas arise.

AlphaGo may propose new strategies, but it can do so only on the basis of criteria that effect recursive loops, never procursive spirals. The temporal and libidinal circuits in which noetic souls are caught, however, are not just between past and present but include what is protentionally futural as well. We cannot conceive the emergence of the ideas without acknowledging that this emergence arises from procursive spirals between retention and protention. This means that the criterion for decision is supplied from a (nonexistent but consistent) future, in which we believe on the basis of our shared tertiary retentional past, involving perpetual and repetitive temporal spirals that, as futural and performative, cannot be characterized as simply recursive. The law, for example, is written and decided on the basis of an idea of justice whose roots may lie in the long-lost past that opens up for us thanks to the accumulated stock of tertiary retentions that form what Stiegler calls the noetic necromass, but it is an idea that can be interpreted and enacted only on the basis of a future that does not exist, has never existed and will never exist, but in which we must continue to believe – procursively, that is, such that it never ceases to spiral ahead of our own individual and collective ability to draw nutrition from this noetic necromass. Is it not this character of procursivity that escapes every machine and that constitutes the singular
property of the noetic soul, insofar as we continue to possess such souls and insofar as these souls continue to be possessed by what they do not have, for ill or for good – which is never guaranteed?
Note
1 And after all, isn’t this why a photograph really connects us to a past, as light affects the molecules of the film, which in a chemical bath affects the molecules of the paper on which the photograph is developed, onto which light reflects before it strikes my eyes, forming a continuous chain of electromagnetic and chemical connections – as long as we are talking about pre-digital photography?
References
Benjamin, Walter. 1969. “Theses on the Philosophy of History.” In Illuminations, trans. Harry Zohn, 253–264. New York: Schocken Books.
Canguilhem, Georges. 1991. The Normal and the Pathological, trans. Carolyn R. Fawcett, with Robert S. Cohen. New York: Zone Books.
Derrida, Jacques. 1998. Of Grammatology. Corrected edition, trans. Gayatri Chakravorty Spivak. Baltimore, MD and London: Johns Hopkins University Press.
Derrida, Jacques. 2002. “Force of Law: The ‘Mystical Foundation of Authority,’” trans. Mary Quaintance. In Acts of Religion, edited by Gil Anidjar. New York and London: Routledge.
Heidegger, Martin. 1995. The Fundamental Concepts of Metaphysics: World, Finitude, Solitude, trans. William McNeill and Nicholas Walker. Bloomington and Indianapolis: Indiana University Press.
Heidegger, Martin. 2010. Being and Time, trans. Joan Stambaugh. Albany: State University of New York Press.
Husserl, Edmund. 1991. On the Phenomenology of the Consciousness of Internal Time (1893–1917), trans. John Barnett Brough. Dordrecht: Kluwer.
Leroi-Gourhan, André. 1945. Milieu et techniques. Paris: Albin Michel.
Leroi-Gourhan, André. 1993. Gesture and Speech, trans. Anna Bostock Berger. Cambridge, MA and London: MIT Press.
Lotka, Alfred J. 1945. “The Law of Evolution as a Maximal Principle.” Human Biology 17: 167–194.
Ross, Daniel. 2009. “Politics and Aesthetics, or, Transformations of Aristotle in Bernard Stiegler.” Transformations: Journal of Media, Culture and Technology, no. 17. http://transformationsjournal.org/wp-content/uploads/2017/01/Ross_Trans17.pdf.
Ross, Daniel. 2019. “Mind Snatchers of the Anthropocene. Can Aspects Dawn Within the Gulag Architectonic?” Polish Journal of Aesthetics 52, no. 1: 21–40. doi:10.19205/52.19.1.
Stiegler, Bernard. 1998. Technics and Time, 1: The Fault of Epimetheus, trans. Richard Beardsworth and George Collins. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2008. Technics and Time, 2: Disorientation, trans. Stephen Barker. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2010. Technics and Time, 3: Cinematic Time and the Question of Malaise, trans. Stephen Barker. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2011a. The Decadence of Industrial Democracies: Disbelief and Discredit, Volume 1, trans. Daniel Ross and Suzanne Arnold. Cambridge: Polity Press.
Stiegler, Bernard. 2011b. “The Tongue of the Eye: What ‘Art History’ Means,” trans. Thangam Ravindranathan with Bernard Geoghegan. In Releasing the Image: From Literature to New Media, edited by Jacques Khalip and Robert Mitchell, 222–236. Stanford, CA: Stanford University Press.
Stiegler, Bernard. 2020. Nanjing Lectures 2016–2019, trans. Daniel Ross. London: Open Humanities Press.
von Uexküll, Jakob. 2010. A Foray into the Worlds of Animals and Humans, with A Theory of Meaning, trans. Joseph D. O’Neil. Minneapolis: University of Minnesota Press.
Whitehead, Alfred North. 1929. The Function of Reason. Princeton, NJ: Princeton University Press.
Winnicott, Donald W. 1971. Playing and Reality. London: Routledge.
Wittgenstein, Ludwig. 1953. Philosophical Investigations, trans. G.E.M. Anscombe. London: Macmillan.
14 The Chiasm: Thinking Things and Thinging Thoughts. Our Being with Technology
Lars Botin
The Chiasm

The French phenomenologist Maurice Merleau-Ponty dealt extensively with the chiasm as a linguistic/semantic and physiological/phenomenological figure in the latter part of his work, of which the main part was published posthumously. Merleau-Ponty pointed to the force of the figure of the chiasm in relation to the representation of connectedness in between the body and the world, so that the chiasm showed how the body is intertwined in the world, as the world emerges in the body. It is the overall claim of the chapter that things mediate this in-betweenness. Things are what connect us to the world, and vice versa. The chapter focuses on how we are together with things, and how this being together is constantly evolving through processes that, overall, can be classified as “technical.” By this I do not subscribe to instrumental and “one-dimensional” readings and understandings of the modern technological world – on the contrary, I think we are engaged in a playful round-dance and mirror-play with our nonhuman companion, i.e. things/technologies.

In this chapter, I deal with a variety of philosophies of technology, ranging from classical phenomenology, post-phenomenology, actor-network theory and posthuman approaches to anthropological analyses of human–technology relations. This is the orchestral setting for “technics as space of contemporary existence” (editor’s comment), and, furthermore, an attempt to create a philosophical framework that finally embraces our nonhuman companion as an existential creature.

As for the linguistic/semantic chiasm, which I became acquainted with during Danish lessons in high school, it is characterized by its tendency towards closure, because it actually remains self-referential and does not point towards possible continuation or, for that matter, expansion. In brief, the figure reads A/B*/B*/A: “The sun is red and blue is the moon” or, as it goes in the title of this chapter: “Thinking things and thinging thoughts.” At the same time, there is also a regular continuous rhythm, A/B/A/B, based on the grammatical construct: verb, substantive, verb, substantive. This continuous flow is what brings speed and motion into the figure, which, as I shall stress, is crucial in order for the figure to reflect reality.
Merleau-Ponty, and later Jacques Derrida, were inspired by the physiological/phenomenological reading of the concept, which depends on Martin Heidegger’s more open and less finite interpretation of the chiasm as the fourfold: earth, sky, divinities and mortals. Merleau-Ponty claims that the chiasm is: “a figure for thinking through the relationship between the body and the mind, the factual and the ideal” (qtd. in Toadvine 2011, 339). The chiasm [x] is a crossing, or erasure as Derrida (1981, 44) would have it, where cross-fertilization in between what has been crossed/erased, and the erasure/crossing as action, takes place, and a new being is allowed for. Heidegger (1971, 175–80) wrote that in the crossing of the fourfold there is a paradoxical withdrawal of Being that allows for a new (poetic) understanding of what Being is. It is in the paradoxical withdrawal of Being that space is created for a new form of being, the becoming of which Heidegger characterizes as a nurturing, cherishing and flourishing.

An analogous movement can be found in the recent philosophy of technology of Luciano Floridi in The Onlife Manifesto (2015): “We believe that societies must protect, cherish and nurture humans’ attentional capabilities” (2015, 8). Floridi is concerned with how digital technologies are attacking and threatening our capabilities of being empathic and our being together with others, and how “technologies shape us as humans, while we as humans critically shape technologies” (ibid., 8, my italics). This hybrid relationship of bridging and connecting can be read as a chiasm: Technologies–Humans/Humans–Technologies, although there is a rather significant asymmetry in the construction because (as I have emphasized in italics) humans critically shape technology, whereas the workings of technologies on humans are left unremarked. Poiesis is at work in the construct of the chiasm, and, accordingly, techné is the driving force in poiesis, at least if we are to take Aristotle’s basic claims for granted. Merleau-Ponty wrote in “The Intertwining – The Chiasm”:

My body model of the things and the things model of my body: the body bound to the world through all its parts, up against it (arrow) all this means: the world, the flesh not as fact or sum of facts, but as the locus of an inscription of truth: the false crossed out, not nullified. (1968, 131)

It is the intersection or the crossing of the body and things that creates the locus for inscription. In Merleau-Ponty’s “locus,” and Heidegger’s “clearing,” truth happens. My reading of what happens as the chiasm is at work in Thinking Things and Thinging Thoughts is less ambitious and perhaps even less philosophical because it is not tuned towards truth, righteousness and beauty. Rather, it is an attempt to show how we are connected to things, or as Merleau-Ponty would have it, to the flesh of things, through our bodies and senses. Our way to the world and the things is bodily, and thinking/thoughts are mediated by this body-thing-world relation. What,
according to Merleau-Ponty, is mediated is flesh. The fleshiness of the world, the things and the body tie them together, and a sort of metamorphosis occurs as the flesh is decentered from its being:

The flesh is not matter, is not mind, is not substance. To designate it, we should need the old term “element”, in the sense it was used to speak of water, air, earth, and fire, that is in the sense of a general thing, midway between the spatio-temporal individual and the idea, a sort of incarnate principle that brings a style of being wherever there is a fragment of being. The flesh is in this sense an “element” of Being. (Merleau-Ponty 1968, 141, original italics, my bold)

If the chiasm of Thinking Things and Thinging Thoughts is this “incarnate principle” of something in the middle, or in between, and the flesh is a general thing, which is not exclusively human, but belongs to the things and the world as well, then it is this becoming of Being as flesh that should have our attention.

Merleau-Ponty did not focus on technologies and did not specify what he meant in relation to things. In this essay, I have chosen to make things and technologies synonymous, and I am aware that in doing this I might violate and misinterpret Merleau-Ponty’s reflections on what a thing is. One could say the same for what concerns Heidegger’s definitions and reflections on things. Does Heidegger, in dealing extensively with the concept of the thing, think technologies? The answer to this rhetorical question is no, but, at the same time, paradoxically yes. Heidegger wrote “the essence of technology is by no means anything technological” (1977a, 311), meaning that technology cannot be reduced to mere technical materiality, efficiency and functionality, because it belongs precisely to the realm of things; it is hence elementary and universal. Things are what tie us to the world, through bonds and strings that we cannot and should not try to dissolve or escape. We would not be humans at all in a world, if it were not for our being together with things, or in other words, technologies. In the essay “The Question Concerning Technology” Heidegger is very clear and poignant on this as he writes: “We shall be questioning concerning technology, and in doing so we should like to prepare a free relationship to it. The relationship will be free if it opens our human existence to the essence of technology” (1977a, 311, original italics). Heidegger emphasizes that we are in a position where our intertwined relationship with technology is possibly liberating and emancipating, a point that normally escapes the general critique of Heidegger as a determinist and dystopian in relation to technology.

At the core of the chiasm are technologies – in other words, things – which mediate thinking and thoughts, or human reflection. The intertwinement is, according to the French archeologist and anthropologist André Leroi-Gourhan, interdependent. In Gesture and Speech (1993) Leroi-Gourhan
combines biological evolution with technical and social evolution. It was our ability as bipedal biological organisms to use our hands freely that allowed our brains to develop and grow in size. Craft and the brain developed together – thinking things and thinging thoughts happened simultaneously. Leroi-Gourhan points to the fact that at some stage, at the end of the lithic period, the brain stopped developing and growing, while technology in a societal setting continued its exponential evolution: “The volume of the human brain has apparently reached its peak, and the (lithic) industry curve, on the contrary, is at the start of its vertical ascent” (Leroi-Gourhan 1993, 144).

The increasing evolutionary gap in between humans and technology has also attracted the attention of several philosophers in the 20th century. Among those, the French sociologist and philosopher Jacques Ellul (1964) and the American urbanist and philosopher Lewis Mumford (1967) have written extensively on the technological evolutionary path, where technology has left us behind and now we are just blunt details and parts of the mega-machines, or mere results of the workings of technology. Leroi-Gourhan is less negative and deterministic than Ellul and the later Mumford, and finds that we as humans have developed a strategy of exteriorization that we apply in order to free ourselves, hence emancipate, through and with technology:

We must get used to being less clever than the artificial brain we have produced, just as our teeth are less strong than a millstone and our ability to fly negligible compared with that of a jetcraft … We already know, or will soon know how to construct machines capable of remembering everything and of judging the most complex situations without error. What it means is that our cerebral cortex, however admirable, is inadequate just as our hands and our eyes are inadequate; that it can be supplemented by electronic analysis methods; and that the evolution of the human being – a living fossil in the context of the present conditions of life – must follow a path other than the neuronic one if it is to continue. Putting it more positively, we could say that if humans are to take the greatest possible advantage of the freedom they gained by evading the risk of organic overspecialization, they must eventually go even further in exteriorizing their faculties. (Leroi-Gourhan 1993, 265)

This means that it is the “thinging” which, according to Leroi-Gourhan, has had the upper hand, and which will eventually – as the exteriorization of our capabilities, physiological and intellectual – emancipate, empower and enhance our being in a world where we as “living fossils” still exist. In saying this, I do not adhere to the mindset of New Age “singularity,” as advocated by post- and transhumanists like Julian Huxley, Nick Bostrom and Ray Kurzweil, to mention the most prominent thinkers and philosophers of transhumanism in achronological order of some sort. Julian Huxley’s famous essay
from 1957 on the future destiny of the human species could be read as supplementary to the evolutionary ideas of Leroi-Gourhan, who was, for his part, highly inspired by Darwinian evolutionary ideas on selection, which he eventually transferred to the realms of technology and society, hence also of politics and ethics. In my opinion, the ideas of Leroi-Gourhan can be of value when it comes to reflections on how these exteriorizations should take place, in other words, how we think things. Leroi-Gourhan perceived contemporary humans as “living fossils,” and did not foresee or envision a radically new transhuman biological organism. The “living fossil” would, in this perspective, remain and persist as an enhanced, empowered and emancipated “living fossil.” In the following I shall return to the concept of thinging, and the existentialism that Heidegger conveyed to the concept.
Thinging

In the essay “The Thing” from 1951, Martin Heidegger elaborates extensively on what the thing is: how the thing things in its becoming a thing, which is something completely different from a mere object or a thing in a commonsensical meaning:

If we let the thing be present in its thinging from out of the worlding world, then we are thinking of the thing as thing … Thinking in this way, we are called by the thing as the thing. In the strict sense of the German word bedingt, we are the be-thinged, the conditioned ones. We have left behind us the presumption of all unconditionedness. If we think of the thing as thing, then we spare and protect the thing’s presence in the region from which it presences. Thinging is the nearing of world. Nearing is the nature of nearness. As we preserve the thing qua thing we inhabit nearness. (Heidegger 1971, 178–9)

The worlding world is, according to Heidegger, a round-dance and mirror-play in between the fourfold of earth, sky, divinities and mortals – a chiasm far more complex than the examples I introduced earlier – that results in a nearing. Getting close is, in this perspective, not spatio-temporal, but existential as we, in the end, “inhabit nearness.” This “conditioned” condition is the thinging that means we are almost automatically tuned towards caring, preserving and nurturing. The thinging conditions and leads our thoughts in that direction. In the round-dance and mirror-play, we are bodily inclined towards the other. The round-dance and mirror-play of the fourfold sublimates and mediates values and qualities that were not there from the beginning; they result from the thinging, which is the round-dance and mirror-play in between earth, sky, divinities and mortals. We should save the earth, receive the sky, await
the divinities, and initiate our being as mortals. Heidegger writes in the essay “Building Dwelling Thinking” from 1951:

In saving the earth, in receiving the sky, in awaiting the divinities, in initiating mortals, dwelling occurs as the fourfold preservation of the fourfold. To spare and preserve means: to take under our care, to look after the fourfold in its presencing. What we take under our care must be kept safe. (Heidegger 1971, 149)

In the thinging, or gathering, we witness how we are required/conditioned (bedingt) to be both active (save and initiate) and passive (receive and await). As active mortals, we are on the earth, and we respond to the sky/divinities by grateful reception and hopeful attendance. These qualities emanate from processes of appropriation and use, and can, in fact, show themselves in very different and various ways. This means that the thing is not essential in its ready-at-hand quality, but multistable, to phrase and frame it with the postphenomenological term of the American philosopher of technology Don Ihde (1993): “As thing; it can call forth appropriate responses, and it is not excluded from unanticipated new uses or ways of proving resistant to our uses” (qtd. in Minar 1999, 303). It is the calling from the fourfold, the round-dance and mirror-play, that can result in multiple appropriations and uses, which, as I read it, are dependent on the ways in which the “voices” from the fourfold are heard and interpreted.

Earlier I touched upon the fact that the chiasm is a closed and determinate figure, which appears as some sort of static and a-temporal outline. I pointed to the fact that the grammatical construct of verb/substantive/verb/substantive introduced a linear progressive movement and dynamics into the process of thinking things and thinging thoughts. On this note, it is worthwhile to dwell on a mathematical figure that could supplement the semantic and physiological figure of the chiasm. The lemniscate of technological mediation was introduced by Olya Kudina (2019). Kudina elaborates on Don Ihde’s human–technology–world relations, and Peter-Paul Verbeek’s figure on human–technology–world, where these relations are constituted through actions/practices and experiences/interpretations. Kudina’s introduction of the lemniscate into the equation is rather interesting, because through it, we are told that expansion and intensification are at hand, and, furthermore, that technology is constantly reinterpreted and reenacted as iterations are made in the lemniscate (see Figure 14.1). The figure also shows, as do almost all postphenomenological figures, that technology is in the middle, and bridges humans and the world. This bridging is more than a connection in between humans and the world. Heidegger writes: “The bridge swings over the stream ‘with ease and power.’ It does not just connect banks that are already there. The banks emerge as banks only as the bridge crosses the stream” (1971, 150).
Figure 14.1 The lemniscates: constantly expanding and intensifying. Source: Lars Botin, inspired by Kudina and Verbeek 2019
Technology makes us emerge as humans exactly as the world worlds as world through the mediation of technology. The lemniscate shows that this is an eternal process, where we are “caught” in the round-dance and mirror-play, or, as Andrew Pickering (1995) would have it, in a dance of agency in between humans, world, and technology.
Thinking

Which comes first, thinging or thinking? They work simultaneously, but because my focus is on technology, and not on cognitive processes of thinking, I have chosen to prioritize thinging over thinking. In classical phenomenology, it is an established idiom that when we see, we see something; when we hear, we hear something; and when we think, we think something. There is a world to be thought about and thinking happens in that world. We cannot think of a world without being in it, hence we cannot stand outside and observe a world. This ontological positioning of thinking in the world, enmeshed in things, and inseparable from she/he who thinks, shows how closely and intimately thinking is connected to things and the world.

Is thinking exclusively a human enterprise, or is it, as postphenomenology claims, something that is distributed in between humans–technology–world? The answer is that thinking happens in between these, which means that some sort of distribution is at hand. Leroi-Gourhan (1993) pointed to the fact that we have developed machines that are much stronger, faster and smarter than we are, and through the processes of exteriorization, we
should be prepared to let go of even more cognitive and intellectual capabilities that we have used to define what it is to be human. Thinking is conditioned (bedingt) by the thing it thinks, so from this perspective, there is no free thinking or free will, for that matter. The illusion of being free and unconditioned, hence able to assess and judge from a God’s eye perspective, has characterized Western analytical philosophy and science for many centuries. Even in some constructivist understandings of reality, we find that the same illusion is repeated when it comes to the analysis of, for instance, actors and networks, as Bruno Latour would have it (1999).

In this particular phenomenological and postphenomenological reading of thinking we realize that it is a messy business. The boundaries in between things and thinking are blurred to the extent that processes of thinking pass without friction or resistance into things, as things set the conditions for thinking. According to Heidegger, this has always been the case, but contemporary technological innovation and development is sublimating this osmotic condition in between thinking and things. The French philosopher Bernard Stiegler, who often agrees with Heidegger on how technics work, writes:

Technics think, and must not the connection to the future be redoubled, as the thought of technics, as what think technics? Isn’t it necessary to think that we think as technics, as it thinks? It thinks before us, being already always there before us, insofar as there is a being before us; the what precedes the premature who, has already always pre-ceded it. (2009, 32)

We are born into a world which is already always technical, and we are mentally and physically shaped and molded according to what (things and technologies) is/was already always there. Heidegger coined the phrase “planetary technicity” to describe this phenomenon, meaning that the essence (Wesen) of technology in the modern age has pervaded and colonized everything: the who is posed in different and varied positions by the what. According to Stiegler, there is an asymmetrical meeting in between the who and the what, which Heidegger, to some extent, pointed to with the concept of “enframing.” We are already always a standing reserve for technical thinking, hence optimization, efficiency and exploitation. Heidegger meant that this posture (Gestell) was dominant in relation to modern technologies, but, at the same time, pointed to the fact that other postures are possible because in technology, there is also the saving power (Heidegger 1977a).

The American philosopher of technology Carl Mitcham is aware of the fact that technology has a decisive impact on what it means to think, and how we should address this technologically mediated thinking:

Within such a logical framework, propositions are not properly true or false, but rather more or less useful or appropriate to a context.
Propositions that are not strictly true or false are further linked in arguments that are not strictly valid or invalid. This obviously suggests a pragmatic logic, and indeed pragmatist philosophies of science such as John Dewey’s have tended to view science as an inherently technological endeavour. (Mitcham 1994, 99)

The expanding genetic logic of technology means that it is not restricted to scientific knowledge production and practice but is omnipresent and omnipotent in every realm of human existence. Our potential different postures, or the genetic logic of our bodies and cognition, make way for a sublime “gathering” (or symbiosis) in between the logics of technology and humans, which should be ethically and politically framed. I shall now turn my attention towards even more speculative thoughts on what it means to be and become in a world where everything is in constant movement and flux. In doing this I try to capture and grasp what thinking is in relation to the acceleration and speed of contemporary technology.
The Question Concerning Speed

Gilles Deleuze and Félix Guattari wrote in A Thousand Plateaus: Capitalism and Schizophrenia (1980) that the in between is a very decisive place to be and understand: “The rhizome has no beginning or end; it is always in the middle, between things, interbeing, intermezzo … The middle is by no means an average; on the contrary, it is where things pick up speed” (2007, 28). The relation in between thinking and things is osmotic and rhizomatic. Substance flows from one condition to another and the acceleration in the flow is exponential:

Between things does not designate a localizable relation going from one thing to the other and back again, but a perpendicular direction, a transversal movement that sweeps one and the other way; a stream without beginning or end that undermines its banks and picks up speed in the middle. (ibid.)

This is what happens in the lemniscate of the human–technology–world. It is a whirl of speed, extension and intensification, where we are conditioned to be and act, but, as Leroi-Gourhan pointed out, we actually manage, even as “living fossils,” to adapt to this condition – it is an evolutionary process. N. Katherine Hayles’s voice should also be heard on these matters, because she makes a very qualified attempt to show how technology acts and thinks, and how this acting and thinking can be considered in relation to humans. In the book Unthought: The Power of the Cognitive Nonconscious she writes:
“On the technical side are speed, computational intensity, and rapid data processing; on the human side are emotion, an encompassing world horizon, and empathic abilities to understand other minds” (2017, 140). The question is how to combine these two in common effort while remembering that: “Ultimately the humans are the ones that decide how much autonomy should be given to the technical actors, always recognizing that these choices, like everything else within a cognitive assemblage, are interpenetrated by technical cognition” (ibid., 137). There is no free thinking, no free will, and choices are conditioned and “interpenetrated by technical cognition.” Nevertheless, we should constantly reflect upon this intriguing relationship when in the whirl and speed of things, because: “…when we design, implement, and extend technical cognitive systems, we are partially designing ourselves as affecting the planetary cognitive ecology: we must take care accordingly” (ibid., 141). The wording of Hayles recalls Merleau-Ponty’s remark on how things are fleshy, as “interpenetration” occurs. It is an intercourse in between human–technology–world where new and hybrid beings are constantly and exponentially becoming in a whirl of speed.

The philosopher Bernard Stiegler, who, as I mentioned above, was highly influenced by the early Heidegger and studied Leroi-Gourhan extensively in the 1990s, also has a notion of speed and acceleration. Once again, it seems as if the challenge is to create a bridge in between slow humans and fast technology. Stiegler writes: “the speed of technical development since the Industrial Revolution has continued to accelerate, dramatically widening the distance between technical systems and social organizations as if, negotiation between them appearing to be impossible, their final divorce seems inevitable” (2009b, 3). Stiegler’s rather doomsday-esque prophecy is pretty much in line with what Paul Virilio wrote in Speed and Politics: An Essay on Dromology (1977; Virilio 1986), where humans and humanity in the whirl of speed would cease to exist. Yet another voice on the problematic consequences of speed and acceleration can be found within the framework of Critical Theory, where a representative of the fourth generation of the Frankfurt School, Hartmut Rosa, has dealt extensively with the technological acceleration of all possible human and social relations. This acceleration has eliminated meaningful gathering and work in lifeworld settings, and enforced the power of an anonymous capitalist system, epitomized in tech giants like Amazon, Google, Facebook, Apple, etc. (Rosa 2015).

Let us return to Hayles in order to escape Stiegler’s, Virilio’s, and Rosa’s ostensibly dystopian and dark visions. A central concept in Hayles’s ontological view on how we are together with technology and technological systems is, as I mentioned above, interpenetration, with all the connotations and associations the term brings along. To my mind, she is thinking in the same way as Leroi-Gourhan, Heidegger, Deleuze and Guattari, and myself, just using a term that is much more carnal and bodily. We interpenetrate and we cross-fertilize through the intercourse:
Human complex systems and cognitive technical systems now interpenetrate one another in cognitive assemblages, unleashing a host of implications and consequences that we are still struggling to grasp and understand … human intervention is certainly possible when aimed at systemic dynamics, and that such interventions can and do change the cognitive ecologies to make them more sustainable, more affirmative of human flourishing, and more equitable in their operations. (Hayles 2017, 175, my italics)

We are together with and through technologies, and this being together, this symbiosis, can bring us further and hopefully solve our problems, which, for their part, are as complex as our being with technology. According to Hayles, as is the case in postphenomenology, our relation to technology is still asymmetrical. Human intervention is possible, but, of course, interdetermined with technologies and things. While writing about “the utopian potentials of cognitive assemblages” she envisions: “Working together in recursive cycles, human conscious analysis, human nonconscious cognition, and technical cognition can expand the range and significance of insights beyond what each can accomplish alone” (ibid., 211). We need to be together with machines and technical systems that can think faster and better than we can, and probably we will also have to give up the distinction, at which the blurriness of boundaries points.
Tinkering with Thoughts and Things

The one who does not race, who does not dance, thus ignores an aspect of thought.
(Stiegler 2009b, 27)
In order to find out what to do and where to direct our interventions, I argue that it is necessary to tinker with ethics and politics. On this note, we must try to imagine what kind of ethics and politics is born from cross-fertilizations in between humans and technologies. Bruno Latour wrote on the “parliament of things” (2005) and, inspired by Heidegger, stated that the thing is much more, and other, than an external object, and that things are folded. They are folded in myriads of ways, which transgress the fourfold of Heidegger, and they contain within them humans and nonhumans that interact according to the laws and rules of the thing. Latour discusses how most democracies in Scandinavia have this folded vision of the thing, which manifests in their parliaments, named (since the premodern age) ting (thing): in Iceland, Altinget; in Norway, Stortinget; in Denmark, Folketinget; in Greenland, Landstinget; and on the Faroe Islands, Lagtinget.1 So, what happens in these parliaments is thinging: there is thinging in between elected members, in between the site/locus and the people moving in the corridors and offices, and in between
the overall political framework of democracy, which is constituted in laws and regulations and humans/nonhumans. The thing is a living organism, which is kept alive and feels alive through negotiations, discussions, debates, struggles, arguments, and quarrels, an affair that, in this case, is not exclusively human. Stiegler has a complementary conception of the thing or the “technical object,” which in this respect is the same:

The industrial technical object is not inert. It harbors a genetic logic that belongs to itself alone, and that is its “mode of existence”. It is not the result of human activity, nor is it a human disposition, only registering its lessons and executing them. The lessons of the machine are “inventions” in the ancient sense of the term: exhumations. (Stiegler 2009a, 68)

There is an opening in the genetic logic of the machine towards a human genetic logic, where a cross-fertilization is possible. By this, I do not mean that a brave new world of transhuman chimeras, monsters and/or hybrids is the outcome, but rather that potentials for fast, structured, responsible and sustainable solutions to imminent and immense problems are there to be found. This is why it makes sense to talk about how thinging constitutes thoughts that are political and ethical in their essence.

Hitherto I have based my arguments on classical phenomenology through the perspectives of Maurice Merleau-Ponty and Martin Heidegger, and, furthermore, looked at elaborations made by André Leroi-Gourhan, Don Ihde, Peter-Paul Verbeek, Carl Mitcham, Bernard Stiegler, N. Katherine Hayles, and Gilles Deleuze and Félix Guattari, and briefly touched upon Bruno Latour and his ideas on what a thing is. In the following, I shall try to frame all this in a more explicitly postphenomenological perspective, because I argue that this will explain what goes on in between things and thoughts.
The Mediations of Things and Thoughts

The Dutch philosopher of technology Peter-Paul Verbeek has, during the past decades, elaborated what he calls mediation theory, and here I shall try to relate the main conceptualizations from the above to Verbeek’s framework. According to Verbeek (2015), mediation is what constitutes the interconnection in between users, designers and technology, where technology is considered on an equal level with “humans” in contributing to the mediation. The outcome is technological intentionality and scripts: our perception of reality is transformed, and our actions are the result of translated scripts. Verbeek is trying to build a conceptual bridge in between phenomenology (transformation of perception) and Actor-Network Theory (translation of action), linking technological intentionality and scripts through mediation.
Ever since the publication of What Things Do (2005), Verbeek has focused on how to make patchworks, bricolages and assemblages of different patches and elements, where the thoughts and ideas of Don Ihde, Albert Borgmann, Michel Foucault, Bruno Latour, and Martin Heidegger are brought together in order to capture what things are in their complexity on a philosophical, anthropological, political, and ethical level. In my own work, I have performed the same type of eclectic patchworking, inspired by classical phenomenology, postphenomenology, ANT, Critical Theory, and posthuman theorists such as Gilles Deleuze, N. Katherine Hayles, and Donna J. Haraway. I am fully aware of the risks in doing this kind of bricolage, but in order to grasp the dance of agency, the round-dance and mirror-play, we need this kind of scaffolding. In my perspective, and in relation to Thinking Things and Thinging Thoughts, I stress the force of transformation and emergence of technology in the process of understanding what is mediated.

Peter-Paul Verbeek claims that mediation is distributed, as are intentionality, responsibility and freedom, and that users, designers and technologies play an equal part in the constitutional processes (2011, 2015). In the postphenomenological framework, the constitutions of the “what” are multistable and multiple, which means that chimeras, monsters, hybrids and cyborgs are possible beings, just as less dramatic, transformed beings may result from the constitutional process. Both a carpenter and a murderer may be constituted through the embodiment of the hammer (Ihde 1990). It is important to note that we are not talking about a process of production or construction, but rather of creation and constitution, where the elements work together in co-creational and co-constitutional ways. Things become as the work goes on, and it is in the working that transformations emerge: creatures and constitutions. Things are mediated by thinking and thinging, and the result is the beginning for reflexive thoughts on the new thing. We are within the hermeneutic spiral (Gadamer 2006), and within the lemniscate of eternal intensification and expansion. The algebraic figure of the lemniscate (a number 8 on its side), whose symbol was originally introduced by the British mathematician John Wallis in the 17th century (Burton 2011, 566), is in this specific context seen as a representation of eternity and of the ways in which things are infinitely and constantly in perpetual motion and interaction.

Thinking thoughts is not exclusively a human affair, as Leroi-Gourhan predicted and as Hayles has proven, and thinging things is not about how objects may interact with each other or how they are as mere physical entities. Technology mediates, combines, and composes ever new constitutions, configurations, and creatures that are set in a world which is also configured and composed in new, multistable ways. Leroi-Gourhan pointed to the human as a “living fossil” in a brave new world of technology, and to how this “living fossil” would survive as such, independently of technological development. In my perspective, we have to give up the idea of the human as an immutable given,
because technology is not just “planetary technicity” on a macro level that sets the frame for human activity, but also penetrates our docile bodies (Foucault 1977), where it transforms our capabilities and forces. We are told that we live in the age of the Anthropocene, where humanity, due to our numbers and technological capacity, is changing the planet radically in relation to climate, geology and biology, and where we are in a state of permanent crisis on all levels. The anthropocentric attitude in the paradigm of the Anthropocene, which certainly has some rather dystopian notes, has been affiliated with Heidegger’s philosophy of technology (Ihde, forthcoming), but, as I read the current situation, there is a sort of hubris in viewing humanity as this force of destruction: 7.5 billion people are not many, and our collective force as humans is very small. What has enabled humans to change the conditions on earth is technology: hence the age that we live in is technological, and it is the composition and configuration of technological intentionality in combination with human agency that draws the picture. The attitude should be techno-anthropocentric, and the age should be coined the Techno-Anthropocene.

As shown in the lemniscate (Figure 14.2), there is a center from which technology, like a spider in a web, constitutes new humans and new worlds. These are not necessarily dystopian; on the contrary, our being with technology in a world that is constantly evolving and revolving paves the way for new constitutions and creatures that would and could have the potential to safeguard the flourishing of human–technology–world hybrids through nurturing and caring. As I argued above, this is not a defense of transhuman/posthuman views, nor a manifesto, because, as I see it, the utopian visions of most transhuman and posthuman philosophies are grim in their conceptions of humans and society. Basically, they focus on egocentric, hierarchical and antidemocratic technological innovations and developments, where individuals control and master technology for their own benefit: in other words, singularity. Elsewhere, I have written about how this technological condition we are living in is sublime (Botin 2017), and by this I mean that we are at the frontier and on the boundaries of what it means to be human, and we are set in positions where choices have to be made. Transhuman conditions and solutions are a part of the sublime, but so are the upheaval and appraisal of what it means to be human, and of what a good society is.

Verbeek has recently been inspired by Michel Foucault and his views on how we should move at the limits, confronting, challenging, and discussing these limits in order to understand our being within them and the limits themselves. Foucault also addresses the concept of “attitude” as a way of being ethically in a world. The “modern attitude,” as described in “What is Enlightenment?”, is characterized by a critical and in-depth questioning of what reality and presence really are:

For the attitude of modernity, the high value of the present is indissociable from a desperate need to imagine it, to imagine it otherwise
than it is, and to transform it not by destroying it but by grasping it in what it is. (Foucault 1984, 40)

The chiasm of Thinking Things and Thinging Thoughts is the framework for this transformation, imagination, confrontation, challenge, and discussion. The same goes for the organism of the lemniscate. The organic evolution of human–technology–world relations and mediations is infinite and eternal, but still it should happen, and does happen, within the limits. If we place ourselves outside the limits, like transhumanists, or like a stultified science that still believes in criteria such as value-neutral and objective knowledge and technology, then we certainly are doomed, as many proponents of the Anthropocene would have it.
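As a brief mathematical aside, for readers who want the algebraic figure itself: the figure-eight curve invoked above is standardly realized as the lemniscate of Bernoulli, which can be written in Cartesian and polar form as

$$(x^{2} + y^{2})^{2} = a^{2}\,(x^{2} - y^{2}), \qquad r^{2} = a^{2}\cos 2\theta,$$

where the parameter $a$ sets the size of the two loops and the curve crosses itself at the origin, so that it can be traversed endlessly in the perpetual motion described above. Wallis’s own documented contribution is the related symbol of infinity, ∞, introduced in 1655 (Burton 2011, 566).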
Conclusions

Claude Lefort wrote in the foreword to the English translation of Merleau-Ponty’s The Visible and the Invisible:

Thus the withdrawal of the things from the world accompanies the withdrawal of him who thinks them, and the work exists completely only in virtue of this double absence, when, all things having become thoughts and all thoughts having become things, it suddenly seems to draw the whole of being to itself and to become, by itself alone, a source of meaning. (1968, xiv)

The chiasm of Thinking Things and Thinging Thoughts is an organism that moves in the in between, or in what Foucault would think of as liminality. It is the sublime condition of moving on the limits, where things constantly transit, transgress, transcend, and transform in alchemic processes, from the material to the ephemeral and vice versa; from things to thoughts and from thoughts to things. The autopoietic work of the chiasm, self-referring and closed as a semantic construction, is transformed by the force of the triad human–technology–world. The expanding and intensifying power and essence of technology that Jacques Ellul and Martin Heidegger pointed towards in their writings from the 1950s challenges the boundaries and transforms the core.

I am aware of the fact that by leaning heavily on Martin Heidegger’s extensive framework of thought I run the danger of being blamed for many things: firstly, for buying into his ideological and political positions; secondly, for being a transcendentalist; and thirdly, for being a technological determinist/essentialist or even a dystopian. I can guarantee that I am nothing of the sort. In fact, in my opinion Heidegger was not an essentialist/determinist/dystopian, because he saw technology as heterogeneous in its “essence,” multi-intentional in its determinism and hence open to all sorts of “topoi,” be they utopian, eutopian or
dystopian. He got it all wrong in his evaluation of modern technologies, because he was blinded by tradition, nostalgia and ideology. Nevertheless, he tried to carry his argumentation through technology, hence not producing the transcendental “armchair philosophy” of which he has been accused by postmodern philosophers and thinkers of the “empirical turn,” among them Don Ihde (2010) and Andrew Feenberg (2017).

We are neither “living fossils,” as Leroi-Gourhan would have it, nor transhuman supermen in the Anthropocene, but rather liminal creatures that move and are moved through and with things; in other words, technologies. We do not destroy or disrupt, but transform, through our common (for both humans and technologies) imagination, the world in which we live. I have called for a certain Techno-Activism in order to confront the imminent and immense challenges we are facing concerning climate change, migration, and social injustice and inequality (Botin 2020). This is the direction in which we should be imagining and moving. Things and technologies in their omnipresence and omnipotence are integrated and indissociable parts of this willful action, where hope, care, nurturing, cherishing, and flourishing already and always must be the constitutional power of our common efforts.

To love and bear; to hope till Hope creates
From its own wreck the thing it contemplates
(Prometheus Unbound, Percy Bysshe Shelley, 1820)
Note

1 The Icelandic Altinget was established in 930 AD and was imported from Scandinavia. Gulatinget in Norway and Isøretinget in Denmark are older and can be dated back to 500 AD. Tinget was a yearly event where the freemen of the nation met and discussed various issues that needed their attention. Occasionally kings were elected at these gatherings, but mostly it was about legislation, regulation, and taxes.
References

Botin, Lars. 2017. “Sublime Embodiment of the Media.” In Postphenomenology and Media. Essays on Human-Media-World Relations, edited by Yoni Van Den Eede, Stacey O’Neal Irwin and Galit Wellner, 167–184. Lanham, MD, Boulder, CO, New York, and London: Lexington Books.
Botin, Lars. 2020. “Building Scaffolds: How Critical Constructivism and Postphenomenology Could Gather in Common Enterprise.” Techné: Research in Philosophy and Technology 24, nos. 1–2: 41–61.
Burton, David M. 2011. The History of Mathematics. An Introduction. London: McGraw-Hill.
Deleuze, Gilles and Félix Guattari. 2007. A Thousand Plateaus. Capitalism and Schizophrenia. London and New York: Continuum.
Derrida, Jacques. 1981. Dissemination. Chicago: Chicago University Press.
Ellul, Jacques. 1964. The Technological Society. New York: Vintage Books.
Feenberg, Andrew. 2017. Technosystems. The Social Life of Reason. Cambridge, MA: Harvard University Press.
Floridi, Luciano, ed. 2015. The Onlife Manifesto. Being Human in a Hyperconnected Era. New York: Springer Open.
Foucault, Michel. 1977. Discipline and Punish. The Birth of the Prison. London: Penguin Books.
Foucault, Michel. 1984. “What is Enlightenment?” In The Foucault Reader, edited by Paul Rabinow, 32–50. New York: Pantheon Books.
Gadamer, Hans-Georg. 2006. Truth and Method. London and New York: Continuum.
Hayles, N. Katherine. 2017. Unthought. The Power of the Cognitive Nonconscious. Chicago and London: Chicago University Press.
Heidegger, Martin. 1971. “The Thing.” In Poetry, Language, Thought. New York: HarperCollins Publishers.
Heidegger, Martin. 1977a. “The Question Concerning Technology.” In Basic Writings. New York: HarperCollins Publishers.
Heidegger, Martin. 1977b. “Building Dwelling Thinking.” In Basic Writings. New York: HarperCollins Publishers.
Huxley, Julian. 1957. “Transhumanism.” In New Bottles for New Wine, 13–17. London: Chatto & Windus.
Ihde, Don. 1990. Technology and the Lifeworld. From Garden to Earth. Evanston, IL: Northwestern University Press.
Ihde, Don. 1993. Postphenomenology: Essays in the Postmodern Context. Bloomington: Indiana University Press.
Ihde, Don. 2010. Heidegger’s Technologies: Postphenomenological Perspectives. New York: Fordham University Press.
Ihde, Don. Forthcoming. “From Heideggerian Industrial Gigantism to Nanoscale Technologies.” Foundations of Science, special issue “Rethinking Technology in the Anthropocene.”
Kudina, Olya. 2019. The Technological Mediation of Morality. Value Dynamism, and the Complex Interaction Between Ethics and Technology. Enschede: University of Twente.
Kudina, Olya and Peter-Paul Verbeek. 2019. “Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy.” Science, Technology & Human Values 44, no. 2: 291–314.
Latour, Bruno. 1999. “Do You Believe in Reality?” In Pandora’s Hope. Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press.
Latour, Bruno. 2005. “From Realpolitik to Dingpolitik or How to Make Things Public.” In Making Things Public. Atmospheres of Democracy, edited by Bruno Latour and Peter Weibel, 14–41. Cambridge, MA: MIT Press.
Leroi-Gourhan, André. 1993. Gesture and Speech. Cambridge, MA: MIT Press.
Merleau-Ponty, Maurice. 1968. The Visible and the Invisible. Evanston, IL: Northwestern University Press.
Minar, Edward H. 1999. “The Thinging of the Thing: A Late Heideggerian Approach to Skepticism?” Philosophical Topics 27, no. 2: 287–307.
Mitcham, Carl. 1994. Thinking through Technology. The Path between Engineering and Philosophy. Chicago: Chicago University Press.
Mumford, Lewis. 1967. Technics and Human Development. The Myth of the Machine. New York: Mariner Books.
Pickering, Andrew. 1995. The Mangle of Practice. Time, Agency & Science. Chicago: Chicago University Press.
Rorty, Richard. 1998. “A Master from Germany. One of the Greatest Western Philosophers Was Also a Nazi.” New York Times, May 3.
Rosa, Hartmut. 2015. Social Acceleration. A New Theory of Modernity. New York: Columbia University Press.
Shelley, Percy Bysshe. 1898 [1820]. Prometheus Unbound. London: J.M. Dent & Company.
Stiegler, Bernard. 2009a. Technics and Time, 1. The Fault of Epimetheus. Palo Alto, CA: Stanford University Press.
Stiegler, Bernard. 2009b. Technics and Time, 2. Disorientation. Palo Alto, CA: Stanford University Press.
Toadvine, Ted. 2011. “The Chiasm.” In The Routledge Companion to Phenomenology, edited by Sebastian Luft and Søren Overgaard, 336–347. London: Routledge.
Verbeek, Peter-Paul. 2005. What Things Do. Philosophical Reflections on Technology, Agency and Design. University Park, PA: Penn State Press.
Verbeek, Peter-Paul. 2011. Moralizing Technology. Understanding and Designing the Morality of Things. Chicago and London: Chicago University Press.
Verbeek, Peter-Paul. 2015. “Beyond Interaction.” Interactions XXII, no. 3 (May–June): 26–31.
Virilio, Paul. 1986 [1977]. Speed and Politics: An Essay on Dromology. New York: Semiotext(e).
Index
activism: data 5; Techno-Activism 248
Adams, Joshua 9, 10, 16, 178–85
affect 50, 53, 55, 62, 65–8, 71, 81, 126
affection 62, 66–7, 223
affective: environment see environment; logic 63, 72; publics 66–7; stance 82, 85, 89–93
affordance 49–50, 62, 68, 89, 112; narrative 50, 55, 57; platform 57; technological 68, 99
agency 15, 36, 42, 53, 72, 72n, 82, 239, 245; of storytelling see storytelling; distributed 37, 39, 106; meaning-making 51; meaning-producing 63–5; narrated 63–5; narrative see narrative; nonconscious 90; nonhuman see nonhuman
agent 1, 3, 4, 15, 52, 56, 61–5, 68, 72, 77; autonomous agent 191; market agent 161–73, 174n; moral agent 118; nonconscious agent see nonconscious; rational agent see rationality
algorithm: black box algorithm 139, 141, 145; big data algorithm 80; higher education admission algorithm 8, 12, 15, 131, 135–6, 140, 142; predictive algorithm 16, 94, 161; recruitment algorithm 9, 15, 131, 135–6, 140–42, 144–45; social algorithm 134–38; sorting algorithm 8, 140–44, 146
algorithmic ethos see ethos
algorithmic governmentality see governmentality
alienation 5, 206, 212
Alombert, Anne 8, 14, 16, 202–13, 214
AlphaGo 220–21, 230
Amazon 10, 78, 81, 95n, 112n, 194, 242
analog: analogic reality see reality; specters of 15, 117–18, 125, 127
android 2, 204; schizoid android 15, 98–101
animal 28, 30, 36, 167, 198, 214, 224; animal soul see soul
Anthropocene 42, 246–8
Antigone 132, 135–7
Arendt, Hannah 118, 121–3
Aristotle 16, 214, 216, 234
artificial intelligence, AI 2, 3, 5, 13, 14, 16, 17, 124, 139, 146n, 147n, 190, 203, 208, 220–1, 223, 226–7; and algorithms 134, 135, 144, 146; and delegation of human skills 119, 120; and soul 214–5; and technical externalization 209–11
assemblage 15, 51, 60–1, 65, 70, 71–2, 72n, 77–8, 245; cognitive 9, 36–41, 43, 64, 83, 93–4, 98–100, 101, 106–12, 242–43; cyborgian 85, 89; human–technical 51, 61, 64, 67, 68; narrative 57; and schizoid nondroids 99–100, 102–4, 111; sociotechnical 159; technical 156, 157
author, authorship 7, 15, 52, 53, 61, 64, 67, 69–71; see also narrative authority
automated trading systems 16, 161, 172
automatic literature see literature
automatic society 126, 209
automaton 16, 119, 202, 203–4, 206, 212
autonomous, autonomy 118–19, 123; agent see agent; decisions 172; machine see machine; subject see subject; vehicle 203, 211, see also self-driving cars; weapon systems 1, 9, 11, 102
Bayesian epistemology 161, 163–5, 168–70, 173
behavioral data see data
Being 36, 195–200, 215, 234–6, 238–47
Berns, Thomas 80, 94, 133–4, 136, 138, 142
big data see data; algorithm see algorithm
biosymbiosis 27, 32, 34, 43
BitTorrent 156, 158, 159
black box 5–6, 13, 138; algorithm see algorithm
Botin, Lars 17, 233–50
bounded rationality see rationality
Canguilhem, Georges 202–3, 207, 216
Canonical 179–80, 181–4
capitalism 10, 32, 123, 124, 174n, 181, 184, 210, 241; digital 4, 125–6, 128, 202; emotional 51; information 98, 100, 113n; print 68; surveillance 4, 68, 80, 82, 98–100, 101–4, 108, 110–11, 112n, 134, 183
chiasm 233–5, 237–8, 247
classic game see game
cognition 30, 35, 37, 72–3n, 82, 193, 203, 209, 241; distributed 39, 100; extended 99, 113n; nonconscious 10, 38, 86; technical 242, 243
cognitive assemblage see assemblage
cognizer 30, 31, 37, 39; assemblages of 109; human 104, 106–7, 111; nonhuman or technological 65, 99, 103
Colebrook, Claire 61, 66–7
collective 220, 222, 229; affect 53; body 117; intelligence 16, 207, 209–11; memory 136, 207; user 51, 57
colonialism see digital colonialism
computation 1, 11, 83, 225, 226; and actors 67; filtering 80; and media 31, 34–9, 103; and models 172–3
computational media see digital media
conscious, consciousness 7, 10, 15, 35, 37–9, 77, 93–4, 104, 134; choices and decisions 81, 91, 143, 153; intentional 216, 223
COVID-19 133, 147n
cultural interface see interface
cybersymbiosis 27, 34
data: behavioral 15, 68, 98, 103, 107, 111; big 5, 11, 13, 79, 80, 173, 190, 202; activism see activism; colonialism see digital colonialism; protection 5, 145; personal 5, 81, 145
Dawson, Paul 53, 55, 56, 62, 64, 70, 71
de Finetti, Bruno 163, 164, 174n
delegation 159; of cognitive capacities 119, 121, 209, 210; of decision-making 8, 14, 15, 117, 152, 191, 212
Deleuze, Gilles 35, 66, 72n, 241, 242, 244, 245
Derrida, Jacques 125, 132, 140, 141, 144, 216, 228, 229, 234
Descartes, René 199, 215
desire 36, 42, 117, 119, 142, 223; and selection 7, 15, 65, 77, 80–2, 93–4, 208
Dick, Philip K. 98, 101, 112n, 113n
digital: colonialism 178–79, 182–83; ecology see ecology; environment see environment; interface see interface; digitalization 1, 13, 117, 124, 128, 131; media 3, 6, 7, 14, 60–72, 77; milieu 16, 33, 37, 127, 205–7, 211–12; platform see platform; reality see reality; tool 11, 14, 15, 16, 178–9, 182–3; zombies 118, 124–5
distributed agency see agency
driverless cars see self-driving cars
dystopia 9, 101, 103, 107, 111, 235, 242, 246, 247–8
economics, economy: evolutionary 162, 166–7, 173; story 51
ecology: general 9, 35–7, 41, 43; market 167, 172
effectivity 154, 159
Eggers, Dave 15, 99, 101, 103–4, 111, 113n
emergence 53, 168, 169, 230
endosomatic, endosomatization 215, 219, 222, 230
entanglement 14, 15, 61–2, 64, 71, 197, 222, 226
environment: affective 15; digital 3, 7, 8, 10, 51, 62, 63, 65, 67, 68, 70–1
environmentality 3, 35, 60
epiphylogenetic memory see memory
ethical: ordeal 192, 196–8; world (Sittlichkeit) 132
ethos 52, 55, 131–3, 229; algorithmic 134–9; schizoid 100–1, 104, 111
European Union, EU 5, 6, 12, 106, 135, 145, 147n, 148n
evolutionary economics see economics
evolutionary game see game
exemplum 54, 56–7
exosomatic, exosomatization 14, 126, 215, 222–3
experience 81, 142, 218–9, 221, 225, 238; and content 6–7, 62, 64; and digital environments 9, 35, 60, 67, 71; and interfaces 86, 88, 92–3; moral and ethical 192–3, 194, 196, 198–200; networked 101, 102, 104, 110; and storytelling 11, 14, 49–52, 54–7, 60, 62, 67, 71
experientiality 50, 54, 56–7, 60, 64, 67, 72n, 108
externalization 120, 203, 207–9, 211–12
Facebook 5, 10, 18n, 62, 67, 78, 85, 103, 107, 111, 112n, 138, 180, 242
factuality 151, 154–7, 159
fair game see game
Fama, Eugene 162, 174n
feedback loop see loop
Fisher, Mark 125–6
Foucault, Michel 35, 137, 245, 246–7
fourfold 234, 237–8, 243
game 161, 173; classic 162–3; computer 41; evolutionary 168–70; fair 162, 165, 170, 174n; imperfect 166; theory 162, 174n
general ecology see ecology
Genette, Gérard 64, 66, 69–70, 73n
Georgakopoulou, Alexandra 6, 54, 55, 60, 62, 72n
Gibson, William 100, 101, 107–9
Gillespie, Tarleton 61, 77, 82
God 131, 195–6, 240
Google 9, 10, 16, 18n, 81, 103, 112n, 178–84, 203, 242
governance, governmentality 13, 35, 36, 141–2, 206; algorithmic 17, 80, 82, 131, 133–7, 139, 146; statistical 134
Grand Challenges 27–9, 43, 43n
Guattari, Félix 10, 35, 36, 72n, 112n, 241, 242, 244
Günther, Gotthard 119–21
Habermas, Jürgen 155, 157
Haraway, Donna 28, 32, 245
Hayles, N. Katherine 3, 8–9, 10, 14, 15, 27–45, 51, 60, 61, 64–5, 68, 71, 72n, 73n, 77, 79, 82–3, 94n, 98–100, 101–4, 107, 110, 112, 112n, 113n, 136, 208, 241–5
Hegel, Georg Wilhelm Friedrich 120, 132–3, 153–4, 156
Heidegger, Martin 8, 9, 16, 123, 131–2, 136, 138, 144, 189–91, 195–7, 200n, 207, 217–8, 222, 224, 234–5, 237–8, 240–7
higher education admission algorithm see algorithm
Husserl, Edmund 216, 218, 221, 224, 230
Hörl, Erich 3, 9, 35–7, 40, 60
human: activity 3, 52, 61, 151, 153, 203, 204–5, 211, 244, 246; awareness 3, 39, 60; choices 4, 61, 100, 148n; human–machine, human–technology relation 6, 16, 35–6, 37, 40, 64, 67, 68, 71, 72, 78, 100, 121, 206, 233–48; mind 49, 139, 157; perception 60, 63, 67, 120, 224; rights 3–5, 178; species 28–9, 42, 197–8; users 6, 14, 43, 60, 77, 83, 93, 205, 211; see also nonhuman, transhuman
humanities 11, 27–8; experimental 71
hypermaterialism see materialism
Ihde, Don 153, 238, 244, 245, 246, 248
incalculable 122, 123, 140, 196, 221, 230
inequality 16, 161, 173, 183, 248
information 12, 30–1, 110–11, 124, 143, 146, 215, 226; and cognition 37–9, 64, 72n, 73n, 82, 99, 100, 104, 107, 208; and data protection 144–45; and decision theories 163–5, 171–3, 174n, 175n; and game theory 162, 165–66; and interfaces 78, 80–1, 85, 95n, 99, 107; and search engines 178–80, 182; overflow 49, 84; society 5; technology 190, 202, 205, 206, 211
information capitalism see capitalism
infraculture 10, 11
Instagram 15, 78, 79, 82, 84–5
interface 3, 10, 14, 29, 33, 38, 60, 95n, 99, 210; body/technology 100, 107, 110; cultural 7, 15, 65, 77–94; user 68–70
justice 3, 6, 10, 15, 99, 125, 152, 154–6, 159, 170, 181; administrative and law 4; environmental 36; social 5; re-evaluation of 7, 9, 135, 139–41, 136, 228–30; see also delegation
Kangaskoski, Matti 7, 10, 15, 60, 65, 77–97
Kant, Immanuel 8, 118, 122, 128n, 210, 216
Kaur, Rupi 84–5
Kurzweil, Ray 2, 134, 236
Latour, Bruno 4, 28, 61, 240, 243, 244, 245
lemniscate 238–9, 241, 245–7
Leroi-Gourhan, André 207, 215–16, 218, 220, 235–7, 239, 241–2, 244–5, 248
Levinas, Emmanuel 8, 192–201
Lewis, David 163, 167
Lindberg, Susanna 1–21, 131–50
literature 60, 77, 82–4; automatic literature 93–4; ergodic 66; speculative 99
Longo, Anna 8, 16, 161–177
loop 223–4, 227, 229–30; feedback 7, 15, 64–5, 85, 94, 107; recursive 217, 219–21, 224–5, 227, 229–30; sensorimotor 208, 210, 217
Lotka, Alfred 215, 218
machine 16–17, 67, 138, 196–200, 205–7, 221–4, 230, 244; autonomous 40, 151, 190–1; and communication 37, 40, 64–6, 100–1; digital 6, 60–1; and justice 140–4, 227–9; mega-machine 127, 190, 236; moral 1–2, 9, 14–16, 117–28, 135, 138, 147, 152, 159, 193–4, 203, 209, 211–12, 214–15; in Moral Machine experiment 2, 121, 128n, 147n; story-making 60, 71; and thinking 7–8, 203–5, 243; trans-classic 120; see also human
machine learning 3, 5–6, 13, 17, 134, 138, 141, 145–6, 220–3; see also artificial intelligence
market 5, 9, 113n, 173n–5n, 134, 161–73, 184; agent see agent; competition 161–2, 166–8, 171, 173, 183; equilibrium 161–3, 165–6, 168–70
materialism 179; hypermaterialism 216; traumatic 99–100, 103, 107–11
Mäkelä, Maria 11, 14, 49–59, 62, 64, 70–1
Maynard Smith, John 167
meaning-producing agency see agency
media 6–7, 14, 29–31, 34–9, 65, 68, 81, 95n, 133; as species 14; social 11–12, 49–53, 55–7, 62–4, 68, 72, 78, 89–90, 123, 142; see also digital media
mediation 153, 233–5, 237–40, 244–5; of communication 30, 34; of norms 154; as technological 30, 156–9, 238–9, 224–5, 247
memory 204, 207–9, 218, 224; epiphylogenetic 137
Merleau-Ponty, Maurice 233–5, 242, 244
microtemporality see temporality
Milkman 87–8
Mitcham, Carl 140–1
MIT Moral Machine experiment 2, 22–3, 41, 43, 117, 121, 128n, 147n
moral 1–2, 8–9, 11, 28–9, 57, 121, 131, 191–4, 203; morality 154, 171, 191; action 156–7; decision 14, 16, 117–18, 128; judgments 191; machine see machine; positioning 15, 50–1, 54–6
more-than-human see nonhuman
Morgenstern, Oskar 162
motivational uncertainty see uncertainty
Msila, Vuyisile Theophilus 183
multi-agent simulation 168–71; see also agent
multistable 138, 145
narrative: agency 15, 50, 51, 53, 57, 62–3; authority 57, 61; didacticism 49–57; as universal see universal
narrativity 53–5, 60, 64
Nayar, Pramod K. 99, 102–3, 107–11
Nelson, Richard 166–7
Neyrat, Frédéric 9–10, 15, 117–28, 139
Ngcoya, Mvuselelo 179, 181–4
noesis, noetic 14, 207, 211; life 207–8, 211; being 215; functions 210; soul see soul
nonconscious 14, 77, 85–6, 90–4, 101, 135–137; agent 77, 82–3, 93; cognition 10, 37–8, 243
nonhuman 6–8, 14, 28, 30–1, 35, 37, 39, 41–3, 99, 233, 243–4; more-than-human 15, 62; agency 52–3, 61–5, 68, 70–2, 85; non-inhuman being 16, 126
normativity 54–7, 151–5, 162, 171
object see technical object
Ogien, Ruwen 191–2
O’Neil, Cathy 12, 135–6, 140–1
Older, Malka 15, 99, 109–14
Other, the 196–200
Papacharissi, Zizi 66–7
paratext 54, 64, 68–71, 73n
Parcoursup 140–1, 147n
peer-to-peer 16, 157
Pencolé, Marc-Antoine 8, 15, 151–60
personal data see data
Peters, John Durham 29, 30–1
pharmakon 145
phenomenology 35, 197, 221–2, 225–6, 233, 239, 243–5
Piippo, Laura 6, 10, 15, 51, 52–3, 60–76
platform 11, 14, 51, 53, 61, 67, 86, 180; digital 40, 63, 124, 161, 182, 203, 207, 210–11; and Moral Machine experiment 2, 121, 128n; and narrative 50, 57; and peer-to-peer networks 156–8; social media 12, 62–4, 66, 68–72, 73n, 82; and surveillance 103
Plato 123, 141–2; Platonic soul see soul
poetics 7, 65, 77–8, 82, 89, 94
posthuman 98–9, 102–3, 107, 111–14, 245–6
postphenomenology 239, 243, 245
prediction 80–1, 92–4, 104, 161–5, 171–3; predictive algorithm see algorithm
probability 80, 82–3, 109, 162–6, 172, 174n
procursivity 227, 230
proletarianization 228
push button 81, 85, 90–2
rationality 16, 162–3, 167–9; bounded rationality 161–2, 165, 168, 173n
Ramose, Mogobe 179
readability 77, 83–6, 88–9, 94
reality 13, 17, 117, 125–6, 139, 244, 246; analogic 117; digital 3–5, 8–11, 99
recruitment algorithm see algorithm
recursive, recursivity 3, 138–9, 227, 229; cycling 61, 71, 243; loop see loop
response-ability 28–9, 37
robot 1–3, 17n, 99, 117, 123, 193–4, 198, 203–6, 211, 215; social 190, 194, 198; see also android
Roine, Hanna-Riikka 1–21, 51, 52–3, 60–76, 98
Rorty, Richard 29
Ross, Daniel 8, 14, 214–32
Rouvroy, Antoinette 80, 82, 94, 128n, 133–4, 136, 138, 142
satire 113n
Savage, Leonard 163, 164–5, 171
schizoid android see android
schizoid nondroid 15, 98–112
Schumpeter, Joseph 166, 174n
science fiction 1–2, 82, 88, 100, 113n
search engine 9, 12, 16, 81, 103, 133, 156, 178–84; see also Google
Sebbah, François-David 8, 16, 189–201
selection 7, 204, 217–19, 222, 225–6; algorithms 136, 138, 144–5; binary 41; in evolutionary economics 167, 169–70; logic of 15, 77–8, 80–3, 86, 92, 94; natural 32, 237
self-driving car, driverless car 1, 11, 22–3, 40, 193, 196, 199; in Moral Machine experiment 2–3, 43, 121, 128n, 147n; and moral decisions 15, 117–8, 122, 191, 227–8
sensibility 77, 82, 85, 94, 193
sensitive/animal soul see soul
sensorimotor loop see loop
Shelley, Percy Bysshe 128, 248
Simondon, Gilbert 16, 152, 203–7, 210–11, 217
singularity see technological singularity
Skyrms, Brian 170
social algorithm see algorithm
social media see media
social robot see robot
sorting algorithm see algorithm
soul 141–2, 204; noetic 16, 214, 216–20, 223–4, 226, 230–1; sensitive/animal 215, 218–19, 220, 222, 229; vegetative 216
species 27–34, 41–3, 167, 198, 216–17, 220, 222–3; species-in-biosymbiosis 34, 43; species-in-common 27–29, 34, 41, 43; species-in-cybersymbiosis 27, 34, 43
specters see analog
statistical governance see governance
Stiegler, Bernard 4, 8, 10, 14, 16, 65, 94, 126, 136–7, 144, 145, 203, 207–11, 215–27, 229–31, 240, 242, 243–4
Stiglitz, Joseph 161, 165, 171, 174n–5n
storytelling 11, 14, 49–57, 60–6, 68; agencies of 60–3, 65, 72; boom 14, 49–51; viral 14, 51–5
subject 61–2, 70, 94, 122, 155, 157, 159, 191–3, 197–8, 211; autonomous 1, 3, 17, 152, 190; neoliberal 51; subjective degree of belief 163–4, 174n; subjectivity 35–7, 107, 151–2; and subject–object divide 120, 152–4, 206
sublime 246–7
Sugden, Robert 169, 170–1
Suoranta, Esko 15, 98–114
surveillance capitalism see capitalism
sympoiesis 32
tacit negotiation 82, 85
technics 1, 8, 16, 137, 207, 240
technical culture 206, 211–12
technical object 1, 136, 138, 203–6, 211, 244
technological singularity 2, 16, 124, 236, 246
technosphere 41–3
Techno-Activism see activism
temporality 33, 35, 72n, 138, 143–4; microtemporality 35, 37–8, 103
Terra nullius 180–1
trading systems see automated trading systems
trans-classic machine see machine
transhuman 100–1, 113n, 127, 229, 236–7, 244, 246–7
transindividuation 220, 229
traumatic materialism see materialism
Twitter 56, 62, 66, 68–70, 73n, 78, 109
Ubuntu: as philosophy 179–80, 182–3; as software 179–81, 183
Uexküll, Jakob von 16, 208, 217–8, 219, 223
uncertainty 16, 71, 161, 172–3; motivational 155
universal 49–51, 55, 140; in narrative 55–6; truth 14, 49, 51; versus particular 54–5, 155
values 10, 50, 65, 77, 82, 106, 183–4
vegetative soul see soul
Verbeek, Peter-Paul 1, 11, 61, 152–4, 238, 239, 244–6
Villani, Cédric 6, 135, 138, 139, 141, 147n
virality 53–55, 93, 95n; viral exemplum see exemplum; viral storytelling see storytelling
Von Neumann, John 162
vulnerability 196–7
weapon systems see autonomous, autonomy
Wikipedia 16, 156, 158–9, 180
Winnicott, Donald 218, 220
Winter, Sidney 166–7, 169
Wolf, Maryanne 79, 83–4, 89, 95n
zombies see digital zombies
Zuboff, Shoshana 4, 5, 68, 80, 82, 99, 102–4, 107, 112n, 113n, 134, 136, 138, 141, 183