Predicted Humans: Emerging Technologies and the Burden of Sensemaking (Media, Culture and Critique: Future Imperfect) [1 ed.] 1032656913, 9781032656915


English Pages 140 [147] Year 2024



PREDICTED HUMANS

Predicting our future as individuals is central to the role of much emerging technology, from hiring algorithms that predict our professional success (or failure) to biomarkers that predict how long (or short) our healthy (or unhealthy) life will be. Yet, much in Western culture, from scripture to mythology to philosophy, suggests that knowing one's future may not be in the subject's best interests and might even lead to disaster. If predicting our future as individuals can be harmful as well as beneficial, why are we so willing to engage in so much prediction, from cradle to grave? This book offers a philosophical answer, reflecting on seminal texts in Western culture to argue that predicting our future renders much of our existence the automated effect of various causes, which, in turn, helps to alleviate the existential burden of autonomously making sense of our lives in a more competitive, demanding, accelerated society. An exploration of our tendency in a technological era to engineer and so rid ourselves of that which has hitherto been our primary reason for being – making life plans for a successful future, while faced with epistemological and ethical uncertainties – Predicted Humans will appeal to scholars of philosophy and social theory with interests in questions of moral responsibility and meaning in an increasingly technological world.

Simona Chiodo is Professor of Philosophy at Politecnico di Milano. She was Visiting Professor at the University of Cambridge and at the University of Edinburgh, Visiting Scholar at the University of Pittsburgh, and spent research stays at Harvard University. She was also Academic Visitor at the Massachusetts Institute of Technology. She is a member of the Technology Foresight Centre of Politecnico di Milano. Her latest work focuses on the relationship between technological innovation and human autonomy.

Media, Culture and Critique: Future Imperfect
Series Editors: Ross Abbinnett (University of Birmingham) and Graeme Gilloch (Lancaster University)

A platform for new works devoted to exploring contemporary crises and drawing on and developing different critical traditions – such as neo-Marxism, feminism, postcolonialism, queer theory, poststructuralism, critical discourse analysis, and ecological and environmental approaches – Media, Culture and Critique: Future Imperfect presents theoretically informed and politically engaged studies that share a fundamental concern with interrogating the discrepancies between the potentialities and rhetorics of new technologies and their actual utilization in the service of hegemonic interests. Whether with respect to 21st-century capitalism and new political economies of culture, global mediascapes and digital economies, urban transformation and militarization, the proliferation of consumption and media sites and spaces, biotechnologies and the body, or the exploitation and destruction of Nature, the series welcomes theoretically driven explorations of the technological and mediatized future, and the cultural manifestations of neoliberalism.

Ross Abbinnett

Between Habit and Thought in New TV Serial Drama: Serial Connections
John Lynch

Predicted Humans: Emerging Technologies and the Burden of Sensemaking
Simona Chiodo

For more information about this series, please visit: www.routledge.com/Media-Culture-and-Critique-Future-Imperfect/book-series/FI

PREDICTED HUMANS
Emerging Technologies and the Burden of Sensemaking

Simona Chiodo

First published 2024 by Routledge
4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2024 Simona Chiodo

The right of Simona Chiodo to be identified as author of this work has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-032-65691-5 (hbk)
ISBN: 978-1-032-64311-3 (pbk)
ISBN: 978-1-032-65688-5 (ebk)

DOI: 10.4324/9781032656885

Typeset in Sabon by Newgen Publishing UK

CONTENTS

Introduction 1

1 To be predicted, or not to be predicted: That is the question 8
1.1 Insights from scripture and mythology 8
1.2 From insights from philosophy to "Have you ever asked yourself 'when will I die?' " 18

2 Death clocks and other emerging technologies to predict our future 34
2.1 The current obsession with technological prediction 34
2.2 Death clocks 42

3 Prediction and the automation of our future 65
3.1 The fall of autonomy and the rise of automation 65
3.2 Oedipus, Macbeth and the automation of our (predicted) future 71

4 Prediction and the unbearable burden of sensemaking 86
4.1 Oedipus, Macbeth and sensemaking as our most unbearable burden 86
4.2 The automation of sensemaking from philosophy to art 94

5 Should we have the right not to be predicted? 106
5.1 Prediction taken to the extreme from literal death to metaphorical death 106
5.2 The wisdom not to be predicted 116

6 Concluding remarks: predicted humans, individualism and their future 127
6.1 The present 127
6.2 The future 130

Index 136

INTRODUCTION

The objective of my work is to try to understand the profound reasons for our increasing obsession with the technological prediction of our future as single individuals, in that I think that the pervasiveness of technological prediction in all aspects of our individual lives may serve as an exceedingly meaningful symptom to try to understand the present time. As a philosopher, I use both historical-philosophical tools and theoretical tools, starting with thought experiments. But I hope to write something that may open a dialogue with other disciplines, from human and social sciences to science and technology.

Chapter 1 ("To be predicted, or not to be predicted: that is the question") uses both historical and theoretical tools to define the framework within which the philosophical meaning of the current obsession with technological prediction, which will be analysed in the following chapters, may be understood. The cradle of Western culture, from scripture to mythology to philosophy, continuously suggests that knowledge, specifically the prediction of humans' future, can be dangerous to humans themselves. Yet, most interestingly, the history of Western culture coincides with the increasing attempt to make knowledge the primary objective of human activities, from philosophy itself to science and technology. More precisely, the more technology develops, the more its primary objective is to know our future, specifically to predict our future as single individuals, from our bodies' performance to our minds' performance. The opposition between the striving not to know and the striving to know, to the point that we automatically predict our future as single individuals, is an exceedingly meaningful symptom to try to understand the present time. The key argument developed in Chapter 1 is that even the cradle of Western culture underpins the strict correlations between human identity, technology and prediction – humans are thought of as essentially technological and technology is thought of as essentially predictive. Also, the following key argument is introduced (which will be developed in the following chapters): technology, as our most powerful millennial ally, seems to be increasingly designed and used to relieve us from the burden of our autonomous sensemaking, which seems to be more than ever unbearable in the present time, as characterised by both the challenges of complexity and uncertainty and neoliberalism's extreme consequences.

Chapter 2 ("Death clocks and other emerging technologies to predict our future") reviews emerging technologies that are increasingly designed and used to predict our future as single individuals. The present time is characterised by the general attempt to predict all aspects of our individual lives, from birth and health to disease and death through the destiny of our families, work and passions, in addition to our salary, insurance, mortgage and even shopping, from what food to eat at noon to what book to read at midnight. Also, the present time is characterised by something more extreme: the particular attempt to predict our death, specifically its cause and date, as accurately as possible. The attempt to predict our death is pursued not only by professional researchers but also by several websites that even popularise it. From a philosophical perspective, the most urgent question to answer is the following: if knowledge, specifically the prediction of humans' future, can be dangerous to humans themselves, why are we willing to design and use, for instance, more and more accurate "[a]ging clocks [that] aim to predict how long you'll live" (Hamzelou, 2022: 16) and "highly robust predictor[s] of both morbidity and mortality outcomes" (Levine et al., 2018: 585)?

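Since the argument turns on what such clocks actually compute, a concrete illustration may help. The sketch below shows the generic statistical form an aging clock takes: a regression model fitted to map biomarkers to a predicted age. It is only a minimal illustration, not the model of Levine et al. (2018) or any published clock (real clocks are fitted to hundreds of molecular markers with more sophisticated methods); the data and the two "biomarkers" are synthetic, invented for the example.

```python
import numpy as np

# Minimal sketch of the generic form of an "aging clock": a regression
# model mapping biomarkers to a predicted age. The data are synthetic and
# the two "biomarkers" are invented for illustration; this is not any
# published clock.
rng = np.random.default_rng(0)

n = 500
age = rng.uniform(20, 90, n)                 # chronological ages
marker_a = 0.8 * age + rng.normal(0, 5, n)   # fake biomarker drifting up with age
marker_b = -0.3 * age + rng.normal(0, 8, n)  # fake biomarker drifting down with age

# Fit ordinary least squares: predicted_age = X @ coef.
X = np.column_stack([marker_a, marker_b, np.ones(n)])  # intercept column
coef, *_ = np.linalg.lstsq(X, age, rcond=None)

# "Predict" a hypothetical individual's biological age from their readings.
new_person = np.array([52.0, -18.0, 1.0])
print(f"Predicted biological age: {new_person @ coef:.1f}")
```

What the sketch makes visible is that the clock's output is a statistical fit over a population of measurements: whatever it returns for one individual is an inference from people who resembled them, not a fact about their own future.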
More precisely, the key argument developed in Chapter 2 arises from the following question: why do we want to move from knowing through prediction what should be statistically probable to knowing through prediction what should be individually certain? Indeed, the more we are individually predicted, the less room for manoeuvre our freedom is given – which means the less room for manoeuvre our autonomy is given when it comes to applying our reflection, imagination, planning, decision-making, adaptability and sensemaking to our future. The key argument I propose to try to understand the paradox described above is the following: what we seem to desperately try to do, sometimes consciously and sometimes unconsciously, is to design and use technology as our most powerful millennial ally to relieve ourselves from burdens that, especially in the present time, are more than ever unbearable, starting with the burden of our autonomously making sense of both our life and, especially, our future identity. I especially work on the following idea: the reason why we (try to) control aging and death by predicting them is that prediction makes us capable of automating them – and automating something, from aging and death to any kind of burden, makes us capable of getting rid of it. Indeed, we actually trade our autonomy
for prediction's technological automation, which is our primary way to get rid of the unbearable burden of what paralyses us, from aging and death to any kind of burden, starting with our autonomy in general and autonomous sensemaking in particular: why should we not use our most powerful millennial ally, i.e. technology, to automate our future, if the exercise of our autonomy when it comes to making sense of our life is more than ever unbearable, especially under uncertainty?

Chapter 3 ("Prediction and the automation of our future") focuses on a phenomenon that may be read as an increasing move from thinking of human identity as autonomous to thinking of human identity as technologically automated, i.e. engineered. Interestingly enough, humans increasingly refer to themselves by replacing words resulting from self-perception and self-mastery (for instance, when they say that they feel good) with words resulting from engineering (for instance, when they say that their performance is optimal). More precisely, the key argument is that humans try to get rid of the burden of successfully making their future the most meaningful realisation of their autonomy by designing and using predictive technologies that make their future nothing more than the automated, i.e. engineered, effect of heteronomous causes. The move from human autonomy to technological automation, i.e. engineering, through predictive technologies is also argued for by using two thought experiments based on the two greatest literary examples of the prediction of one's future: Sophocles' Oedipus (in the case of negative prediction) and Shakespeare's Macbeth (in the case of positive prediction).

More precisely, the key argument developed in Chapter 3 is that, sometimes consciously and sometimes unconsciously, we do not want to bear the burdens of our autonomy in that, especially in societies in which failure is almost unspeakable, they are more than ever unbearable. Thus, technology becomes our best ally, to the point that it even becomes our ready-to-use scapegoat when it comes to escaping from our greatest individual failures: whenever we cannot make sense of our identity by living up to the kind of performance that is demanded of us when it comes to our expectations and others' expectations about the fulfilment of our responsibilities and duties, prediction can unburden us from the unbearable burden of making sense of our identity by driving and, thus, reactivating our decisions and actions – whenever we cannot make sense of our identity, prediction can unburden us from its unbearable burden by driving and, thus, reactivating our decisions and actions through their automation. Indeed, the more automated we are, the less desperately stuck we are when it comes to living up to the kind of identity that is demanded of us, no matter if we trade our autonomy for technological automation. Thus, we have a strong reason to be willing to be predicted, to the point that we endanger our prerogative of acting against all odds: not knowing what to do may lead us to fear, and even mental unbalance, to the point that certainty underpinning extreme action may be more attractive
than uncertainty underpinning no action. Yet, uncertainty is precisely what gives us possibilities – uncertainty is precisely what gives us an open future, which means that the price we pay is exceedingly high: the more our future loses its openness, the less reason we have to (pleasantly) make sense of both our present life and, especially, our future life, starting with working on the kind of human being we ideally want to become as the future evolution of our present identity.

Chapter 4 ("Prediction and the unbearable burden of sensemaking") further develops the two thought experiments based on Sophocles' Oedipus and Shakespeare's Macbeth to reflect upon the reason why we are willing to trade the openness of our future for its prediction, i.e. automation, by designing and using increasingly powerful predictive technologies. The key argument is that we are willing to trade the former for the latter to get rid of one of the most unbearable burdens that characterise the present time: making sense of our present life under uncertainty by autonomously working on our future, whose success or failure will dramatically prove our success or failure as single individuals (especially in neoliberal societies, which require us not only to be up and running 24/7 but also to perform to the point that we should not age and die, in that aging and death are almost unspeakable failures). Automation increasingly characterises other typical ways to make sense of our life, from philosophy itself to art. In the case of philosophy, artificial intelligence has recently been used to generate philosophical answers to philosophical questions. In the case of art, artificial intelligence has recently been used to generate a portrait that was sold at a Christie's auction.

More precisely, the key argument developed in Chapter 4 is that, even though we are not necessarily expected to become saviours and kings (as in the cases of Oedipus and Macbeth), we increasingly happen to be analogously put under pressure and expected to be exceptional. Thus, as symbols of Western culture, Oedipus and Macbeth show something that the present time takes to the extreme, especially in neoliberal societies: total inaction is even worse than tragic action – thus we, together with Oedipus and Macbeth, are willing to trade our autonomous sensemaking for decisions and actions that are automatically driven and, thus, reactivated by predictions whenever our autonomous sensemaking is extremely challenging, to the point that it gets stuck and we risk falling into total inaction, which may be the worst unspeakable failure in neoliberal societies. Indeed, if we move from Oedipus and Macbeth to ourselves, we may think of overscheduling, overdoing and overperforming as the extremely high expectations that make our autonomous sensemaking get stuck and put us at risk of falling into total inaction. And Oedipus and Macbeth tragically, and even movingly, show that automated decisions and actions may be far better than total inaction, in that they allow them to live up to the primary expectation of not falling into total
inaction. Again, if it is true that, today, making sense of our present identity and life means living up to overscheduling, overdoing and overperforming under uncertainty by autonomously working on our future identity and life, whose success or failure will dramatically prove our success or failure as single individuals, it is also true that we may be paralysed by fear to the point that, sometimes consciously and sometimes unconsciously, we trade sensemaking for technological prediction. But, again, the price we pay is exceedingly high: if we cannot say, at the end of the day, what the profound reasons for our decisions and actions are, we do not practice something essential for human life – we do not practice sensemaking, which is the opposite of alienation. Yet, even the prediction of our death can be more reassuring than the autonomous shaping of our future identity: if it is predicted to happen in the far future, we can say that there is time at our disposal to postpone and neglect our toughest challenges and, if it is predicted to happen in the near future, we can say that there is no time at our disposal to face our toughest challenges. Indeed, if it is true that the present time is characterised by unprecedented complexity and uncertainty, from global health emergencies to global climate emergencies to global geopolitical crises to global economic crises to global crises of democracies and ideals, it is also true that sensemaking as the autonomous shaping of our identity through a continuous exchange between our autonomous acting and our context of life may even be our most unbearable burden.

Chapter 5 ("Should we have the right not to be predicted?") uses the results of the two thought experiments developed in Chapters 3 and 4 to work on a possible distinction between cases in which it may be wiser to be predicted as single individuals (and why) and cases in which it may be wiser not to be predicted as single individuals (and why). More precisely, a kind of right not to be predicted may emerge as an unprecedented right to address in the present time, in addition to other kinds of rights that emerging technologies raise as critical issues, starting with the right to be forgotten. From a philosophical perspective, wisdom is introduced as a promising tool to address the possible identification of cases in which we should have the right not to be predicted. The key argument developed in Chapter 5 arises from the following paradox: on the one hand, we think of prediction as our last resort to move from uncertainty to certainty and, consequently, from inaction to action and, on the other hand, prediction means that we severely undermine our future's almost endless possibilities, in that we automate it. I propose two ideal criteria that can serve as guides, together with the rediscovery of wisdom, if we want to likewise rediscover our autonomy, specifically our autonomous sensemaking. The first ideal criterion is that we should learn to think of uncertainty not only as the most complex challenge of the present time but also, and especially, as one of the best opportunities of the present
time – uncertainty may even be a value. The second ideal criterion is that we should learn to think of certainty as follows. First, if the prediction happens to be true, we should learn to think of certainty as what may undermine our life – certainty may even be a disvalue. Second, and more importantly, no matter if the prediction happens to be true or false, we should learn to think of certainty as ideally uncertain – both individually and socially, we should learn to think of any output of even the most sophisticated predictive technologies as nothing more than highly probable, especially if there is a significant gap between present circumstances and future, i.e. predicted, circumstances. From a strictly philosophical perspective, we should ensure that, from now on, first, we can still imagine who we ideally want to become and, second, we can still have the strength to work on it day by day.

Finally, Chapter 6 ("Concluding remarks: predicted humans, individualism and their future") proposes concluding remarks by reflecting upon the relationship between the current obsession with technological prediction and the more general context of today's Western culture. Specifically, the attempt is to answer the following question: if the current obsession with technological prediction may be read as the automation of our future to get rid of the unbearable burden of sensemaking under uncertainty, what may be grasped of the trajectory of Western culture? The proposal is to read the particular phenomenon described above as a symptom of something more general, which is a kind of critical culmination of the individualistic paradigm that has variously characterised the history of Western culture and, today, seems to be taken to the extreme by the extremely high expectations of overscheduling, overdoing and overperforming in constant competition with others.

Specifically, Chapter 6 proposes the following four ideas on which to work. First, the idea according to which living a good (and pleasant) life means being autonomous, and not automated, starting with creatively making sense of our identity and life – starting with making sense of our future. Second, the idea according to which living a good (and pleasant) life means sharing more than competing, which means a kind of re-hierarchisation of values. Third, the idea according to which living a good (and pleasant) life means nurturing human relationships more than isolating ourselves. Fourth, the idea according to which living a good (and pleasant) life means distributing resources more than making money by monopolising them. The four ideas proposed result from the prioritisation of the notion of evolution over the notion of revolution. Interestingly enough, evolution also results from the prerogative of acting against all odds – and the prerogative of acting against all odds is by definition opposed to the cause-and-effect relationship between technological prediction as the cause and technological automation as the effect, starting with the case of predicted and automated humans, who we may (autonomously) be unwilling to become.

References

Hamzelou, J., "Aging clocks aim to predict how long you'll live", MIT Technology Review 125/4: 14–15, 2022.
Levine, M. E., et al., "An epigenetic biomarker of aging for lifespan and healthspan", Aging 10/4: 573–591, 2018.

1 TO BE PREDICTED, OR NOT TO BE PREDICTED
That is the question

1.1  Insights from scripture and mythology

If it is true that the pervasiveness of technology in our present life makes the former a promising way to try to understand the latter, it is also true that the exponential growth of predictive technologies should make us ask the following question: why do we exponentially design and use technologies whose primary objective is to know our future, specifically to predict our future as single individuals, from our bodies' performance to our minds' performance, and even to our deaths' causes and dates? More precisely, what does the current obsession with technological prediction reveal about our present life? Human and social scientists have recently started asking analogous questions (for a legal perspective, see at least Harcourt, 2007; for a techno-scientific perspective, see at least Siegel, 2013; Tulchinsky, 2022; Tulchinsky and Mason, 2023; for an economic perspective, see at least Agrawal, Gans and Goldfarb, 2018 and 2022; for an interdisciplinary perspective that is both legal and ethical, see at least Frischmann and Selinger, 2018). Yet, it seems that philosophers still have to work on the more strictly philosophical issues raised by the questions asked above. Indeed, questioning the current obsession with technological prediction means questioning not only social dimensions but also existential dimensions, starting with the following question: why are we willing to be predicted to the point that we endanger our prerogative of acting against all odds? More precisely, from a philosophical perspective, why are we willing to trade our autonomy for technological automation?1

If it is true that artificial intelligence, as the most pervasive technology we may currently think of, "is a prediction machine, and that is all it is"
(Agrawal, Gans and Goldfarb, 2022: 31), in that "the recent innovations in AI [artificial intelligence] were all advances in making prediction better, faster, and cheaper" (Agrawal, Gans and Goldfarb, 2022: 35), it is also true that philosophy should urgently reflect upon the questions asked above and propose possible answers to try to better understand the present time, specifically the existential meaning of our present life. Indeed, predictive technologies are pervasive not only as something that experts design for non-experts who use them daily, sometimes consciously and sometimes unconsciously, but also as something that non-experts are pushed to use to take predictions to the extreme (as in the case of "[a]ging clocks [that] aim to predict how long you'll live", Hamzelou, 2022: 16),2 even without the mediation of experts (as in the case of "[r]esearchers [who] design a user-friendly interface that helps non-experts make forecasts […] [even by] estimat[ing] a patient's risk of developing a disease […] so a non-expert can easily generate a prediction in only a few seconds", Zewe, 2022: no page number,3 and as in the case of several websites that popularise, for instance, the following kind of question: "Have you ever asked yourself 'when will I die?' ", to which the answer is that "our advanced life expectancy calculator will accurately predict your death date for you").4

From a perspective that is both historical and philosophical, and if we focus on the history of Western philosophy in particular,5 the following paradox emerges. On the one hand, it seems that the more technology develops, the more its primary objective is to know our future, specifically to predict our future as single individuals, from our bodies' performance to our minds' performance, and even to our deaths' causes and dates (as we will see in Chapter 2). On the other hand, both the cradle of Western culture and its historical developments, from scripture to mythology to philosophy, continuously suggest that knowledge, specifically the prediction of humans' future, can be dangerous to humans themselves (as we will see in the following pages). Thus, the paradox described above is exceedingly meaningful, in that it seems that the present time is driven by something that can even overturn one of the most essential cornerstones of a millenary culture. From a philosophical perspective, the key issue is precisely to try to identify what kind of phenomenon can even overturn the striving not to know, specifically not to be predicted, as one of the most distinctive characteristics of Western culture for millennia. And, again, my key argument is that the identification of the kind of paradoxical phenomenon described above may be most instructive to try to better understand the present time, specifically the existential meaning of our present life.

First, let us analyse the historical stages of the idea according to which, in Western culture, knowledge, specifically the prediction of humans' future, can be dangerous to humans themselves.
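Since the chapter's question turns on what such calculators deliver, it is worth making their computation concrete. The following is a minimal sketch of what a "when will I die?" calculator does at its core: it looks up (and interpolates) a remaining-life-expectancy table and adds the result to the user's age. The table is a toy invented for illustration, not real actuarial data; real calculators use national period life tables plus adjustments for lifestyle questionnaire answers.

```python
# Toy sketch of a "death clock" style life-expectancy calculator. The
# table below is invented for illustration; real calculators use national
# period life tables (age -> expected remaining years of life).
REMAINING_YEARS = {0: 80, 20: 61, 40: 42, 60: 24, 80: 10}

def predicted_death_age(current_age: float) -> float:
    """Interpolate remaining life expectancy and add it to the current age."""
    ages = sorted(REMAINING_YEARS)
    if current_age <= ages[0]:
        return current_age + REMAINING_YEARS[ages[0]]
    if current_age >= ages[-1]:
        return current_age + REMAINING_YEARS[ages[-1]]
    for lo, hi in zip(ages, ages[1:]):  # find the bracketing table rows
        if lo <= current_age <= hi:
            t = (current_age - lo) / (hi - lo)
            remaining = (1 - t) * REMAINING_YEARS[lo] + t * REMAINING_YEARS[hi]
            return current_age + remaining

# The "death date" offered to a 35-year-old is a cohort average attached
# to an individual, not a personal fact.
print(f"Predicted age at death: {predicted_death_age(35):.1f}")
```

The sketch makes visible exactly the gap flagged in the Introduction: the calculator's confident, individual-sounding answer is a statistic about a cohort of people who are, on paper, similar to the user, which is the distinction between what is statistically probable and what is individually certain.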

The first source is the scripture, specifically Genesis. The particular reference to prediction is implicit, but the general reference to knowledge is explicit, and even founding: "And the Lord God commanded the man, 'You are free to eat from any tree in the garden; but you must not eat from the tree of the knowledge of good and evil, for when you eat from it you will certainly die' " (Gen. 2, 16–17). From a philosophical perspective, at least two issues are worth considering. The first issue is the meaning of what is forbidden, i.e. "the knowledge of good and evil", which, from an epistemological perspective, means that humans are forbidden to obtain a kind of omniscience.6 Indeed, speaking of "good and evil" means speaking of the two extreme limits of knowledge. Thus, obtaining a kind of knowledge that goes from the extreme limit of "good" to the extreme limit of "evil" means obtaining a kind of knowledge that equals a kind of omniscience. The second issue is the meaning of what the violation of what is forbidden causes, i.e. "you will certainly die", which, from an ontological perspective, means that, if humans violate what is forbidden and obtain a kind of omniscience, they stop having their ontological status, i.e. being human. Thus, the scripture introduces something we may define as a kind of dualism, which is both epistemological, when it comes to severely dividing what humans should know from what humans should not know, and ontological, when it comes to severely dividing what humans should be from what humans should not be.7 Epistemologically speaking, we, as humans, should not obtain a kind of omniscience. Ontologically speaking, we, as humans, should not stop being human. Why? More precisely, why does the scripture, as what plays a founding role in Western culture, primarily focus on limiting our knowledge?

We may answer the questions asked above by considering the relationship between knowledge, specifically unlimited knowledge as what is forbidden, and the sacred. According to the seminal works of Durkheim (1912), Otto (1917) and Eliade (1957), the most severe kind of dualism humans have developed in their history (not only in Western culture but also in other cultures) is the kind of dualism that characterises the relationship between the sacred and the profane (see also Freud, 1913, for a psychoanalytical perspective). Thus, whenever the sacred is attributed to knowledge, the characteristics of the former, starting with inviolability, are attributed to the latter. The result is that the idea according to which knowledge, specifically unlimited knowledge, should not be violated by humans is even taken to the extreme. Unlimited knowledge, i.e. "the tree of the knowledge of good and evil", is thought of as sacred not only in Genesis but also in Exodus, in which theophany, as unlimited knowledge taken to the extreme, occurs by severely dividing the sacred space of God from the profane space of Moses:

When the Lord saw that he [Moses] turned aside to see, God called to him out of the bush: 'Moses, Moses!'. And he said: 'Here I am'. Then he said:

'Do not come near. Take your sandals off your feet, for the place on which you are standing is holy ground'. […] And Moses hid his face, for he was afraid to look at God. […] [T]he Lord said to Moses: '[…] [O]n the third day the Lord will come down on Mount Sinai in the sight of all the people. And you shall set limits for the people all around, saying, take care not to go up into the mountain or touch the edge of it. Whoever touches the mountain shall be put to death. No hand shall touch him, but he shall be stoned or shot, whether beast or man, he shall not live. (Ex. 3, 4–6 and 19, 10–13)

Again, we find the two issues described above. First, the epistemological issue: humans should not obtain a kind of omniscience (which is represented as theophany: "Do not come near. […] [H]e was afraid to look at God. […] [Y]ou shall set limits for the people all around, saying, take care not to go up into the mountain or touch the edge of it"). Second, the ontological issue: humans should not stop being human (which is represented as death: "Whoever touches the mountain shall be put to death. No hand shall touch him, but he shall be stoned or shot, […] he shall not live"). If we ask, again, why the scripture primarily focuses on limiting our knowledge, Exodus seems to give us the following insight: if we want knowledge to be a powerful tool for us (starting with knowledge as the ten commandments God gives Moses immediately after), we should not obtain a kind of omniscience (starting with knowledge as theophany). And the reason for the almost paradoxical balance between what we should know and what we should not know is quite clear if we consider the meaning of the ten commandments God gives Moses immediately after: they underpin the covenant itself, i.e. the relationship itself, between humans and God.

From a philosophical perspective, speaking of the conditions under which the relationship itself between humans and God is possible is crucial. Let us imagine obtaining, conversely, a kind of omniscience. Would we need the ten commandments as a guide for the future? We should easily answer no, in that we would already have everything we need. And would we need the relationship itself between us and God as an epistemological and ontological otherness that complements our epistemological and ontological status of deprivation? We should easily answer no, in that we would already have everything we need. Thus, the crucial insight we may learn from the scripture as what plays a founding role in Western culture may be translated into philosophical words as follows. The key reason why one of the primary ideas of its first two books, i.e. Genesis and Exodus, is to limit our knowledge is the following:

1 First, unlimited knowledge would lead us to a kind of epistemological absoluteness, which means that our epistemological status could not evolve by changing, in that it would already be perfectly complete.
2 Second, epistemological absoluteness would lead us to a kind of ontological absoluteness, which means that our ontological status could not strive for relationships, in that it would already be perfectly complete.

Thus, from a philosophical perspective, the crucial insight we may learn is that humans are thought of as essentially evolving by changing and striving for relationships – the key reason why one of the primary ideas of the cradle of Western culture is to limit human knowledge is that it is precisely by limiting human knowledge that humans can be thought of as essentially evolving by changing and striving for relationships.

If we move from scripture to mythology as another source of the cradle of Western culture, the idea of humans as necessarily characterised by limited knowledge continues to emerge. More precisely, human knowledge is thought of as limited both as knowledge in general and as prediction in particular. Let us analyse cases in which what should be limited is human knowledge in general. The phenomenon of sparagmos (σπαραγμός) frequently emerges as the tragic destiny of humans who violate the limits of human knowledge. Their tragic destiny is reported both as "tearing, rending, mangling" (especially in Euripides, see Liddell and Scott, 1889) and as "convulsion, spasm" (especially in Sophocles, see Liddell and Scott, 1889). The case of Euripides' Bacchae is paradigmatic. Pentheus, as a representative of humans, asks Dionysus, as a representative of gods: "Do you perform the rites by night or by day?" (Eur. Bacch. 485–486). Dionysus answers: "Mostly by night; darkness conveys awe" (Eur. Bacch. 486). But Pentheus violates Dionysus' warning by spying on "the rites" and his tragic destiny is the following:

Agave said: 'Come, standing round in a circle, each seize a branch, Maenads, so that we may catch the beast [Pentheus] who has climbed aloft, and so that he does not make public the secret dances of the god'. […] His mother, as priestess, began the slaughter, and fell upon him. […] His body lies in different places, part under the rugged rocks, part in the deep foliage of the woods, not easy to be sought. (Eur. Bacch. 1105–1139)

Interestingly enough, both the epistemological issue and the ontological issue we have seen in Genesis and Exodus emerge. Epistemologically speaking, humans should not obtain a kind of omniscience (which is represented, again, as theophany, at least in the sense that "the rites" strictly correlate with the sacred, and even the divine). Ontologically speaking, humans should not stop being human (which is represented, again, as death). Mythology offers several analogous cases, starting with Actaeon, Orpheus and Semele. The first, i.e. Actaeon, sees the goddess Artemis naked and ends up being punished by being transformed into a deer and mauled to death by dogs (see, for instance,
Burkert, 1972). The second, i.e. Orpheus, enters the underworld of Hades to (unsuccessfully) save his wife Eurydice and ends up being mauled to death by the Maenads (see, for instance, Miles, 1999). The third, i.e. Semele, sees the god Zeus not in disguise, but as a god, and ends up being struck to death by lightning (see, for instance, Burkert, 1972). Again, the violation of the limits of human knowledge is represented as most dangerous and, consequently, forbidden. We may read the myths described above analogously to the way we have read Genesis and Exodus:

1 First, unlimited knowledge would lead us to a kind of epistemological absoluteness. If we could totally know "the rites", the goddess Artemis naked, the underworld of Hades and the god Zeus as a god, our thirst for knowledge would end – our epistemological qualities and capabilities would move from evolution to stasis.
2 Second, epistemological absoluteness would lead us to a kind of ontological absoluteness. If we could totally know "the rites", the goddess Artemis naked, the underworld of Hades and the god Zeus as a god, our thirst for relationships would end – our ontological qualities and capabilities would move from evolution to stasis.

Thus, from a philosophical perspective, we may refine the crucial insight we may learn as follows. If it is true that humans are thought of as essentially evolving by changing (which means constitutively thirsty for knowledge) and striving for relationships (which means constitutively thirsty for relationships), it is also true that the kind of thirst described above is exceedingly more important than its quenching when it comes to identifying what is essential and constitutive of human identity – having good reasons to continuously work on our knowledge (as never complete and satisfied) and our relationships (as never complete and satisfied) is exceedingly more important than having good reasons to stop working on them when it comes to identifying what is essential and constitutive of human identity. Thus, there is a sense in which incompleteness, both epistemologically speaking and ontologically speaking, is more important than completeness. Endlessly asking questions is more important than obtaining answers once and for all – indeed, endlessly asking questions means evolution and obtaining answers once and for all means stasis.

If we analyse cases in which what should be limited is human prediction in particular, mythology offers several instructive examples. The prophet Tiresias sees Athena, the goddess of knowledge itself, naked, and goes blind:

Only Tiresias, on whose cheek the down was just darkening, still ranged with his hounds the holy place. And, athirst beyond telling, he came unto
the flowing fountain, wretched man! And unwillingly saw that which is not lawful to be seen. And Athena was angered, yet said to him: 'What god, o son of Everes, led thee on this grievous way? Hence shalt thou never more take back thine eyes!'. (Call. H. 5, 75–81)

It is worth noting that something unprecedented happens in addition to the two issues described above. Epistemologically speaking, Tiresias "saw that which is not lawful to be seen", analogously to the other cases we have analysed. Ontologically speaking, Tiresias "never more take[s] back […] [his] eyes", which means that he stops being who he was, analogously to the other cases we have analysed (even though Tiresias, through the intercession of his mother, does not die). But what is unprecedented is that, interestingly enough, Athena's punishment develops in two phases, which balance Athena's wrath with Tiresias' mother's intercession. First, he goes blind. Second, he obtains the capability of prediction. Thus, most interestingly, knowledge as prediction is introduced as something ambiguous. On the one hand, prediction is positive, in that it balances a divine punishment. On the other hand, prediction is negative, in that it results, in any case, from punishment for violation – the idea of knowledge as prediction is introduced as what may be thought of as the worst violation of the limits of human knowledge. Indeed, if it is true that Tiresias does not die, it is also true that prophets' destiny is even more tragic than other violators' destiny of death, as the cases of other prophets show. For instance, the prophet Cassandra's destiny is even more tragic. Her predictions are destined to be unheard and, thus, she is destined to be hated not only by the god Apollo (who gives her both the capability of prediction and the punishment of being unheard) but also by humans (to whom she predicts misfortunes). Her tragic destiny is masterfully represented by Aeschylus' following words:

Alas, alas, the sorrow of my ill-starred doom! For it is my own affliction, crowning the cup, that I bewail. Ah, to what end did you bring me here, unhappy as I am? For nothing except to die – and not alone. (Aesch. Ag. 1136–1139)

The case of Oedipus, in which the idea of knowledge as prediction fares even worse, is also worth considering. Oedipus is both predicted (in most of his story) and predicting (when he predicts misfortunes to his sons). Analogously to Tiresias, Oedipus goes blind after having known something he should not have known. But Oedipus' blindness results, interestingly enough, from self-punishment. As Sophocles tells us in his masterpiece, the cause of Oedipus' tragedy is precisely his will to be predicted by the oracle of Delphi: "I went to Delphi […] and Phoebus […] set forth other things, full of sorrow and terror and woe: that I was fated to defile my mother's bed, that I would reveal to men a brood which they could not endure to behold, and that I would slay the father that sired me. When I heard this, I turned in flight from the land of Corinth, from then on thinking of it only by its position under the stars, to some spot where I should never see fulfilment of the infamies foretold in my evil fate. And on my way I came to the land in which you say that this prince perished" (Soph. OT 787–800), which means that the cause of the fulfilment of the prediction is precisely Oedipus' attempt not to make it come true after having been predicted – the cause of Oedipus' tragedy is precisely his having been predicted. It is no coincidence that his blindness results from self-punishment. Indeed, Oedipus clearly understands that the worst thing he did was to be predicted: "Light, may I now look on you for the last time" (Soph. OT 1183).

The case of Oedipus also introduces the issue of self-fulfilling predictions, which lead one's decisions and actions to fulfil predictions themselves, sometimes consciously and sometimes unconsciously (see at least Merton, 1948, for social sciences' contribution, which is primary. See also Popper, 1957; Buck, 1963; Romanos, 1973, for philosophy's contribution). As we will see in Chapters 3 and 4, the issue of self-fulfilling predictions means taking predictions' negativity to the extreme, in that human autonomy is not only overturned (in the case of Oedipus, his decisions and actions are totally influenced by the oracle) but also self-overturned (more precisely, in the case of Oedipus, his decisions and actions are totally influenced by his fear of being totally influenced by the oracle). It is also worth noting that the strict correlation between the capability of prediction and blindness continuously emerges (see at least Gartziou-Tatti, 2010, for literary studies' contribution, which is primary). Metaphorically speaking, there is a sense in which the more humans can see their future (as the content of their capability of prediction), the less they can see their present (as the content of their sight) – there is a sense in which the more humans can live their future, the less they can live their present (as we will see in the following chapters).

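The mechanism of a self-fulfilling prediction lends itself to a simple formalisation. In the toy simulation below (all probabilities and the fear parameter are invented for illustration, in the spirit of Merton's analysis rather than taken from it), merely delivering the prediction changes behaviour, and the changed behaviour raises the frequency of the predicted outcome, much as Oedipus' flight from Corinth is what brings the oracle's words about.

```python
import random

# Toy model of a self-fulfilling prediction (after Merton, 1948): being told
# the outcome changes behaviour, and the changed behaviour makes the outcome
# more likely. All parameters are invented for illustration.
BASE_RISK = 0.1    # probability of the feared outcome with no prediction
FEAR_EFFECT = 0.4  # extra risk added by acting on the prediction

def outcome_occurs(predicted: bool, rng: random.Random) -> bool:
    """Return True if the feared outcome occurs on one trial."""
    risk = BASE_RISK + (FEAR_EFFECT if predicted else 0.0)
    return rng.random() < risk

def frequency(predicted: bool, trials: int = 100_000) -> float:
    rng = random.Random(42)
    return sum(outcome_occurs(predicted, rng) for _ in range(trials)) / trials

print(f"Outcome frequency without prediction: {frequency(False):.3f}")
print(f"Outcome frequency with prediction:    {frequency(True):.3f}")
# The prediction does not observe the future; it helps to produce it.
```

The sketch adds no knowledge to the predicted agent; it only alters what the agent does, which is the sense in which, as noted above, Oedipus' decisions and actions are influenced by his fear of being influenced by the oracle.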
Since the focus of my work is the relationship between prediction and technology, let us analyse the case of Prometheus as one last paradigmatic case of what we may learn from the cradle of Western culture. Prometheus is not a human being, but a Titan. Yet, as a Titan, he mediates between humans and the Olympians, who are the gods ruled by Zeus and described above. Specifically, he tries to help the former against the latter. And, meaningfully, his best way to help humans is to give them technology's symbol itself, i.e. the divine fire (see at least Kahn, 1970; Dougherty, 2006).8 But what is most meaningful is that Prometheus, who gives humans technology, is strictly correlated with prediction, even from an etymological perspective. Indeed, the word "Prometheus" literally identifies who "knows" (μανθάνω, transliterated as manthano) "before" (πρό, transliterated as pro). Prometheus has the capability of prediction, from the general capability of foresight (as
when he gives humans not only the divine fire but also the divine wisdom in the arts to make them safer in the future, see Hes. Theog.; Plat. Prot.) to the particular capability of prediction (as when he predicts Zeus' future fall, see Aesch. PB). But the most meaningful details are the following. First, he also gives humans a kind of capability of prediction. Second, he makes humans capable of prediction not in a direct way, but in an indirect way, specifically through the mediation of tools, from natural objects to artefactual objects – what is most meaningful is that Prometheus makes humans capable of prediction through the use of techniques as forerunners of technologies. As Aeschylus' Prometheus says,

I marked out many ways by which they [humans] might read the future, and among dreams I first discerned which are destined to come true; and voices baffling interpretation I explained to them, and signs from chance meetings. The flight of crook-taloned birds I distinguished clearly – which by nature are auspicious, which sinister – their various modes of life, their mutual feuds and loves, and their consortings; and the smoothness of their entrails, and what colour the gall must have to please the gods, also the speckled symmetry of the liver-lobe; and the thigh-bones, wrapped in fat, and the long chine I burned and initiated mankind into an occult art. Also, I cleared their vision to discern signs from flames, which were obscure before this. (Aesch. PB 480–495)

Thus, we learn at least two key lessons from Prometheus, who is represented as strictly correlated with both the idea of technology and the idea of prediction not only by Hesiod, Aeschylus and Plato in ancient Western culture, as we have seen, but also by Shelley in her masterpiece Frankenstein; or, The Modern Prometheus in modern Western culture (see Shelley, 1818 and 1831. See also Morton, 2002; Finn, Guston and Robert, 2017; Cambra-Badii, Guardiola and Banos, 2020; Nagy et al., 2020). The first key lesson is that even the cradle of Western culture thinks of humans as essentially technological. Indeed, when Prometheus gives them the divine fire as technology's symbol itself, in addition to the divine wisdom in the arts, he literally gives them their ontological status9 – technology, as both the capability to design it and the capability to use it, is thought of as what essentially qualifies human identity. The second key lesson is that even the cradle of Western culture thinks of technology as essentially predictive. Indeed, Prometheus, who means prediction even from an etymological perspective, gives humans technology as what essentially qualifies human identity. Also, technology, starting with the use of the divine fire as its symbol, is the tool through which humans obtain the capability of prediction, from "the long chine […] burned" to the "signs from flames". Thus, even the cradle of Western culture underpins
the strict correlations between human identity, technology and prediction10 – humans are thought of as essentially technological and technology is thought of as essentially predictive. Again, the paradox described above emerges. On the one hand, there is a strict correlation between human identity, technology and prediction. On the other hand, prediction is dangerous to human identity itself (it is no coincidence that Zeus as the ruler of the living creatures counters humans' obtainment of technology, even though unsuccessfully).11 Thus, humans are thought of as essentially challenging the following paradox as one of the key issues of their identity itself: balancing two opposing forces – balancing human thirst for knowledge, specifically prediction obtained by designing and using technology, with the limits of human knowledge, which can be dangerous to human identity itself if they are violated.

If we move from ancient Western culture to modern Western culture, the echo of the words of Sophocles' masterpiece is strong. Sophocles' words are the following: "Alas, how terrible it is to have wisdom [i.e. knowledge, in the present case] when it does not benefit those who have it" (Soph. OT 316–317). And, for instance, Gray's words are the following: "Where ignorance is bliss, 'tis folly to be wise [i.e. to know, in the present case]" (Gray, 1747: 99–100). To limit our analysis to the cases on which we have focused and will focus, let us consider at least two last instructive, and paradigmatic, examples as steps from ancient Western culture to contemporary Western culture. First, as we will see in Chapters 3 and 4, we should not forget that Shakespeare's Macbeth continues to say that "Present fears are less than horrible imaginings. My thought, whose murder yet is but fantastical, shakes so my single state of man that function is smothered in surmise, and nothing is but what is not" (Shakespeare, Mac., Act 1, Sc. 3, 150–155), which means that one of humans' most difficult challenges is precisely deciding and acting after having been predicted. Second, we should not forget that Shelley's Frankenstein, who explicitly represents the modern Prometheus, as the modern predictor by definition, ends in tragedy after having taken the limits of human knowledge to the extreme. More precisely, he uses technology to challenge divine prerogatives, which are, first, the creation of living human matter, i.e. the monstrous creature, from nonliving matter and, second, the attribution of resistance to disease and death to his creation: "if I could banish disease from the human frame and render man invulnerable to any but violent death!" (Shelley, 1818 and 1831: 37), which means that one of humans' primary ways to fall into their most difficult challenges is precisely technology – we may say that technology is precisely the way in which humans take the challenge to their limits to the extreme, especially when it comes to trying to obtain divine prerogatives (which are even symbolised by the divine fire, as we have seen), from prediction as a kind of omniscience to the creation of humans resistant to disease and death (which, most interestingly, is not only Frankenstein's objective in modern Western culture but also, and especially, emerging technologies' objective in contemporary Western culture, as we will see in the following pages and chapters).

1.2  From insights from philosophy to "Have you ever asked yourself 'when will I die?' "12

The key insights we have learned from scripture and mythology are, first, that humans are thought of as essentially technological and, second, that technology is thought of as essentially predictive. Yet, we have also learned that both human knowledge in general and human prediction in particular should have precise limits not to severely damage humans themselves. More precisely, the key reason why human knowledge and prediction should have precise limits is that, conversely, humans would not endlessly exercise their epistemological and ontological prerogatives –​which means that humans would not endlessly evolve by changing and striving for relationships, which also means that humans would not endlessly think of their future as possibly better than their present and act accordingly (as we will see in the following chapters). The history of Western philosophy shows something analogous. To limit our analysis to the most instructive cases, let us consider, on the one hand, the contributions of philosophy itself and, on the other hand, the contributions of science and technology as deeply influenced by philosophy itself. Again, ancient philosophy plays a founding role. Even philosophical thinking itself, as it is founded by Socrates, starts with the recognition of the limits of what humans cannot know, which is exceedingly important, from what humans can know. Plato’s Socrates is described as the wisest human being, whose wisdom results from his recognition of his ignorance: “he [Chaerephon] went to Delphi and […] asked if there were anyone wiser than I [Socrates]. Now the Pythia replied that there was no one wiser” (Plat. Apol. 21 a). And the reason why Socrates is the wisest human being is the following: I [Socrates] am wiser than this man [considered as the wisest]; for neither of us really knows anything fine and good, but this man thinks he knows something when he does not, whereas I, as I do not know anything, do not think I do either. (Plat. Apol. 21 d) Thus, something we have seen emerges. Knowledge does not result from knowledge as the belief of having unlimited knowledge. Conversely, knowledge results from a specific kind of ignorance as the belief of having limited knowledge. Again, the limits of human knowledge are beneficial to humans, in that they are the condition for the most promising kind of knowledge
humans can obtain, which is not a matter of stasis, but a matter of evolution. The reason why Socrates is "wiser than this man [considered as the wisest]" is that the latter "thinks he knows something", which is the condition for a sense of epistemological satisfaction. And epistemological satisfaction means epistemological stasis: there is no more thirst for knowledge. Conversely, the former thinks he does "not know anything", which is the condition for a sense of epistemological dissatisfaction. And epistemological dissatisfaction means epistemological evolution: there is more thirst for knowledge.

If Plato's Socrates can be considered as the milestone of ancient epistemology, Descartes can be considered as the milestone of modern epistemology. The recognition of the limits of human knowledge continues to be the condition for the most promising kind of knowledge humans can obtain: "I observed that this truth, I think, therefore I am (cogito ergo sum), was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged" (Descartes, 1637: IV), which means that Descartes' certain knowledge results from uncertain knowledge. More precisely, his methodic doubt is his systematic way to pursue certain knowledge (which is "I think, therefore I am (cogito ergo sum)") after having doubted everything, from knowledge from tradition to empirical knowledge to mathematical knowledge: the first is dubitable because of conflicting traditions, the second because of perceptual errors and the third because of miscalculations.

Kant also plays a founding role (after having been deeply influenced by Hume's scepticism, see Hume, 1739–1740 and 1748). His philosophical masterpiece even starts with the recognition of the limits of human knowledge:

Human reason has the peculiar fate in one species of its cognitions that it is burdened with questions which it cannot dismiss, since they are given to it as problems by the nature of reason itself, but which it also cannot answer, since they transcend every capacity of human reason.
(Kant, 1781: A, VII)

It is worth noting that, according to Kant, recognising the limits of human knowledge also means recognising the following extraordinary opportunity. If it is true that certain knowledge can give us the truth, it is also true that uncertain knowledge can give us the extraordinary opportunity to endlessly think, in that our thought cannot obtain completeness and satisfaction (which is an idea we have seen when we analysed scripture's and mythology's insights). Kant subtly develops the idea according to which the limits of human knowledge can also mean opportunities. On the one hand, he defines through the notion of determining judgement our capability to obtain certain knowledge when it comes to trying to (successfully) know what there is inside the limits of human knowledge (for instance, an object's height,
which is objective and, thus, about which we obtain an answer once and for all). On the other hand, he defines through the notion of reflective judgement our capability to endlessly think when it comes to trying to (unsuccessfully) know what there is outside the limits of human knowledge (for instance, an object's beauty, which is subjective and, thus, about which we endlessly ask questions). More precisely, endlessly asking questions means endlessly "spread[ing] itself [imagination] over a multitude of related representations, which let one think more than one can express in a concept determined by words" (Kant, 1790: 5, 315). Again, it seems that the condition for the epistemological exercise, i.e. "think[ing] more" by "spread[ing] itself [imagination]", is more important than the condition for epistemological completeness and satisfaction, i.e. "express[ing] in a concept determined by words" – again, it seems that humans are thought of as essentially imagining and thinking, even more than as determining and knowing. And we may start asking a question upon which we will reflect in the following chapters: if the present time is increasingly characterised by irreducible complexity and uncertainty, from global health emergencies to global climate emergencies to global geopolitical crises to global economic crises to global crises of democracies and ideals, what may possibly save us? Trying to determine and know what is irreducibly complex and uncertain? Or trying to imagine and think of what is irreducibly complex and uncertain?

Let us consider one last philosopher as the milestone of contemporary epistemology. According to Wittgenstein, the definition of the limits of human knowledge is one of the primary tasks of philosophy itself: "Philosophy sets limits to […] what can be thought; and, in doing so, to what cannot be thought. It must set limits to what cannot be thought by working outwards through what can be thought" (Wittgenstein, 1921: 4.113–4.114), which is an idea that deeply influences the philosophy of the last century, from logical empiricism to the most recent debate. More precisely, contemporary philosophy especially focuses on the following two perspectives: first, the epistemological meanings of ignorance (see Le Morvan, 2010 and 2011; Peels, 2010, 2011 and 2017. See also Smithson, 1988; Firestein, 2012; Gross and McGoey, 2015; Arfini, 2019; Arfini and Magnani, 2021) and, second, the ethical meanings of ignorance, especially when it comes to reflecting upon the insidious relationships between knowledge and ignorance, on the one hand, and fairness and unfairness, on the other hand (see Smith, 1983; Driver, 1989; Flanagan, 1990; Zimmerman, 1997; Sullivan and Tuana, 2007. See also Rawls, 1971). Again, ignorance may be better than knowledge. As far as fairness and unfairness are concerned, we may think of several situations in which ignorance may improve fairness, starting with the case in which our gender is ignored when we are offered an employment contract. As far as more general ethical issues are concerned, we may think of several situations in which ignorance may improve our quality of life, starting with
the kind of question that several websites popularise, as we have seen: "Have you ever asked yourself 'when will I die?' ". We may rephrase as follows: do we think that our quality of life may be improved if "our advanced life expectancy calculator will accurately predict your death date for you"? We will thoroughly work on possible answers in the following chapters. But we should start reflecting upon the opposition between the striving not to know (which continuously emerges from scripture to mythology to philosophy, as we have seen) and the striving to know (which continuously emerges from the current obsession with technological prediction, as we will see in Chapter 2). From a perspective that is both historical and theoretical, we may say that, paradoxically enough (but meaningfully), even though the history of Western culture is characterised by the attempt to give human knowledge and prediction precise limits, it is also characterised by the (increasing) attempt to make human knowledge and prediction the primary objective of human activities, from philosophy itself to science and technology. More precisely, the more technology develops, the more its primary objective is to know our future, specifically to predict our future as single individuals, from our bodies' performance to our minds' performance (as we will see in Chapter 2). The key argument of my work is precisely the following: the opposition between the striving not to know and the striving to know, to the point that we automatically predict our future as single individuals, is an exceedingly meaningful symptom for trying to understand the present time.

If we consider the milestones of the attempt to obtain certainty when it comes to human knowledge and prediction, the first thing we should say is that, from Socrates (and Plato) to Descartes to Kant (and Hume) to Wittgenstein, their attempt to limit human knowledge and prediction is also the condition for underpinning their attempt to obtain certainty when it comes to human knowledge and prediction, as we have seen. More precisely, a specific kind of capability increasingly emerges as the most promising way humans have to obtain knowledge and prediction13: logos (λόγος), which literally means "computation, reckoning" (Liddell and Scott, 1889),14 as the strictest form of rationality humans have among other forms of rationality, starting with metis (μῆτις), which literally means "wisdom, skill, craft" (Liddell and Scott, 1889)15 – thus, most interestingly, the most promising way we have to obtain knowledge and prediction shares its literal meaning with the word we use to define emerging technologies: the latter "compute" and "reckon" precisely as the former means "computation, reckoning".

If we consider ancient philosophy, according to Plato, "the most perfect knowledge arises from the addition of rational explanation [logos] to true opinion" (Plat. Theaet. 206 c) and, according to Aristotle, "we declare that the function of man is a certain form of life, and define that form of life as the exercise of the soul's faculties and activities in association with rational
principle [logos]" (Arist. Eth. Nic. I, 14, 1098 a). Logos as the strictest form of rationality is not only what makes humans obtain "the most perfect knowledge" but also what distinguishes humans' "form of life" from the other living creatures' "form of life" (which is the meaning of the context of Aristotle's words quoted above).

If we move from ancient philosophy to modern philosophy, other forms of rationality, starting with metis as "wisdom, skill, craft", get weaker and weaker. Conversely, logos as "computation, reckoning" gets stronger and stronger. On the one hand, the philosophers we have quoted, from Descartes to Kant, emphasise the primacy of logos when it comes to obtaining knowledge, especially in science. On the other hand, scientists, as deeply influenced by philosophers themselves, increase the emphasis on the primacy of logos even further, from Descartes, who is both a philosopher and a scientist, to Galilei as the founder of the modern scientific method, which is distinguished from ancient science by an even stronger logos that leads to even stronger abstraction and idealisation.16 As far as philosophers are concerned, we should at least refer to Kant's words according to which the determining judgement, as our capability to obtain certain knowledge, is logical: "Every determining judgement is logical because its predicate is a given objective concept" (Kant, 1790: 20, 223). As far as scientists are concerned, we should at least refer to Galilei not only as the founder of the modern scientific method but also as one of the authors who especially work on the attempt to obtain certainty when it comes to human knowledge and prediction: "I say that the human intellect does understand some of them [propositions] perfectly, and thus in these it has as much absolute certainty as nature itself has. Of such are the mathematical sciences alone; that is, geometry and arithmetic" (Gal. Dialog. I). Again, "understand[ing] […] perfectly" and obtaining "absolute certainty" are a matter of logos, specifically of "the mathematical sciences alone" as its highest form.

If we move from modern philosophy to contemporary philosophy, the dialogue between philosophers and scientists continues to emphasise the primacy of logos when it comes to obtaining knowledge. For instance, Cassirer's philosophical reading of Einstein's science is most instructive to understand that the primacy of logos is given by its power to take abstraction and idealisation to the extreme, which means to move from the variability of reality to the invariability of ideality:

the general theory of relativity stands methodologically at the end of this series, since it collects all particular systematic principles into the unity of a supreme postulate, in the postulate not of the constancy of things, but of the invariance of certain magnitudes and laws with regard to all transformations of the system of reference.
(Cassirer, 1923: 404)
To consider at least Wittgenstein as the contemporary philosopher we have quoted, the primacy of logos when it comes to obtaining knowledge continues to emerge even more emphatically:

Thought can never be of anything illogical, since, if it were, we should have to think illogically. It used to be said that God could create anything except what would be contrary to the laws of logic. The truth is that we could not say what an 'illogical' world would look like.
(Wittgenstein, 1921: 3.03–3.031. See also Wittgenstein, 1969)

And, if we move from the founders of contemporary philosophy to more recent philosophers, at least Berlin's words are worth quoting:

I wish to be a subject, not an object; to be moved by reasons, by conscious purposes, which are my own, not by causes which affect me, as it were, from outside. I wish to be somebody, not nobody; a doer – deciding, not being decided for, self-directed and not acted upon by external nature or by other men as if I were a thing, or an animal, or a slave incapable of playing a human role. That is, of conceiving goals and policies of my own and realising them. This is at least part of what I mean when I say that I am rational, and that it is my reason that distinguishes me as a human being from the rest of the world. I wish, above all, to be conscious of myself as a thinking, willing, active being, bearing responsibility for my choices and able to explain them by reference to my own ideas and purposes. I feel free to the degree that I believe this to be true, and enslaved to the degree that I am made to realise that it is not.
(Berlin, 1958: 203)

At least four issues are worth considering. First, rationality, even though it seems to exceed its strictest form, i.e. logos, is nothing less than what distinguishes humans from anything else and, thus, defines the core of human identity: saying that "I am rational" means saying that I am not "a thing, or an animal, or a slave incapable of playing a human role" and, thus, "it is my reason that distinguishes me as a human being from the rest of the world". Second, rationality underpins the condition for being "a subject, not an object", when it comes to identifying humans' epistemological prerogatives, especially capabilities: it is my rationality that makes me not only "a thinking […] being" but also, and especially, "conscious of myself" and even capable of "conceiving goals and policies of my own", in addition to being "able to explain" "my choices […] by reference to my own ideas and purposes". Third, rationality underpins the condition for being "a subject, not an object", when it comes to identifying humans' ethical prerogatives, especially capabilities: it is my rationality that makes me "somebody" not
only as a "willing, active being" and "a doer", i.e. "deciding" and "self-directed", when it comes to "realising" my "goals and policies", but also, and especially, as "bearing responsibility for my choices". Fourth, rationality underpins the condition for being, consequently, "free" from both an epistemological perspective (human freedom is described as resulting from something humans "believe") and an ethical perspective (human freedom is described as resulting from something humans "feel"). Thus, rationality is thought of as even the core of the definition of humans – and, most interestingly, the core of the definition of rationality is logos as humans' most promising capability to obtain knowledge and prediction.

But, if the history of Western culture, from philosophy to science, coincides with the rise of the idea according to which humans should pursue knowledge especially resulting from logos as "computation, reckoning", what is the meaning of the idea, which has never totally fallen, according to which humans should pursue ignorance (especially resulting from metis as "wisdom, skill, craft")?

In Chapter 2, we will review emerging technologies that take humans' pursuit of knowledge, specifically prediction, to the extreme. Two preliminary philosophical issues are worth considering. First, logos as "computation, reckoning" is thought of not only as the core of the definition of humans but also as the core of the definition of emerging predictive technologies. Second, emerging predictive technologies are frequently thought of as far better than humans at "computation, reckoning", which means that the former are frequently thought of as far better than the latter at knowing and predicting. But what is the meaning of the shift of knowledge and prediction from humans to emerging predictive technologies? More precisely, what is the meaning of the shift of even the core of the definition of humans themselves to emerging predictive technologies?

As far as the first philosophical issue is concerned, we may reason as follows. Saying that logos as "computation, reckoning" is thought of not only as the core of the definition of humans but also as the core of the definition of emerging predictive technologies means saying something that has characterised Western culture for millennia, as we have seen. From the divine fire to emerging predictive technologies, technology is not an ontological addition to human ontology – conversely, technology is essential and constitutive of human ontology. Thus, any criticism against the current obsession with technological prediction should start with the recognition of technology as something that, being essential and constitutive of human identity, should be used as its natural ally, and not as something that, being additional to human identity, should be removed as its natural enemy. It is no coincidence that, according to Ovid, Prometheus, who gives humans technology, may be the creator of humans.17 Thus, if we move from the Greek mythological tradition to the Roman mythological tradition,
humans' technological constitution and essence even increase. And, if we move from the ancient Prometheus to the modern Prometheus, who is a human being, specifically a scientist, technology even makes humans increase their ontological prerogatives, as we have seen: the (divine) prerogative of prediction as a kind of omniscience and the (divine) prerogative of the creation of humans resistant to disease and death. Thus, we should not be surprised that humans and technology share their distinctive characteristics, starting with logos as "computation, reckoning". But we should be surprised that humans seem to increasingly shift their distinctive characteristics from themselves to technology, to the point that speaking of shifting does not mean speaking of sharing – conversely, speaking of shifting means speaking of humans who seem to increasingly deprive themselves of even the core of their definition, according to which they are essentially qualified by the following capabilities: logos and, consequently, knowledge and prediction (which leads us from the first philosophical issue to the second philosophical issue).

As far as the second philosophical issue is concerned, we may reason as follows. If we reflect upon the meaning of the shift of knowledge and prediction from humans to emerging predictive technologies, again, we should not forget that it does not mean the switch between two totally divided ontologies. Conversely, it means the switch between humans as essentially and constitutively technological, on the one hand, and technology, on the other hand. Yet, humans and technology are two ontologies, even though they are not totally divided. Humans are living creatures and technology is an artefact, specifically the most powerful artefact humans design and use both to try to strengthen their prerogatives and to try to obtain other prerogatives, as we have seen. Thus, what may be critical is not a matter of making technology capable of "computation, reckoning" as a way to strengthen humans' capability of knowledge and prediction (as in the case of processing more data to strengthen my capability of decision-making when it comes to planning my future). Conversely, what may be critical is a matter of making technology capable of "computation, reckoning" as a way to replace humans' capability of knowledge and prediction (as in the case of processing more data to replace my capability of decision-making when it comes to planning my future).

In the first case, I happen to act as follows. First, I reflect upon the meaning of the data. Second, I reflect upon the relationship between the meaning I give to the data and my future as I ideally imagine it, which also means that I exercise my imagination. Third, I plan and make decisions accordingly, specifically by adapting my future as I ideally imagine it to the meaning I give to the data, which also means that I exercise my adaptability. Thus, not only my capability of knowledge and prediction but also several other capabilities, which underpin it, are exercised and, consequently, strengthened: my reflection, my imagination, my planning, my
decision-making and my adaptability. In the second case, I happen to act as follows. First, I do not reflect upon the meaning of the data. For instance, if the data say that my "death date" is X, I take it for granted, without reflecting, together with experts, upon data quality and, consequently, their possible meaning. Second, I do not reflect upon the relationship between the meaning I give to the data and my future as I ideally imagine it. For instance, if the data say that my "death date" is X, I take it for granted, without imagining not only possible scenarios in which X is false but also, and especially, my possibility to act against all odds. Third, I do not plan and make decisions accordingly, specifically by adapting my future as I ideally imagine it to the meaning I give to the data. For instance, if the data say that my "death date" is X, I take it for granted, without planning and making decisions according to my sensemaking, from my future as I ideally imagine it, on the one hand, to the meaning I give to the data, on the other hand. Thus, what I do not exercise and, consequently, strengthen even exceeds my capability of knowledge and prediction as underpinned by my reflection, my imagination, my planning, my decision-making and my adaptability – what I do not exercise and, consequently, strengthen is even my sensemaking (as we will see in the following chapters).

Let us add one last example to clarify the meaning of the second philosophical issue described above. And let us imagine that our professional future as we ideally imagine it is a high-stress job. The data on which our hiring is based are the following:

Those hiring for high-stress environments should look no further than PDE4B: higher expression of this gene can be associated with lower anxiety and higher problem-solving capacity. Higher expression of the APOE, TERT and APP genes appears to provide physical and mental longevity, and higher levels of the DEC2 gene allow humans to sleep less but still function at high levels and achieve increased overall vitality. Employees with such genes could be encouraged to apply for jobs that require long hours, such as physicians, drivers and pilots, and their gene-expression patterns could be monitored to make sure the genes are fully genetically active.
(Tulchinsky and Mason, 2023: 143)

Again, the issue is not to remove technology as an enemy of humans. Conversely, the issue is to use technology as an ally of humans. Thus, the question may be the following: if our professional future as we ideally imagine it is a high-stress job, would we want to be hired according to the data described above, taken for granted? If we hesitate to answer yes, we may translate our hesitation into the following philosophical words: being predicted by our genes, which is precisely what several
emerging technologies do (as we will see in Chapter 2), would deprive us of our possibility to act against all odds – being predicted by several emerging technologies would deprive us of what we have started defining as our sensemaking. And speaking of our sensemaking means speaking of our autonomy (as we will see in Chapters 3 and 4). More precisely, the argument I will develop is that the meaning of the shift of knowledge and prediction from humans to emerging predictive technologies is strictly correlated with the following kind of phenomenon: what we seem to desperately try to do, sometimes consciously and sometimes unconsciously, is precisely to relieve ourselves of our sensemaking and our autonomy, in that their burdens are more than ever unbearable – and technology, as our most powerful millennial ally again and again, seems to be designed and used precisely to relieve us of the burdens of our sensemaking and our autonomy, which seem to be more than ever unbearable in the present time as characterised by both the challenges of complexity and uncertainty and neoliberalism's extreme consequences.18

Also, the argument I will develop is that the shift of knowledge and prediction from humans to emerging predictive technologies comes at a high price. On the one hand, the shift described above means that we are willing to trade our autonomy for technological automation. Indeed, whenever we design and use technology as a way to replace, and not to strengthen, our capability of knowledge and prediction when it comes to planning our future, we automate our decisions and actions – we trade our autonomous reflection, imagination, planning, decision-making and adaptability, which underpin our autonomous knowledge, prediction, decisions and actions, for technological automation as the data we uncritically take for granted. On the other hand, the shift described above means that we are willing to automate our future in particular. Indeed, whenever we trade our autonomy for technological automation, we automate our capabilities to work on our future in particular. Even though reflection, imagination, planning, decision-making and adaptability, together with knowledge and prediction, are also key when it comes to understanding our past and our present, they are more than ever key when it comes to working on our future. We may even say that our future is the most important time of our life, in that it is open to what we have seen as crucial from scripture to mythology to philosophy: the endless exercise of our epistemological and ontological prerogatives – which means endlessly thinking of ourselves as essentially evolving by changing and striving for relationships. And what better time may there be than our future to be essentially evolving by changing and striving for relationships? Also, thinking of our future as open, i.e. as resulting from our autonomy, means thinking of it as possibly better than our present and acting accordingly, which means increasing our hope and, consequently, our happiness. Thus, our questions are even
more urgent. From a philosophical perspective, why are we willing to trade our autonomy for technological automation? From a practical perspective, why are we willing to be predicted to the point that we endanger our prerogative of acting against all odds? More precisely, why are we willing to be predicted to the point that "our advanced life expectancy calculator will accurately predict your death date for you"?

Notes

1 I worked on the relationship between human autonomy and technological automation in Chiodo, 2022, 2023a and 2023b. See also Millar, 2015; Sharon, 2017; Owens and Cribb, 2019.
2 The reference is to research conducted by Horvath, 2013; Horvath and Raj, 2018; Levine et al., 2018; Belsky et al., 2020.
3 The reference is to research conducted by Agarwal, Alomar and Shah, 2020.
4 See www.death-clock.org/ (accessed: August 28, 2023). See also www.deathclock.com/, https://thedeathclock.co/ and www.medindia.net/patients/calculators/death_clock.asp (accessed: August 28, 2023).
5 As a scholar of Western philosophy, my focus cannot extend to other cultures. Yet, I hope that my reading of Western philosophy in particular may offer, at least through comparisons, hints to try to understand something that may characterise the present time more in general, especially if we consider that, in a globalised world, the dialogue between cultures remarkably increases.
6 The reading I propose translates into epistemology the most frequent reading theology proposes by considering "the knowledge of good and evil" as a merism. See also Aug. Conf. VII, 10.
7 I worked on dualism in Chiodo, 2013.
8 I worked on the relationship between Prometheus and technology in Chiodo, 2020.
9 According to Hesiod, the reason why Prometheus gives humans the divine fire and the divine wisdom in the arts is that his brother Epimetheus, who is responsible for the attribution of qualities to the living creatures, forgets to attribute qualities to humans. Thus, Prometheus fixes his mistake by giving humans something special: again, the divine fire and the divine wisdom in the arts, which the other living creatures do not have (see Hes. Theog.).
10 References are several. See at least Howe and Wain, 1993; Halpern, 2000. See also Mayor, 2018.
11 According to Hesiod, Zeus deprives humans of the divine fire (which Prometheus gives them back): "from that time forward, ever mindful of the fraud, he [Zeus] did not give the strength of untiring fire to wretched mortal men, who dwell upon the earth" (Hes. Theog. 560).
12 The title of the section is a quote from www.death-clock.org/ (accessed: August 28, 2023).
13 References are countless. See at least Weizenbaum, 1976; Porter, 1995; Crosby, 1997; Golumbia, 2009; Bouk, 2015; Hansson, 2018; Schafer, 2018. See also the notion of the quantified self, starting with Wolf, 2010; Kelly, 2012; Lupton, 2016; Nafus, 2016; Wolf and De Groot, 2020.
14 See www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0057%3Aentry%3Dlo%2Fgos (accessed: September 7, 2023).
15 See www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0057%3Aentry%3Dmh%3Dtis1 (accessed: September 7, 2023).
16 Again, references are countless. See at least Brzeziński et al., 1990; Coniglione, Poli and Rollinger, 2004; Cartwright and Jones, 2005.
17 "Still missing was a creature finer than these [the other living creatures], with a great mind, one who could rule the rest: man was born, whether fashioned from immortal seed by the Master Artisan who made this better world, or whether Earth, newly parted from Aether above, and still bearing some seeds of her cousin Sky, was mixed with rainwater by Titan Prometheus and moulded into the image of the omnipotent gods. And while other animals look on all fours at the ground he gave to humans an upturned face, and told them to lift their eyes to the stars" (Ovid Met. I, 77–87).
18 References are several and will be offered in the following chapters. See at least Han, 2015 and 2017, for a general framework.

References

Aeschylus, Agamemnon, ed. by H. W. Smyth, Cambridge: Harvard University Press, 1926.
Aeschylus, Prometheus bound (PB), transl. by H. W. Smyth, Cambridge: Harvard University Press, 1926.
Agarwal, A., Alomar, A., Shah, D., "TspDB. Time series predict DB", Proceedings of Machine Learning Research 1: 1–31, 2020.
Agrawal, A., Gans, J., Goldfarb, A., Power and prediction. The disruptive economics of artificial intelligence, Boston: Harvard Business Review Press, 2022.
Agrawal, A., Gans, J., Goldfarb, A., Prediction machines. The simple economics of artificial intelligence, Boston: Harvard Business Review Press, 2018.
Arfini, S., The ignorant cognition. A philosophical investigation of the cognitive features of not knowing, Basel: Springer, 2019.
Arfini, S., Magnani, L., "Introduction. Knowing the unknown. Philosophical perspectives on ignorance", Synthese 199: 689–693, 2021.
Aristotle, Nicomachean ethics, transl. by H. Rackham, Cambridge: Harvard University Press, 1934.
Augustine, Confessions, ed. by C. J.-B. Hammond, Cambridge: Harvard University Press, 2014 (398).
Belsky, D. W., et al., "Quantification of the pace of biological aging in humans through a blood test, the DunedinPoAm DNA methylation algorithm", eLife 9: 1–25, 2020.
Berlin, I., "Two concepts of liberty", in Id., The proper study of mankind, London: Chatto & Windus, 1997, 191–242 (1958).
Bouk, D., How our days became numbered. Risk and the rise of the statistical individual, Chicago: University of Chicago Press, 2015.
Brzeziński, J., et al., "Idealisation I. General problems", Poznań Studies in the Philosophy of the Sciences and the Humanities 16, 1990.
Buck, R. C., "Reflexive predictions", Philosophy of Science 30/4: 359–369, 1963.
Burkert, W., Homo necans. The anthropology of ancient Greek sacrificial ritual and myth, transl. by P. Bing, Berkeley: University of California Press, 1983 (1972).
Callimachus, Hymn to Athena, ed. by A. W. Mair, London-New York: Heinemann-G. P. Putnam's Sons, 1921.
Cambra-Badii, I., Guardiola, E., Banos, J.-E., "The ethical interest of Frankenstein. Or, the modern Prometheus. A literature review 200 years after its publication", Science and Engineering Ethics 26/5: 2791–2808, 2020.
Cartwright, N., Jones, M. R., eds., Idealisation XII. Correcting the model. Idealisation and abstraction in the sciences (Poznań Studies in the Philosophy of the Sciences and the Humanities, vol. 86), Amsterdam, The Netherlands: Rodopi, 2005.
Cassirer, E., "Einstein's theory of relativity", in Id., Substance and function and Einstein's theory of relativity, transl. by W. C. Swabey and M. C. Swabey, Chicago: Open Court, 1923.
Chiodo, S., Apologia del dualismo. Un'indagine sul pensiero occidentale, Roma: Carocci, 2013.
Chiodo, S., Technology and anarchy. A reading of our era, Lanham-Boulder-New York-London: Lexington Books-The Rowman & Littlefield Publishing Group, 2020.
Chiodo, S., "Human autonomy, technological automation (and reverse)", AI & Society 37/1: 39–48, 2022.
Chiodo, S., Technology and the overturning of human autonomy, Cham: Springer, 2023a.
Chiodo, S., "Trading human autonomy for technological automation", in Handbook of critical studies of artificial intelligence, ed. by S. Lindgren, Cheltenham: Edward Elgar Publishers, 2023b, 67–78.
Coniglione, F., Poli, R., Rollinger, R., eds., Idealisation XI. Historical studies on abstraction and idealisation (Poznań Studies in the Philosophy of the Sciences and the Humanities, vol. 82), Amsterdam, The Netherlands: Rodopi, 2004.
Crosby, A. W., The measure of reality. Quantification and Western society, 1250–1600, Cambridge-New York: Cambridge University Press, 1997.
Descartes, R., A discourse on method, ed. by J. Veitch, London: J. M. Dent and Sons, 1912 (1637).
Dougherty, C., Prometheus, London: Taylor & Francis, 2006.
Driver, J., "The virtues of ignorance", The Journal of Philosophy 86/7: 373–384, 1989.
Durkheim, É., The elementary forms of religious life, transl. by C. Cosman, Oxford-New York: Oxford University Press, 2001 (1912).
Eliade, M., The sacred and the profane, transl. by W. R. Trask, New York: Harper & Row, 1961 (1957).
Euripides, Bacchae, transl. by T. A. Buckley, London: Bohn, 1850.
Finn, E., Guston, D., Robert, J. S., eds., Frankenstein. Or, the modern Prometheus. Annotated for scientists, engineers, and creators of all kinds, Cambridge: MIT Press, 2017.
Firestein, S., Ignorance. How it drives science, Oxford: Oxford University Press, 2012.
Flanagan, O., "Virtue and ignorance", Journal of Philosophy 87/8: 420–428, 1990.
Freud, S., Totem and taboo. Some points of agreement between the mental lives of savages and neurotics, transl. by J. Strachey, London: Routledge, 1950 (1913).
Frischmann, B., Selinger, E., Re-engineering humanity, Cambridge: Cambridge University Press, 2018.
Galilei, G., Dialogue concerning the two chief world systems Ptolemaic and Copernican, transl. by S. Drake, Berkeley: University of California Press, 1967 (1632).
Gartziou-Tatti, A., "Blindness as punishment", in Light and darkness in ancient Greek myth and religion, ed. by M. Christopoulos, E. Karakantza and O. Levaniouk, Lanham: Lexington Books, 2010, 75–108.
Golumbia, D., The cultural logic of computation, Cambridge: Harvard University Press, 2009.
Gray, T., Ode on a distant prospect of Eton College, London: Dodsley, 1747.
Gross, M., McGoey, L., eds., Routledge international handbook of ignorance studies, Abingdon: Routledge, 2015.
Halpern, P., The pursuit of destiny. A history of prediction, Cambridge: Perseus, 2000.
Hamzelou, J., "Aging clocks aim to predict how long you'll live", MIT Technology Review 125/4: 14–15, 2022.
Han, B., The burnout society, transl. by E. Butler, Stanford: Stanford University Press, 2015.
Han, B., Psychopolitics. Neoliberalism and new technologies of power, transl. by E. Butler, London-Brooklyn: Verso, 2017.
Hansson, S. O., ed., Technology and mathematics. Philosophical and historical investigations, Cham: Springer, 2018.
Harcourt, B. E., Against prediction. Profiling, policing, and punishing in an actuarial age, Chicago: University of Chicago Press, 2007.
Hesiod, Theogony, transl. by G. Nagy and J. Banks, Cambridge: Centre for Hellenic Studies, Harvard University, https://chs.harvard.edu/primary-source/hesiod-theogony-sb/.
Horvath, S., "DNA methylation age of human tissues and cell types", Genome Biology 14, article 3156: 2–19, 2013.
Horvath, S., Raj, K., "DNA methylation-based biomarkers and the epigenetic clock theory of ageing", Nature Reviews Genetics 19: 371–384, 2018.
Howe, L., Wain, A., eds., Predicting the future, Cambridge-New York: Cambridge University Press, 1993.
Hume, D., An enquiry concerning human understanding, ed. by T. L. Beauchamp, Oxford-New York: Oxford University Press, 1999 (1748).
Hume, D., A treatise of human nature, ed. by D. F. Norton and M. J. Norton, Oxford-New York: Oxford University Press, 2000 (1739–1740).
Kahn, A. D., "Every art possessed by man comes from Prometheus. The Greek tragedians and science and technology", Technology and Culture 11/2: 133–162, 1970.
Kant, I., Critique of pure reason, ed. by P. Guyer and A. Wood, Cambridge-New York: Cambridge University Press, 1988 (1781).
Kant, I., Critique of the power of judgment, ed. by P. Guyer, Cambridge-New York: Cambridge University Press, 2000 (1790).
Kelly, K., The quantified century, http://quantifiedself.com/conference/Palo-Alto-2012.
Le Morvan, P., "Knowledge, ignorance and true belief", Theoria 77/1: 32–41, 2010.
Le Morvan, P., "On ignorance. A reply to Peels", Philosophia 39/2: 335–344, 2011.
Levine, M. E., et al., "An epigenetic biomarker of aging for lifespan and healthspan", Aging 10/4: 573–591, 2018.
Liddell, H. G., Scott, R., An intermediate Greek-English lexicon, Oxford: Clarendon Press, 1889.
Lupton, D., The quantified self, Malden: Polity, 2016.
Mayor, A., Gods and robots. Myths, machines and ancient dreams of technology, Princeton-Oxford: Princeton University Press, 2018.
Merton, R. K., "The self-fulfilling prophecy", The Antioch Review 8/2: 193–210, 1948.
Miles, G., Classical mythology in English literature. A critical anthology, London-New York: Routledge, 1999.
Millar, J., "Technology as moral proxy. Autonomy and paternalism by design", IEEE Technology and Society Magazine 34: 47–55, 2015.
Morton, T., ed., A Routledge literary sourcebook on Mary Shelley's Frankenstein, London-New York: Routledge, 2002.
Nafus, D., Quantified. Biosensing technologies in everyday life, Cambridge: MIT Press, 2016.
Nagy, P., et al., "Facing the pariah of science. The Frankenstein myth as a social and ethical reference for scientists", Science and Engineering Ethics 26/2: 737–759, 2020.
Otto, R., The idea of the holy. An inquiry into the non-rational factor in the idea of the divine and its relation to the rational, transl. by J. W. Harvey, London-New York: Oxford University Press, 1958 (1917).
Ovid, Metamorphoses, transl. by S. Lombardo, Indianapolis-Cambridge: Hackett, 2010.
Owens, J., Cribb, A., "'My Fitbit thinks I can do better!'. Do health promoting wearable technologies support personal autonomy?", Philosophy & Technology 32: 23–38, 2019.
Peels, R., "What is ignorance?", Philosophia 38/1: 57–67, 2010.
Peels, R., "Ignorance is lack of true belief. A rejoinder to Le Morvan", Philosophia 39/2: 345–355, 2011.
Peels, R., The epistemic dimensions of ignorance, Cambridge: Cambridge University Press, 2017.
Plato, Theaetetus, transl. by H. N. Fowler, Cambridge: Harvard University Press, 1921.
Plato, Apology, ed. by H. N. Fowler, Cambridge-London: Harvard University Press-Heinemann, 1966.
Plato, Protagoras, transl. by W. R. M. Lamb, Cambridge: Harvard University Press, 1967.
Popper, K., The poverty of historicism, London-New York: Routledge, 1957.
Porter, T. M., Trust in numbers. The pursuit of objectivity in science and public life, Princeton: Princeton University Press, 1995.
Rawls, J., A theory of justice. Revised edition, Cambridge: Harvard University Press, 1999 (1971).
Romanos, G. D., "Reflexive predictions", Philosophy of Science 40/1: 97–109, 1973.
Schafer, K., "A brief history of rationality. Reason, reasonableness, rationality, and reasons", Manuscrito 41/4: 501–529, 2018.
Shakespeare, W., The tragedy of Macbeth, ed. by B. A. Mowat and P. Werstine, New York: Folger Shakespeare Library, 2013 (1623).
Sharon, T., "Self-tracking for health and the quantified self. Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare", Philosophy & Technology 30: 93–121, 2017.
Shelley, M., Frankenstein. Or, the modern Prometheus, London: Penguin Books, 1994 (1818 and 1831).
Siegel, E., Predictive analytics. The power to predict who will click, buy, lie, or die, Hoboken: Wiley, 2012.
Smith, H., "Culpable ignorance", Philosophical Review 92/4: 543–571, 1983.
Smithson, M., Ignorance and uncertainty. Emerging paradigms, New York: Springer, 1988.
Sophocles, Oedipus Tyrannus (OT), ed. by R. Jebb, Cambridge: Cambridge University Press, 1887.
Sullivan, S., Tuana, N., Race and epistemologies of ignorance, New York: State University of New York Press, 2007.
The holy bible, new international version, East Brunswick: New York International Bible Society, 1983.
Tulchinsky, I., The age of prediction. How data and technology are driving exponential change, Davos: World Economic Forum, www.weforum.org/agenda/2022/05/age-of-prediction/.
Tulchinsky, I., Mason, C. E., The age of prediction. Algorithms, AI, and the shifting shadows of risk, Cambridge: MIT Press, 2023.
Weizenbaum, J., Computer power and human reason. From judgment to calculation, San Francisco: Freeman, 1976.
Wittgenstein, L., Tractatus logico-philosophicus, ed. by D. Pears and B. McGuinness, London: Routledge & Kegan Paul, 1963 (1921).
Wittgenstein, L., On certainty, Oxford: Blackwell, 1969.
Wolf, G. I., "The data-driven life", The New York Times Magazine, www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html?pagewanted=all&_r=0.
Wolf, G. I., De Groot, M., "A conceptual framework for personal science", Frontiers in Computer Science 21/2: 1–5, 2020.
Zewe, A., "A tool for predicting the future. Researchers design a user-friendly interface that helps nonexperts make forecasts using data collected over time", MIT News, https://news.mit.edu/2022/tensor-predicting-future-0328.
Zimmerman, M., "Moral responsibility and ignorance", Ethics 107/3: 410–426, 1997.

2 DEATH CLOCKS AND OTHER EMERGING TECHNOLOGIES TO PREDICT OUR FUTURE

2.1  The current obsession with technological prediction

From scripture to mythology to philosophy, humans are thought of as essentially technological and technology is thought of as essentially predictive, as we have seen in Chapter 1. But the present time seems to take predictive technologies to the extreme, to the point that several websites even popularise the attempt to predict our death, specifically its cause and date.1 As far as professional researchers are concerned, the world's leading university,2 i.e. the Massachusetts Institute of Technology, has recently reported in the MIT Technology Review that

[s]cientists have spent the last decade developing tools called aging clocks that assess markers in your body to reveal your biological age. The big idea behind aging clocks is that they'll essentially indicate how much your organs have degraded, and thus predict how many healthy years you have left.
(Hamzelou, 2022: 14, which refers to Horvath, 2013; Horvath and Raj, 2018; Levine et al., 2018; Belsky et al., 2020, as we will see in the following section)

The general attempt to predict whatever aspect of our life as single individuals seems to characterise emerging technologies as their primary objective, starting with the exponential growth of data, computing power and artificial intelligence,3 to the point that, for instance,

[m]achine learning will increasingly be able to discern patterns and make predictions from very different data, including proxy data, beyond just underwriting ever more granular health risks revealed by genomic analysis. Policies will reflect your financial risk profile, your driving, your penchant for sports, your level of natural-disaster or crime risk based on where you live.
(Tulchinsky and Mason, 2023: 90)
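To make concrete how mechanically simple the core of such predictors can be, the following sketch mimics the logic of an aging clock as a linear model over individual biomarkers. It is a minimal illustration only: real epigenetic clocks (Horvath, 2013; Levine et al., 2018) are typically penalised linear regressions fitted on hundreds of methylation sites, whereas the site names, the coefficients and the assumed life expectancy below are invented for the sake of the example.

```python
# A toy "aging clock": a weighted sum over a handful of biomarkers.
# Real epigenetic clocks are penalised linear regressions over hundreds
# of CpG methylation sites; the sites, coefficients and the assumed
# life expectancy below are hypothetical, chosen only for illustration.

TOY_COEFFICIENTS = {
    "cpg_site_a": 42.0,   # weight per methylation beta value (0..1)
    "cpg_site_b": -17.5,
    "cpg_site_c": 23.8,
}
TOY_INTERCEPT = 31.2

def biological_age(methylation: dict[str, float]) -> float:
    """Predict a 'biological age' as intercept plus weighted markers."""
    return TOY_INTERCEPT + sum(
        coefficient * methylation[site]
        for site, coefficient in TOY_COEFFICIENTS.items()
    )

def healthy_years_left(methylation: dict[str, float],
                       assumed_life_expectancy: float = 83.0) -> float:
    """The 'death clock' step: subtract the predicted biological age
    from an assumed life expectancy, returning a bare point estimate
    with no uncertainty attached."""
    return max(0.0, assumed_life_expectancy - biological_age(methylation))

sample = {"cpg_site_a": 0.61, "cpg_site_b": 0.48, "cpg_site_c": 0.72}
print(f"biological age: {biological_age(sample):.1f}")            # 65.6
print(f"'healthy years left': {healthy_years_left(sample):.1f}")  # 17.4
```

The point of the sketch is philosophical rather than technical: the "prediction" is deterministic arithmetic over markers, delivered as a single number, and whatever authority "your death date" acquires depends on how uncritically that point estimate is taken for granted.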
A decade ago, Siegel reviewed "147 examples of predictive analytics" (Siegel, 2012, table section, also for the following quotes). The examples covered several areas: "family and personal life" (for instance, when predictions are about where one will be, with whom one will be friends, with whom one will fall in love, who will be pregnant, who will be unfaithful, who will divorce and who will die); "marketing, advertising and the web" (for instance, when predictions are about who will buy something, what one will buy, who will not buy something, what one will not buy, what will be easily sold, what will not be easily sold, which personalised recommendations, advertisements, posts and tweets will be successful, which personalised recommendations, advertisements, posts and tweets will not be successful, what will be spam and what will not be spam); "financial risk and insurance" (for instance, when predictions are about which vehicles will cause bodily harm, who will suffer bodily harm, at what age one will die, who will pay mortgages and loans and which investments will be successful); "healthcare" (for instance, when predictions are about who will die after surgery, who will need palliative care, when infections will increase, which tissue is likely to degenerate into cancer, who will respond positively to drugs, which births will be premature, the number of days patients will spend in hospitals, who will not ensure compliance with drug prescriptions and who will need preventive healthcare and healthcare); "crime fighting and fraud detection" (for instance, when predictions are about who is likely to evade taxes, who is likely to defraud public and private bodies, which claims are likely to be fraudulent when it comes to insurance and warranty, who will commit crimes again, when and where crimes will happen, when and where terrorist attacks will happen, which murders will be solved and which online activities are likely to be malicious); "fault detection for safety and efficiency" (for instance, when predictions are about which satellites are likely to require prompt maintenance, which nuclear reactors are likely to require prompt maintenance, which energy distribution cables will break and need replacement, which train tracks are likely to break and need replacement, which train wheels are likely to break and need replacement, which office equipment will break and need replacement, what will be the rate of oil production, which aviation incidents are likely to be fatal, which flights will be delayed, which travels will be delayed and which outages will happen); "government, politics, non-profit and education" (for instance, when predictions are about who will be positively influenced by campaign contact when it comes to elections, who
will be negatively influenced by campaign contact when it comes to elections, who will contribute when it comes to donations, who will not contribute when it comes to donations, which applications for research grants will be approved, which applications for disability claims will be approved, which students will drop out of school, which grades students' written essays will receive, which students will need academic assistance, which students will be successful when it comes to algebra problems and which students will not be successful when it comes to algebra problems); "human language understanding, thought and psychology" (for instance, when predictions are about which statements are likely to be lies, which statements are likely to be insults, which posts are likely to be approved, which clients are likely to leave, which drivers are likely to suffer from fatigue, who is likely to suffer from psychopathy, who is likely to suffer from schizophrenia and which brain activity will result from what one sees); "staff and employees, human resources" (for instance, when predictions are about which voluntary editors of Wikipedia will quit, which candidates will succeed at work and for which positions candidates will apply).

More recently, Tulchinsky and Mason reviewed emerging predictive technologies whose primary objective is, again, prediction and even defined the present time as "the age of prediction" (Tulchinsky and Mason, 2023: VII), in that "[w]e are now living in a world that is increasingly wired by billions of predictive algorithms, a world in which almost everything can be predicted and risk and uncertainty appear to be diminishing in almost all areas of life" (Tulchinsky and Mason, 2023: VII).4 They focused on the following emerging predictive technologies in particular: technologies related to healthcare, from medicine in general to vaccines in particular, starting with "the use of machine learning […] [that] was integral to the explosion of vaccine work that has come from the COVID-19 pandemic. […] Predictive algorithms were used to understand viruses better, predict what design elements would generate the greatest immune response, track the evolution of variants and make sense of both experiment and clinical trial data" (Tulchinsky and Mason, 2023: 7–8. See Döpfner, 2021); technologies related to the working world, from hiring to succeeding to quitting, starting with the case of a professional athlete asked to take a genetic test related to hypertrophic cardiomyopathy mutations (see Krisha, 2007; Menge, 2007) and the case of predictive algorithms that increasingly select candidates (see Dattner et al., 2019; Black and van Esch, 2020); technologies related to money, from insurance to finance (see Powers, 2011; Balasubramanian, Libarikian and McElhany, 2021); technologies related to warfare, from defence to attack, starting with predicting both the enemy's strategy in particular and one's own strategy in general, which is increasingly based on autonomous weapons (see Scharre, 2018); technologies related to elections, from campaigning to polling (see Jackson, 2020a and 2020b); technologies related to justice, from literally predicting
crimes one will commit to metaphorically predicting who committed crimes through "DNA testing and other molecular forensics technologies […] enabling predictions about guilt and innocence based not only on DNA left at the scene of the crime but also on the analysis of DNA that could have been found anywhere" (Tulchinsky and Mason, 2023: 7–8. See Ehrlich, 2015).

It is not worth adding further references to the variety of emerging predictive technologies, not only in that they are endless but also, and especially, in that the objective of my work is not their review, but the philosophical meaning of the current obsession with technological prediction. Yet, before moving from the general case of predictive technologies5 to the particular case of death clocks in the following section, it is worth noting that the history of Western science and technology has been continuously characterised by the attempt to predict the future, especially from the 17th century to the present time. First, the growth of probability and statistics (see Hacking, 1975 and 1990; Stigler, 1986), second, the growth of data, computing power and artificial intelligence (as we have seen) and, third, the growth of genomics (as we have seen and will see in the following section) have continuously increased our scientific and technological power to predict our future. And neoliberalism has further taken predictive technologies to the extreme, in that the more one can predict others' decisions and actions, the more one can make money on them (see at least Crary, 2013; Han, 2017; Sadowski, 2020; Zuboff, 2020). But, if we consider the variety of emerging predictive technologies we have reviewed, something meaningful emerges: the more predictive technologies develop, the more their focus moves from society to single individuals, to the point that we move from predicting "the total number of ice cream cones to be purchased next month in Nebraska […] [to predicting] which individual Nebraskans are most likely to be seen with cone in hand" (Siegel, 2012: 12). Thus, if we want to try to understand the present time, we should question not only the meaning of the current obsession with technological prediction but also the meaning of the increasing obsession with the prediction of our future as single individuals – why do we want to move from knowing what should be statistically probable to knowing what should be individually certain? We may rephrase as follows: why do we want to move from a condition that gives our freedom more room for manoeuvre to a condition that gives our freedom less room for manoeuvre?

Digital twins are cases in point (also for their extensive use). They are defined by the following three characteristics (see Glaessgen and Stargel, 2012; Grieves, 2014. See also Barricelli, Casiraghi and Fogli, 2019; Fuller et al., 2020; Jones et al., 2020; van der Valk et al., 2020; Liu et al., 2021; Botín-Sanabria et al., 2022). First, the presence of a physical object, sometimes inorganic and sometimes organic, as in the case of humans. Second, the presence of a technological, specifically digital, object, which is the physical object's digital twin. Third, the presence of a special relationship between them, which means that, if the former changes, the latter also changes accordingly and, if the latter changes, the former also changes accordingly.
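Since the three characteristics just listed may remain abstract, a minimal sketch of the special relationship, i.e. the bidirectional synchronisation between physical object and digital twin, may help. The class names, the single monitored variable and the crude risk rule below are invented stand-ins for the coupled data streams and simulation models of real (human) digital twins:

```python
# A minimal sketch of a digital twin's defining relationship:
# if the physical object changes, the twin changes accordingly,
# and the twin's (predictive) changes feed back to the physical object.
# All names, state variables and thresholds here are hypothetical.

class PhysicalPatient:
    """The physical object: here, a person with one measured state."""
    def __init__(self, heart_rate: float):
        self.heart_rate = heart_rate

class DigitalTwin:
    """The digital object mirroring, and acting back on, the patient."""
    def __init__(self, patient: PhysicalPatient):
        self.patient = patient
        self.heart_rate = patient.heart_rate  # mirrored state

    def sync_from_physical(self) -> None:
        # If the former (the physical object) changes,
        # the latter (the twin) changes accordingly.
        self.heart_rate = self.patient.heart_rate

    def predict_risk(self) -> str:
        # A deliberately crude stand-in for the twin's predictive models.
        return "elevated" if self.heart_rate > 100 else "normal"

    def act_on_physical(self) -> None:
        # If the latter changes its assessment, the former is acted
        # upon accordingly, e.g. an intervention is recommended.
        if self.predict_risk() == "elevated":
            print("twin -> patient: intervention recommended")

patient = PhysicalPatient(heart_rate=72.0)
twin = DigitalTwin(patient)
patient.heart_rate = 118.0  # the physical object changes...
twin.sync_from_physical()   # ...the twin follows...
twin.act_on_physical()      # ...and feeds a decision back.
```

Even in this toy form, the loop shows where the philosophical questions of the following pages enter: the twin does not merely mirror the physical object but feeds its predictions back into decisions about it.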
Digital twins are more frequently used in the case of inorganic physical objects, from manufacturing (see Holler, Uebernickel and Brenner, 2016; Kritzinger et al., 2018; Lu et al., 2020) to smart cities (see Mohammadi and Taylor, 2017; Deng, Zhang and Shen, 2021; Shahat, Hyun and Yeom, 2021). Their objective is to increase the optimisation of inorganic physical objects' performance in faster and cheaper ways, which are guaranteed by digital twins' power of prediction resulting from their data, computing power and artificial intelligence. Less frequently, but increasingly, digital twins are used in the case of organic physical objects, especially humans. Objectives and ways are analogous: the optimisation of humans' performance and digital twins' power of prediction. Human digital twins are especially used in personalised medicine,6 which is officially defined as

a medical model using characterisation of individuals' phenotypes and genotypes (e.g. molecular profiling, medical imaging, lifestyle data) for tailoring the right therapeutic strategy for the right person at the right time, and/or to determine the predisposition to disease and/or to deliver timely and targeted prevention.7

Personalised medicine powered by human digital twins has great potential when it comes to healthcare, from therapy to surgery (see Gahlot, Reddy and Kumar, 2019; Kamel Boulos and Zhang, 2021; Okegbile et al., 2023; Sahal, Alsamhi and Brown, 2022). Indeed, speaking of therapy and surgery that are tailored to us as single individuals means speaking of more successful therapy and surgery, in that our diseases are not addressed statistically, but individually. And, if it is true that the word "statistically" should mean nothing more than probability, it is also true that the word "individually" should mean something more than probability: it should mean certainty, as we have started seeing.

Again, personalised medicine powered by human digital twins has great potential. I myself am grateful for what personalised medicine may offer me. Yet, critical issues also emerge (see at least Bruynseels, Santoni de Sio and van den Hoven, 2018; Braun, 2021; Korenhof, Blok and Kloppenburg, 2021; Popa et al., 2021). For instance, a human scientist provocatively writes: "the digital twin simulating my cardiovascular system may one day 'knock on my door' in order to warn me of an imminent severe heart disease" (Braun, 2021: 397) and "this will change my life completely because besides the possibility of offering me opportunities to ward off the physical manifestation of disease, the digital twin will be exercising power over me and my life" (Braun, 2021: 397). Other human scientists reflect upon analogous critical issues. Let us imagine that "[t]he subject is healthy according to current
healthcare practices, but her digital twin indicates a certain likelihood of developing a disease later on, therefore making that the person 'is not ok' " (Bruynseels, Santoni de Sio and van den Hoven, 2018: 6). Should we think that one's digital twin is "merely instrumental in better decisions in healthcare interventions" (Bruynseels, Santoni de Sio and van den Hoven, 2018: 6)? Or should we think that one's digital twin is also "part of that person's identity" (Bruynseels, Santoni de Sio and van den Hoven, 2018: 6), in the sense that "the mere fact that other people or institutions think that you are going to be sick or weak or short-lived may make you sick, weak or short-lived" (Bruynseels, Santoni de Sio and van den Hoven, 2018: 8)? According to the authors, the answer is not the former, but the latter. The more public and private bodies think of human digital twins as true, and even certain, the more one moves from being thought of as actually healthy to being thought of as potentially sick. And thinking of one as potentially sick is quite disruptive, in that it even means redefining one's identity: in the absence of disease, one is not defined in terms of health, but in terms of sickness. Being defined as potentially sick is precisely one of the objectives of human digital twins as predictive technologies, which make medicine move from being a matter of healthcare, from therapy to surgery, to being also, and especially, a matter of prevention, and even prediction. Thus, what may be the philosophical meaning of the words quoted above, according to which "the digital twin will be exercising power over me and my life"? Human and social scientists more frequently focus on their social meaning. More precisely, they reflect upon social risks, starting with the following. First, the critical balance between the rise of private healthcare, which exponentially uses predictive technologies,8 and the fall of public healthcare, whose financial resources cannot always guarantee their use (see at least Dickenson, 2013). Second, the critical balance between the rise of the paradigm according to which individuals are clients and the fall of the paradigm according to which individuals are patients. Indeed, even from a linguistic perspective, the word "patient" is increasingly replaced by the word "client". For instance, we happen to read that "healthcare providers and stakeholders […] maximise their business by making personalised decisions and recommendations for their clients" (Sahal, Alsamhi and Brown, 2022: 11), specifically "clients (i.e. patients)" (Sahal, Alsamhi and Brown, 2022: 9). And even our sleep happens to make us clients of companies that optimise it "to fit around the escalating temporal demands of daily life, thereby helping remedy the increasing misalignment between biological and social time […] in late modern society, where alertness is prized, sleepiness is problematised and vigilance is valorised" (Williams, Coveney and Gabe, 2013: 40, as we will see in the following section and in Chapter 3). Conversely, the philosophical meaning of the words quoted above, according to which "the digital twin will be exercising power over me and my
life”, is less frequently addressed. Let us start with the following reflection. We design and use human digital twins not to predict our future in general, i.e. statistically, but to predict our future in particular, i.e. individually. As far as the causes of what we do are concerned, the case of human digital twins seems to stress the importance of optimisation: we predict the future of our bodies and minds to optimise their performance (especially in neoliberal societies, which require us to be up and running 24/​7). It is no coincidence that, according to the authors quoted above, human digital twins make experts even describe humans not in terms of bodily and mental optimisation (for instance, when we say that we feel good or we do not feel good), but in terms of engineering optimisation, from speaking of “normal functioning” to speaking of “malfunctioning” (Bruynseels, Santoni de Sio and van den Hoven, 2018: 2). Again, humans are described not in terms of bodily and mental characteristics (for instance, when we speak of care, patient recovery and personal development), but in terms of engineering characteristics, from speaking of “predictive maintenance” to speaking of “performance optimisation” to speaking of “implementation of new functionality” (Bruynseels, Santoni de Sio and van den Hoven, 2018: 2). As far as the effects of what we do are concerned, the case of human digital twins seems to stress the following phenomenon, which is key to my argument: the more I am predicted (not statistically, but individually), the more “power over me and my life” is “exercise[ed]” –​the more I am individually predicted, the less my freedom is given room for manoeuvre, which means that the less my autonomy is given room for manoeuvre when it comes to applying my reflection, my imagination, my planning, my decision-​making, my adaptability and my sensemaking to my future. I will develop my argument through more theoretical tools in the following chapters. But let us start reflecting upon the meaning of the phenomenon described above. If we are continuously told that “[n]‌ow in medical science we don’t want to know […] just how cancer works; we want to know how your cancer is different from my cancer” (Siegel, 2012: 23),9 we end up believing that a diagnosis is not a diagnosis (which implies probability), but a sentence (which implies certainty). Again, I myself am grateful for what predictive technologies in general and personalised medicine in particular may offer me. Yet, history teaches us that phenomena taken to the extreme are dangerous. And speaking of phenomena taken to the extreme means speaking of the absence of both critical thinking and, consequently, regulation as tools that replace enthusiasm’s irrationality with foresight’s rationality, specifically wisdom (as we will see in Chapter 5). Thus, on the one hand, from an empirical perspective, we should not forget that, even though emerging predictive technologies are exceedingly powerful, our future may not coincide with their prediction. For instance, the prediction
Thus, both from an empirical perspective and from an ideal perspective, there seem to be good reasons to exercise both critical thinking and, consequently, regulation when it comes to being individually predicted. In the following chapters, my objectives will be, first, reasoning about the philosophical meaning of being individually predicted and, second, reasoning about the possible distinction between cases in which it would be wiser not to be individually predicted (and why) and cases in which it would be wiser to be individually predicted (and why). Before working on my objectives, it is necessary to move from the general issue of predictive technologies to the particular issue of technologies that predict our future as single individuals even when it comes to our deaths' causes and dates.

2.2  Death clocks

The MIT Technology Review quoted above especially refers to Horvath's, Levine's and Belsky's research (in addition to the publications we have seen, see also Levine, 2013 and 2020; Horvath et al., 2015. See also Corbyn, 2022). But there are several researchers who work on technologies that predict our deaths' causes and dates more and more accurately (see at least Cawthon et al., 2003; Bocklandt et al., 2011; Sanders et al., 2012; Sanders and Newman, 2013; Fischer et al., 2014; Wayt Gibbs, 2014; Marioni et al., 2015; Peters et al., 2015; Pirracchio et al., 2015; Chen et al., 2016; Lin et al., 2016; Zhang et al., 2017; Cole et al., 2018; Foreman et al., 2018; Liu et al., 2018; Gao et al., 2019; Zhavoronkov et al., 2019; Eiriksdottir et al., 2021). Two kinds of predictive technologies especially emerge. The first kind of predictive technologies focuses on DNA methylation, which is a biological process that changes with age not only in the case of different individuals but also in the case of individuals' different organs. Thus, two individuals who share the same chronological age (for instance, 80 years old) may not share the same age in terms of DNA methylation (for instance, the first individual may be 75 years old and the second individual may be 85 years old). More precisely, even though the two individuals share the same chronological age, they may not share the same epigenetic age, i.e. epigenetic clock, which is popularised as a death clock (in addition to the kinds of death clocks that do not result from scientific research, as we have seen in Chapter 1). And, even though epigenetic clocks do not mean certainty, it is worth noting that, first, their power of prediction is more and more accurate and, second, it is increasingly exploited by public and private bodies to exercise power over us and our life, to the point that, for instance, "[t]his company wants to analyse your saliva – to try to predict when you'll die" (Robbins, 2017: no page number). The second kind of predictive technologies focuses on telomeres, which are structures at the end of chromosomes that protect them and whose lengths change with age. The longer the telomeres, the slower the aging. Thus, researchers design technologies that try not only to predict aging but also to extend telomeres' lengths. Again, telomeres do not mean certainty, but, especially in correlation with other markers starting with epigenetic clocks, their power of prediction is more and more accurate and increasingly exploited by public and private bodies. It is worth considering at least the words of the researchers quoted by the MIT Technology Review, i.e. Horvath, Levine and Belsky. The first researcher writes: I developed a multi-tissue predictor of age that allows one to estimate the DNA methylation age of most tissues and cell types. The predictor, which is freely available, was developed […] [to] form an aging clock […].
This novel epigenetic clock can be used to address a host of questions in developmental biology, cancer and aging research. […] I will show that the resulting age predictor performs remarkably well across a wide spectrum of tissues and cell types. (Horvath, 2013: 1–2) The second researcher, who develops Horvath's research, writes: "we develop a new epigenetic biomarker of aging, DNAm PhenoAge, that strongly outperforms previous measures in regards to predictions for a variety of aging outcomes, including all-cause mortality, cancers, healthspan, physical functioning and Alzheimer's disease" (Levine et al., 2018: 574), and which is defined as "a highly robust predictor of both morbidity and mortality outcomes" (Levine et al., 2018: 585). The third researcher writes: "We conducted machine-learning analysis of the original Pace of Aging measure using elastic-net regression and whole-genome blood DNA methylation data. We trained the algorithm to predict how fast a person was aging" (Belsky, 2020: 13).
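The third quotation names the machinery explicitly: elastic-net regression over genome-wide DNA methylation data, the same family of penalised regressions on which predictors such as Horvath's are built. Purely as an illustration of that shape, here is a minimal sketch on synthetic data; the numbers of CpG sites and people and the simulated age signal are invented, and nothing below reproduces any published clock's coefficients.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

# Synthetic stand-in data: methylation beta values (between 0 and 1)
# at 500 CpG sites for 200 people of known chronological age.
n_people, n_sites = 200, 500
ages = rng.uniform(20, 90, size=n_people)
betas = rng.uniform(0, 1, size=(n_people, n_sites))
betas[:, :20] += 0.004 * (ages[:, None] - 55)   # a few sites drift with age
betas = betas.clip(0, 1)

# Elastic-net regression selects a sparse subset of informative CpG sites
# and weights them into a single number: the predicted epigenetic age.
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(betas, ages)
predicted_ages = clock.predict(betas)

# "Age acceleration": being epigenetically older (or younger) than one's
# chronological age.
age_acceleration = predicted_ages - ages
print(f"CpG sites retained by the model: {(clock.coef_ != 0).sum()}")
```

The machinery itself is ordinary statistics; what is at stake in this chapter is not this machinery, but the existential use to which its outputs are put.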
Three issues especially emerge. First, the power of prediction. Second, a kind of popularisation of it. For instance, in the case of Horvath, his "predictor […] is freely available" and, in the case of Levine, who was interviewed by Corbyn in The Guardian, she "published a method of estimating biological age that combines nine blood measures and the calculator to do it is free online"10 (Corbyn, 2022: no page number). Third, a kind of ambiguity when it comes to aging and, consequently, death. It is worth adding Levine's words published in The Guardian before proposing a possible reading. When asked about aging ("At what point does an obsession with staying young and healthy become negative? Shouldn't wisdom, wrinkles and grey hair be celebrated rather than fought and derided?", Corbyn, 2022: no page number), she answers: I struggle with this. I don't want to stigmatise ageing. For most people wrinkles and grey hair don't have a huge effect on quality of life. Delaying biological ageing is about preventing or slowing the accumulation of diseases, which do affect quality of life. A lot of people in the field want to call ageing a disease. I disagree. (Corbyn, 2022: no page number) Thus, on the one hand, aging and, consequently, death are not "stigmatise[d]" and, on the other hand, they are the focus of research whose objective is precisely to increasingly control them. It is no coincidence that, when asked about her lifestyle, Levine says that she regularly measures her epigenetic age: "I always calculate my biological age based on my clinical numbers from my annual physical and I'm due another epigenetic test" (Corbyn, 2022: no page number. Also, she stresses the importance of regularly measuring one's epigenetic age in her audiobook, which is meaningfully entitled True age. Cutting-edge research to help turn back the clock, Levine, 2022). Finally, she adds details that show her control of her lifestyle as a way to control her aging: I try to eat a mostly plant-based diet, stay active and exercise. Sleep and stress levels are the areas where I know I need to pay more attention. I do intermittent fasting where I restrict the time window in which I eat. (Corbyn, 2022: no page number) We may read the kind of ambiguity described above as something that strictly correlates with the key argument of my work, according to which what we seem to desperately try to do, sometimes consciously and sometimes unconsciously, is to design and use technology as our most powerful millennial ally to relieve ourselves from burdens that, especially in the present time as characterised by both the challenges of complexity and uncertainty and neoliberalism's extreme consequences, are more than ever unbearable: the burdens of our sensemaking and our autonomy. Let us use a thought experiment and imagine that we can jointly use the further developments of the current predictive technologies focused on DNA methylation and telomeres. More precisely, let us imagine regularly measuring markers whose power of prediction is more and more accurate when it comes to aging and death. What does it mean from an existential perspective? We may imagine at least two opposite scenarios. The first scenario is negative: according to our markers, we are aging remarkably quickly. Specifically, we know that we will die of a heart attack in five years. More precisely, what we know is powered by something novel: a kind of certainty, in that we are continuously told, first, that we are individually, and not statistically, predicted and, second, that the technologies we use have an enormous power of prediction. What do we do? We may imagine at least two opposite reactions. In the first case, we are extremely passive, which means that we believe that the prediction is true and there is nothing we can do. Since it is a prediction about what is most essential for our future life (that is, will we have a future life or not?), we cannot have the strength to make sense of both our present and our future by reflecting, imagining, planning and making decisions – we cannot have the strength to make sense of both our present and our future by exercising our autonomy (whose deep philosophical meaning will be analysed in the following chapters). We may even end up thinking that, if we had not known our future, we would have lived our last five years better (but, if it is true that our "company wants to analyse […] [our] saliva […] to try to predict when […] [we]'ll die", it is also true that we could not have escaped from the prediction without specific regulation). In the second
case, we are extremely active, which means that, even though we believe that the prediction is true, we also believe that there is something we can do. Since it is a prediction about what is most essential for our future life (that is, will we have a future life or not?), we struggle to have the strength to make sense of both our present and our future by reflecting, imagining, planning and making decisions. More precisely, we struggle to exercise our autonomy by totally changing our lifestyle and rigorously undergoing countless preventive therapies, which means that we quit, move to another city and upset the life of our family, friends and colleagues. We struggle to the point that we are exhausted, and we may even end up thinking that, if we had not known our future, we would have lived our last five years better not only when it comes to our life but also when it comes to the life of our family, friends and colleagues (but, if it is true that our "company wants to analyse […] [our] saliva […] to try to predict when […] [we]'ll die", it is also true that we could not have escaped from the prediction without specific regulation). The second scenario is positive: according to our markers, we are aging remarkably slowly. Specifically, we know that we will die of old age in fifty years. More precisely, what we know is powered by something novel: a kind of certainty, in that we are continuously told, first, that we are individually, and not statistically, predicted and, second, that the technologies we use have an enormous power of prediction. What do we do? We may imagine at least two opposite reactions. In the first case, we are extremely passive, which means that, since we believe that we will live a long healthy life, we postpone and neglect both our medical checks and our reflection, imagination, planning and decision-making for our future. Indeed, the exercise of our autonomy is always challenging, and we will have time at our disposal. Thus, we end up in one of the following situations. First, we end up compromising our health precisely by postponing and neglecting our medical checks. Second, we end up compromising our reflection, imagination, planning and decision-making in general, and for our future in particular, precisely by postponing and neglecting their exercise, which means the exercise of our autonomy. Finally, we may even end up thinking that, if we had not known our future, we would have lived our life better both in terms of health and in terms of autonomy, in that our life would have been not only healthier but also, and especially, more meaningful. In the second case, we are extremely active, which means that, since we believe that we will live a long healthy life, we take our reflection, imagination, planning and decision-making for our future to the extreme. Indeed, we move from exercising our autonomy (which is always rational, as we will see in the following chapters) to exercising a kind of irrational overestimation of what can be done with a long healthy life. Thus, we end up in one of the following situations. First, we end up failing precisely because of our irrational overestimation. Second, we end up in
burnout precisely because of our irrational overestimation. Finally, we may even end up thinking that, if we had not known our future, we would have lived our life better. The scenarios described above may have alternatives. But what they share is quite clear: even though predictive technologies may offer great opportunities, we cannot design and use them without exercising both critical thinking and, consequently, regulation, especially when it comes to taking their power of prediction to the extreme – and being individually predicted about our deaths' causes and dates means taking their power of prediction to the extreme, in that speaking of numbers of healthy years to live means speaking of something that has a huge impact on the complex variety of things that make life human. Thus, why do we do it? An insight is offered by the kind of ambiguity described above. On the one hand, aging and, consequently, death are not "stigmatise[d]" and, on the other hand, they are the focus of research whose objective is precisely to increasingly control them, as we have seen. We may rephrase by using the following philosophical words. Aging and death are mostly unspeakable in the present time (see at least Coombs, 2017), especially in neoliberal societies, which require us not only to be up and running 24/7, as we have seen, but also to perform to the point that we should not age and die, in that aging and death are almost unspeakable failures. When they are not unspeakable, i.e. not "stigmatise[d]", they are increasingly controlled by the most sophisticated technologies – and, most interestingly, the most sophisticated technologies' primary way to control aging and death is precisely their power of prediction. Again, a kind of ambiguity seems to emerge: if aging and death, specifically our aging and death as single individuals, paralyse us to the point that they are mostly unspeakable, why should we control them by making them explicit to the point that they are even predicted? Would it not be better to make them as implicit as possible, and even Freudianly repressed? I believe that the key to the philosophical meaning of the current obsession with technological prediction may be found by trying to answer the questions asked above. And the answer I propose, and on which I will thoroughly work in the following chapters, is the following: the reason why we control aging and death by predicting them is that prediction makes us capable of automating them – and automating something, from aging and death to any kind of burden, makes us capable of getting rid of it. Let us think back to the scenarios described above. In any case, from the best-case scenario to the worst-case scenario, what we actually obtain from prediction is the sabotage, specifically the self-sabotage, of our autonomy – more precisely, we actually trade our autonomy for prediction's technological automation, which is our primary way to get rid of the unbearable burden of what paralyses us, from aging and death to any kind of burden, starting with our autonomy.
Indeed, in the first scenario, our autonomy ends up being sabotaged, and even self-sabotaged, in that, once our death has been predicted, we cannot have the strength to make sense of both our present and our future by exercising our autonomy, as we have seen. Thus, we may argue that our autonomy is automated, i.e. engineered, in the sense that the prediction of our death has the power to turn it off. In the second scenario, our autonomy ends up being sabotaged, and even self-sabotaged, in that, once our death has been predicted, we are totally influenced: we even quit, move to another city and upset the life of our family, friends and colleagues, as we have seen. Thus, we may argue that our autonomy is automated, i.e. engineered, in the sense that the prediction of our death has the power to replace autonomous decisions and actions with heteronomous, and even irrational, decisions and actions. In the third scenario, our autonomy ends up being sabotaged, and even self-sabotaged, in that, once our death has been predicted, we end up compromising our reflection, imagination, planning and decision-making in general, and for our future in particular, precisely by postponing and neglecting their exercise, which means the exercise of our autonomy, as we have seen. Thus, we may argue that our autonomy is automated, i.e. engineered, in the sense that the prediction of our death has, again, the power to turn it off. In the fourth scenario, our autonomy ends up being sabotaged, and even self-sabotaged, in that, once our death has been predicted, we move from exercising our autonomy to exercising a kind of irrational overestimation of what can be done with a long healthy life, as we have seen. Thus, we may argue that our autonomy is automated, i.e. engineered, in the sense that the prediction of our death has, again, the power to replace autonomous decisions and actions with heteronomous, and even irrational, decisions and actions. If it makes sense, we may also argue that the prediction of one's death through the most sophisticated technologies is exceedingly meaningful for the following reason: there is hardly anything more impactful than the prediction of one's death when it comes to exercising one's autonomy, in that the exercise of one's autonomy is especially a matter of one's future. Again, we are more than ever autonomous when we reflect, imagine, plan and make decisions to make sense of our life – and there is hardly anything more challenging than our future when it comes to making sense of our life. Thus, why should we not use our most powerful millennial ally, i.e. technology, to automate our future, if the exercise of our autonomy when it comes to making sense of our life is more than ever unbearable? Indeed, the cases in which we may think of technological prediction as our primary way to get rid of our burdens are countless. Outwardly, they are ways through which we can optimise ourselves, from our bodies' performance to our minds' performance. Yet, inwardly, they are ways through which we can get rid of the unbearable burden of self-optimisation itself as neoliberal
societies' dogma. For instance, when I "calculate my biological age based on my clinical numbers from my annual physical", I automatically obtain data that increasingly move the burden of autonomously feeling and knowing myself from myself to technological prediction. I am not saying that it is not useful. Conversely, I am saying that one of the reasons why it is useful is precisely that it makes my optimisation easier and easier – my optimisation increasingly moves from resulting from the more challenging exercise of my autonomy to resulting from the less challenging exercise of technological automation. To obey data is exponentially easier than to be capable of autonomously feeling and knowing oneself. Let us consider the case of sleep, which is by definition the metaphor of death. Countless technologies are designed and used to optimise our sleep, especially through their power of prediction (see Kitchin and Dodge, 2011; Littlefield, 2018; Lyall, 2021; Nansen, Mannell and O'Neill, 2021; Lyall and Nansen, 2023).11 Again, the idea according to which we should be up and running 24/7 emerges in at least the following two senses. First, our sleep is not an empty time that is far from our daily life, but a time we should optimise minute by minute to optimise our daily life's performance, from our bodies to our minds. Second, if it is true that our sleep is not an empty time, but a time we should optimise minute by minute, it is also true that our sleep moves from being a kind of mystery we do not know in detail to being not only something we know minute by minute but also something we predict minute by minute. The analogy between sleep and death is quite clear: in both cases, we design and use countless technologies to optimise ourselves by moving from a kind of mystery to knowledge, specifically prediction. And the analogy between sleep and death goes even further: in both cases, speaking of knowledge and prediction means speaking of automation and engineering – in both cases, what we actually obtain from knowledge and prediction is not that we optimise our capabilities (for instance, our capabilities of self-regulation, in the case of sleep, and self-mastery, in the case of death), but that we get rid of their burdens (for instance, we can stop self-regulating ourselves through the challenging exercise of our wisdom if our sleep is a matter of automated and engineered data that direct us minute by minute, and we can stop self-mastering ourselves through the challenging exercise of our wisdom if our death is a matter of automated and engineered data that direct us day by day). Let us consider cases in point that show that, in the case of sleep as the metaphor of death, knowledge and prediction are actually used to obtain a paradoxical optimisation, which may be thought of as a technological optimisation coinciding with a human atrophy – which means, again, trading human autonomy for technological automation and engineering.
Countless technologies share the general objective of "[s]et[ting] automations to make life easier".12 For instance, we can buy mattresses accurately measuring […] [our] biometrics – […] [our] average heart and breathing rates and movements – throughout the night, then automatically adjusting to […] [our] individual needs, designed to continuously help improve […] [our] sleep over time. […] [We]'ll see a daily snapshot of factors that can dramatically affect […] [our] sleep quality: duration, efficiency and timing.13 We can buy mattress covers that make us "[r]each […] [our] full potential",14 first, by monitoring and tracking our "nightly health metrics such as HRV, heart rate, respiratory rate and more", second, by giving us "[d]eeper sleep with automatic temperature adjustments" and, third, by giving us "the easiest wake up possible" "with a vibration and thermal alarm". And we can buy masks that make us have "more insight [that] allows […] [us] to make better decisions throughout the day"15 by "collect[ing] an unprecedented volume of sleep data", "[f]rom head orientation tracking to EEG brainwave monitoring". Thus, on the one hand (outwardly), we optimise ourselves, from "mak[ing] better decisions" in particular to "[r]each[ing] […] [our] full potential" in general. On the other hand (inwardly), we optimise ourselves in a paradoxical, and even self-sabotaging, way: we optimise ourselves (starting with our sleep) by replacing challenging optimisations of our capabilities (starting with self-regulating our sleep through healthy lifestyles) with "easier" optimisations of technologies (starting with whatever automates and engineers our sleep. It is no coincidence that we can find the word "automation" three times: "[s]et[ting] automations", "automatically adjusting" and "automatic temperature adjustments"). The paradoxical, and even self-sabotaging, replacement of human optimisation with technological optimisation also emerges in the case of self-tracking technologies we exponentially design and use. For instance, we may happen to experience the following situation, which is reported as a testimony on the website of the quantified self16: Unless I spend a mostly sleepless night, or have fewer than six hours of sleep, how I feel about the quality of that sleep isn't always correlated with or supported by the data measured by my Basis watch. […] Not to mention feeling well rested in general. But then, when I see the numbers from Basis [watch] contradict my positive assessment of my rest, my sense of having rested well is suddenly undermined as if those Basis [watch] numbers somehow have the power to make me doubt my own experience.17
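The "data" and "numbers" that such devices oppose to one's own feeling are, at bottom, simple interval arithmetic over the night's sensor readings. Here is a minimal sketch of the kind of "daily snapshot" quoted above ("duration, efficiency and timing"); the input values are invented, and the computation is a generic illustration, not any vendor's algorithm.

```python
from datetime import datetime, timedelta

def sleep_snapshot(in_bed: tuple, asleep_intervals: list) -> dict:
    """Duration, efficiency and timing, as in the trackers quoted above."""
    time_in_bed = in_bed[1] - in_bed[0]
    duration = sum((end - start for start, end in asleep_intervals), timedelta())
    return {
        "duration_hours": round(duration.total_seconds() / 3600, 2),
        # Efficiency: share of time in bed actually spent asleep.
        "efficiency": round(duration / time_in_bed, 2),
        "timing": in_bed[0].strftime("%H:%M"),
    }

night = sleep_snapshot(
    in_bed=(datetime(2023, 9, 24, 23, 0), datetime(2023, 9, 25, 7, 0)),
    asleep_intervals=[
        (datetime(2023, 9, 24, 23, 25), datetime(2023, 9, 25, 3, 10)),
        (datetime(2023, 9, 25, 3, 40), datetime(2023, 9, 25, 6, 50)),
    ],
)
print(night)  # {'duration_hours': 6.92, 'efficiency': 0.86, 'timing': '23:00'}
```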
The point is not what is truer. Human "feeling", "sense" and "experience"? Or technological "data" that are "measured" and "numbers"? Conversely, from a philosophical perspective, the point is that, even in the case in which the latter are truer than the former, the more we replace the former with the latter, the more we atrophy capabilities that may be essential for us not only in a variety of experiences, from grasping the quality of our sleep in particular to grasping the quality of our health in general, but also in a kind of meta-experience, which is the meta-experience of (pleasantly) making sense of our life. Whenever we can "feel" ourselves, from the quality of our sleep to the quality of our health, we can be capable of both experiencing our life and (pleasantly) meta-experiencing ourselves as making sense of our life by "feeling" it and grasping its quality – we can be capable of (pleasantly) meta-experiencing what being human means. The more we replace our "feeling" with technological "data", the more we atrophy our capability to (pleasantly) meta-experience ourselves as humans who make sense of their life by "feeling" it and grasping its quality. Interestingly enough, according to another testimony on the website of the quantified self, the reason why the self-tracker decided to stop tracking was the following: I had stopped trusting myself / letting the numbers drown out / my intuition / my instincts […] I was addicted / to my iPhone apps / to getting the right numbers […] [.] Each day / my self-worth was tied to the data / [. O]ne pound heavier this morning? / You're fat. / 2 g too much fat ingested? / You're out of control. / Skipped a day of running? / You're lazy. / Didn't help 10 people today? / You're selfish.18 Interestingly enough, the correlation between atrophying human capabilities and automating and engineering them, together with human identity itself, emerges. Indeed, the fall of the former ("trusting myself […] [,] my intuition / my instincts […] [and] my self-worth") correlates with the rise of the latter ("the numbers […] [and] the data"). More precisely, the former, together with human identity itself, are automated and engineered by the latter: being "fat" automatically results from being "one pound heavier this morning", being "out of control" automatically results from "2 g too much fat ingested", being "lazy" automatically results from having "[s]kipped a day of running" and being "selfish" automatically results from not having "help[ed] 10 people today". Again, why do we do it? The answer seems to be clearer. The cases in point we have considered show that "automations […] make life easier", as we have seen. If I want to understand if I am "fat […] [,] out of control […] [,] lazy […] [and] selfish" (especially in a society that obsessively requires me to be the opposite), I need time and effort. First, I need to exercise a variety of capabilities: not only "feeling" myself, as we have seen, but also introspection, starting with analysis and abstraction. Second, I need the mediation of experts, from dieticians to psychologists (and from scientists' writings to human and social scientists' writings). Third, I need to exercise a variety of capabilities, from making comparisons between my perspective and the experts' perspectives to making autonomous decisions to acting accordingly. Conversely, I do not need time and effort if I automate and engineer my being "fat […] [,] out of control […] [,] lazy […] [and] selfish". More precisely, if I live in a society that obsessively requires me to be the opposite, I need nothing more than immediate "numbers […] [and] data" to know, and even predict, who I am.
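The automation the testimony describes can be transcribed literally as a handful of threshold rules. In the following sketch the cut-offs simply restate the testimony quoted above; they are nobody's actual app logic, and that is precisely the point: a few lines of rules stand in for introspection, expert mediation and autonomous comparison.

```python
def automated_identity(weight_change_lbs: float, excess_fat_g: float,
                       ran_today: bool, people_helped: int) -> list:
    """Identity labels produced automatically from tracked numbers,
    mirroring the self-tracker's testimony quoted above."""
    labels = []
    if weight_change_lbs >= 1:   # "one pound heavier this morning?"
        labels.append("fat")
    if excess_fat_g >= 2:        # "2 g too much fat ingested?"
        labels.append("out of control")
    if not ran_today:            # "skipped a day of running?"
        labels.append("lazy")
    if people_helped < 10:       # "didn't help 10 people today?"
        labels.append("selfish")
    return labels

print(automated_identity(1.0, 0.0, ran_today=True, people_helped=4))
# ['fat', 'selfish'] -- no introspection, no experts, no comparison: just rules.
```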
Again, optimisation is not ours, but technology's – optimisation is not something we actually want to obtain when it comes to our capabilities, but something of which we actually want to get rid, in that, especially in a society that obsessively requires optimal performance of us, the burden of self-optimisation is more than ever unbearable. The burden of self-optimisation is also challenged by one of the most typical characteristics of the present time: uncertainty, from global health emergencies to global climate emergencies to global geopolitical crises to global economic crises to global crises of democracies and ideals. Uncertainty is mostly represented as the most complex challenge of the present time, in that, as Keynes says, it is a matter of "no scientific basis on which to form any calculable probability whatever. We simply do not know" (Keynes, 1979: 114).19 Interestingly enough, Keynes also says that "the expectation of life is only slightly uncertain" (see note 19), which is precisely the focus of countless emerging technologies whose objective is to make "the expectation of life" less and less "uncertain", as we have seen. Before moving to the following chapters, in which I will develop my argument through more theoretical tools, we may start questioning the philosophical meaning of the move described above: is it wise to move from an "expectation of life [that] is only slightly uncertain", i.e. probable, to an "expectation of life" that is more and more certain (especially if we consider the further developments of the current predictive technologies)? And, again, why are we willing to be predicted to the point that we endanger our prerogative of acting against all odds? As far as the second question is concerned, I will work on it in the following chapters. As far as the first question is concerned, I start answering by proposing another perspective from which we may think of uncertainty. Uncertainty is surely challenging, in that it frequently means a radical absence of knowledge and prediction: "We simply do not know". Yet, whenever "[w]e simply do not know", we can experience the (pleasant) opening up of possibilities – whenever we "do not know" and do not predict, we can (pleasantly) exercise our reflection, imagination, planning and decision-making to make sense of both our present life and, especially, our future life. If it is true that uncertainty is the most complex challenge of the present time, it is also true
that we, as humans, can be capable of making a virtue of necessity (which is precisely one of the meanings of wisdom, starting with stoic philosophy, as we will see in the following chapters). If we "do not know" and do not predict, for instance, that we will die of a heart attack in five years, we can (pleasantly) work, day by day, on the kind of individual we want to be in fifty years – we can (pleasantly) work, day by day, on our evolution by reflecting, imagining, planning and making decisions that can make us substantially move from who we are today to who we will be in fifty years. And there is hardly anything better than (pleasantly) working on our evolution from a real present to an ideal future when it comes to making sense of our present life (even though we will not have a future life). If the insight described above makes sense, I may rephrase the objectives on which I will work in the following chapters as follows. First, I will question the philosophical meaning of our willingness to trade an ideally open future (in which uncertainty may even play a positive role) for a really narrow future (in which uncertainty may even be totally replaced by individual predictions as what public and private bodies increasingly exploit, no matter if they are actually true or actually false). Second, I will question the possible distinction between cases in which it would be wiser not to try to replace uncertainty with individual predictions (and why, at least from a philosophical perspective) and cases in which it would be wiser to try to replace uncertainty with individual predictions (and why, at least from a philosophical perspective). In both cases, a further philosophical issue, which has already emerged, will serve as a guiding star: questioning the meaning of making sense of human life – and, especially, questioning the relationship between the meaning of making sense of human life and the way in which humans design and use technology in the present time. More precisely, what does the latter reveal about the former?

Notes

1 See www.death-clock.org/ (accessed: August 28, 2023), as we have seen in Chapter 1. See also www.deathclock.com/, https://thedeathclock.co/ and www.medindia.net/patients/calculators/death_clock.asp (accessed: August 28, 2023).
2 At least according to the QS World University Rankings 2024, see www.topuniversities.com/university-rankings/world-university-rankings/2024 (accessed: September 12, 2023).
3 References are several, starting with founding works by both human and social scientists (with a focus on a critical perspective, both philosophical in general and ethical in particular), on the one hand, and scientists and technologists, on
the other hand (from more optimistic views to more pessimistic views). See at least Turing, 1950; Winner, 1977; Ihde, 1978, 1993, 2001 and 2009; Latour, 1987, 1991 and 1999; Foucault, 1988; Rosenberg, 1994; Dyson, 1997 and 2012; Shrader-Frechette and Westra, 1997; Clark, 2003; Garreau, 2005; Moor, 2006; Böhme, 2008; Floridi, 2008 and 2014; Wallach and Allen, 2008; Arthur, 2009; Nilsson, 2009; Kelly, 2010; van de Poel and Goldberg, 2010; Anderson and Anderson, 2011; Cheney-Lippold, 2011; van de Poel and Royakkers, 2011; Kroes, 2012; Mayer-Schönberger and Cukier, 2013; Brynjolfsson and McAfee, 2014 and 2017; Kitchin, 2014; Domingos, 2015; Pasquale, 2015; Rosenberger and Verbeek, 2015; Alpaydin, 2016; Harari, 2016; Mittelstadt et al., 2016; O'Neil, 2016; Seyfert and Roberge, 2016; Vallor, 2016; Finn, 2017; Greenfield, 2017; Lindgren, 2017; Tegmark, 2017; Wallach and Asaro, 2017; Baricco, 2018; Noble, 2018; DeNardis, 2020; Sadowski, 2020; Crawford, 2021; Nowotny, 2021.
4 As far as the distinction between risk and uncertainty is concerned (the former is predictable in a way in which the latter is not predictable), see at least Keynes, 1979; Bernstein, 1996; Hansson, 1996; Kay and King, 2020.
5 In any case, see at least the following references: on predictions related to physical and mental healthcare (excluding death clocks, which will be addressed in the following section), see Kapoor, Burleson and Picard, 2007; Frankham et al., 2011; Dalenberg, 2014; Brandt, 2017; Gadebusch Bondio, Spöring and Gordon, 2017; Reece and Danforth, 2017; Schmack et al., 2017; Yarkoni and Westfall, 2017; Sterzer et al., 2018; Belsher et al., 2019; Goldman, 2019 and 2020; Olavsrud, 2019; Pohl et al., 2019; Rae et al., 2019; Kube et al., 2020; Tanwar et al., 2021; Makin, 2022; on predictions related to students' performance, see Judge and Zapata, 2015; Hall, O'Connell and Cook, 2017; Selzam et al., 2017; Gardner and Brooks, 2018; Helal et al., 2018; Morgan et al., 2019; on predictions related to the working world, see Acuna, Allesina and Kording, 2012; Richter and Dumke, 2015; Persson, 2016; Bogen, 2019; Rhea et al., 2022; on predictions related to money, see Betz et al., 2014; Blumenstock, Cadamuro and On, 2015; Telpaz, Webb and Levy, 2015; Alain et al., 2016; Jean et al., 2016; Barboza, Kimura and Altman, 2017; Dong, Ratti and Zheng, 2019; Kiviat, 2019; D'Agostino, 2020; Leong et al., 2020; on predictions related to warfare, see Basuchoudhary, 2018; Deeks, 2018; on predictions related to elections, see Grover et al., 2019; on predictions related to justice, see Gottfredson and Tonry, 1987; McCue, 2007; Jackson, 2012; Baesens, 2015; Kleinberg et al., 2015; Baron, Losey and Berman, 2016; Lum and Isaac, 2016; Tayebi, 2016; Athey, 2017; Chouldechova, 2017; Rosser et al., 2017; Shapiro, 2017; Barlett, 2019; De Keijser, Roberts and Ryberg, 2019; Brayne, 2021; Egbert, 2021; McDaniel and Pease, 2021; Iftimiei and Iftimiei, 2022; Medvedeva, Wieling and Vols, 2023; on predictions related to society, see Lavner, Karney and Bradbury, 2016; Singh et al., 2017; Parkinson, Kleinbaum and Wheatley, 2018; Rosenfeld and Kraus, 2018; Xiao, Lou and Frisby, 2018; Fernández-Rovira and Giraldo-Luque, 2022; Preiss, 2023.
6 Human digital twins are also used in other areas, from society in general (see, for instance, Ukko et al., 2022) to economy in particular (see, for instance, Anshari, Almunawar and Masri, 2022). But personalised medicine is their most important area both in general and for the focus of my work.
7 See https://research-and-innovation.ec.europa.eu/research-area/health/personalised-medicine_en (accessed: September 18, 2023).
8 See, for instance, 23andMe (www.23andme.com/en-int/, accessed: September 18, 2023).
9 Siegel quotes Gladwell's TED talk entitled "Choice, happiness and spaghetti sauce", see www.youtube.com/watch?v=iIiAAhUeR6Y (accessed: September 20, 2023).
10 See www.longevityadvantage.com/mortality-score-and-phenotypic-age-calculator/ (accessed: September 20, 2023).
11 As far as specific technologies are concerned, see, for instance, www.sleepnumber.com/pages/sleepiq-sleep-tracker (accessed: September 22, 2023), www.eightsleep.com/eu/pod-cover/ (accessed: September 22, 2023), https://sleepshepherd.com/ (accessed: September 22, 2023), https://us.morphee.co/ (accessed: September 22, 2023), www.philips-hue.com/en-gb (accessed: September 22, 2023) and https://sleepgadgets.io/nightingale-sleep-system/ (accessed: September 22, 2023). I had the opportunity to work on the technological optimisation of sleep in two interdisciplinary publications: De Cristofaro and Chiodo, 2023; Chiodo and Forino, forthcoming.
12 See www.philips-hue.com/en-gb (accessed: February 7, 2023).
13 See www.sleepnumber.com/pages/sleepiq-sleep-tracker (accessed: September 24, 2023).
14 See, also for the following quotes, www.eightsleep.com/eu/pod-cover/ (accessed: September 24, 2023).
15 See, also for the following quotes, https://sleepshepherd.com/ (accessed: September 24, 2023).
16 See Chapter 1, note 13. See also Lupton, 2012 and 2018; Baron et al., 2017.
17 See https://quantifiedself.com/show-and-tell/?project=756 (accessed: September 24, 2023).
18 See https://quantifiedself.com/blog/why-i-stopped-tracking/ (accessed: September 26, 2023).
19 It is worth quoting more extensively: By 'uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters, there is no scientific basis on which to form any calculable probability whatever. We simply do not know. (Keynes, 1979: 113–114)

References

Acuna, D. E., Allesina, S., Kording, K. P., "Predicting scientific success", Nature 7415/489: 201–202, 2012.
Alain, Y. L. C., et al., "Predicting online product sales via online reviews, sentiments, and promotion strategies. A big data architecture and neural network approach", International Journal of Operations & Production Management 36/4: 358–383, 2016.
Alpaydin, E., Machine learning. The new AI, Cambridge: MIT Press, 2016.
Anderson, M., Anderson, S. L., Machine ethics, New York: Cambridge University Press, 2011.
Anshari, M., Almunawar, M. N., Masri, M., "Digital twin. Financial technology's next frontier of robo-advisor", Journal of Risk and Financial Management 163/15: 1–9, 2022.
Arthur, W. B., The nature of technology. What it is and how it evolves, New York: Free Press, 2009.
Athey, S., "Beyond prediction. Using big data for policy problems", Science 6324/355: 483–485, 2017.
Baesens, B., Fraud analytics using descriptive, predictive, and social network techniques. A guide to data science for fraud detection, Hoboken: Wiley, 2015.
Balasubramanian, R., Libarikian, A., McElhany, D., "Insurance 2030. The impact of AI on the future of insurance", 2021, www.mckinsey.com/industries/financial-services/our-insights/insurance-2030-the-impact-of-ai-on-the-future-of-insurance.
Barboza, F., Kimura, H., Altman, E., "Machine learning models and bankruptcy prediction", Expert Systems with Applications 83: 405–417, 2017.
Baricco, A., The game, Torino: Einaudi, 2018.
Barlett, C. P., Predicting cyberbullying. Research, theory, and intervention, London: Academic Press, 2019.
Baron, J. R., Losey, R. C., Berman, M. D., eds., Perspectives on predictive coding. And other advanced search methods for the legal practitioner, Chicago: American Bar Association, 2016.
Baron, K., et al., "Orthosomnia. Are some patients taking the quantified self too far?", Journal of Clinical Sleep Medicine 13/2: 351–354, 2017.
Barricelli, B., Casiraghi, E., Fogli, D., "A survey on digital twin. Definitions, characteristics, applications, and design implications", IEEE Access 7: 167653–167671, 2019.
Basuchoudhary, A., Predicting hotspots. Using machine learning to understand civil conflict, Lanham: Lexington Books-The Rowman & Littlefield Publishing Group, 2018.
Belsher, B. E., et al., "Prediction models for suicide attempts and deaths. A systematic review and simulation", Archives of General Psychiatry 76/6: 642–651, 2019.
Belsky, D. W., et al., "Quantification of the pace of biological aging in humans through a blood test, the DunedinPoAm DNA methylation algorithm", eLife 9: 1–25, 2020.
Bernstein, P. L., Against the gods. The remarkable story of risk, New York: Wiley, 1996.
Betz, F., et al., "Predicting distress in European banks", Journal of Banking & Finance 45: 225–241, 2014.
Black, J. S., van Esch, P., "AI-enabled recruiting. What is it and how should a manager use it?", Business Horizon 63/2: 215–226, 2020.
Blumenstock, J., Cadamuro, G., On, R., "Predicting poverty and wealth from mobile phone metadata", Science 6264/350: 1073–1076, 2015.
Bocklandt, S., et al., "Epigenetic predictor of age", PloS One 6/6: 1–6, 2011.
Bogen, M., "All the ways hiring algorithms can introduce bias", Harvard Business Review, May 6, 2019, no page numbers.
Böhme, G., Invasive technification. Critical essays in the philosophy of technology, transl. by C. Shingleton, London-New York: Bloomsbury, 2012 (2008).
Botín-Sanabria, D. M., et al., "Digital twin technology challenges and applications. A comprehensive review", Remote Sensing 1335/14: 1–25, 2022.
Brandt, M. J., "Predicting ideological prejudice", Psychological Science 28/6: 713–722, 2017.
Braun, M., "Represent me, please! Towards an ethics of digital twins in medicine", Journal of Medical Ethics 47: 394–400, 2021.
Brayne, S., Predict and surveil. Data, discretion, and the future of policing, New York: Oxford University Press, 2021.
Bruynseels, K., Santoni de Sio, F., van den Hoven, J., "Digital twins in healthcare. Ethical implications of an emerging engineering paradigm", Frontiers in Genetics 9: 1–11, 2018.
Brynjolfsson, E., McAfee, A., The second machine age. Work, progress and prosperity in a time of brilliant technologies, New York: Norton & Co., 2014.
Brynjolfsson, E., McAfee, A., Machine, platform, crowd. Harnessing our digital future, New York: Norton & Co., 2017.
Cawthon, R. M., et al., "Association between telomere length in blood and mortality in people aged 60 years or older", The Lancet 9355/361: 393–395, 2003.
Chen, B. H., et al., "DNA methylation-based measures of biological age. Meta-analysis predicting time to death", Aging 9/8: 1844–1865, 2016.
Cheney-Lippold, J., "A new algorithmic identity. Soft biopolitics and the modulation of control", Theory, Culture and Society 28/6: 164–181, 2011.
Chiodo, S., Forino, I., "Beds as symbols of human identity. A varied history of power", in Sleep and its meanings. Essays from critical sleep studies, ed. by D. De Cristofaro, Cambridge: MIT Press, forthcoming.
Chouldechova, A., "Fair prediction with disparate impact. A study of bias in recidivism prediction instruments", Big Data 5/2: 153–163, 2017.
Clark, A., Natural-born cyborgs. Minds, technologies and the future of human intelligence, Oxford: Oxford University Press, 2003.
Cole, J. H., et al., "Brain age predicts mortality", Molecular Psychiatry 23: 1385–1392, 2018.
Coombs, S., Young people's perspectives on end-of-life. Death, culture and the everyday, Cham: Springer, 2017.
Corbyn, Z., "Morgan Levine: 'Only 10–30% of our lifespan is estimated to be due to genetics'", The Guardian, www.theguardian.com/science/2022/may/07/morgan-levine-only-10-30-of-our-lifespan-is-estimated-to-be-due-to-genetics.
Crary, J., 24/7. Terminal capitalism and the ends of sleep, London-Brooklyn-New York: Verso, 2013.
Crawford, K., Atlas of AI. Power, politics, and the planetary costs of artificial intelligence, New Haven: Yale University Press, 2021.
D'Agostino, A., Predicting personality. Using AI to understand people and win more business, Hoboken: Wiley, 2020.
Dalenberg, J. R., "Evoked emotions predict food choice", PloS One 12/9: 115388, 2014.
Dattner, B., et al., "The legal and ethical implications of using AI in hiring", Harvard Business Review, April 25, 2019, no page numbers.
De Cristofaro, D., Chiodo, S., "Quantified sleep. Self-tracking technologies and the reshaping of 21st-century subjectivity", Historical Social Research 48/2: 176–193, 2023.
De Keijser, J. W., Roberts, J. V., Ryberg, J., eds., Predictive sentencing. Normative and empirical perspectives, Oxford-Portland: Hart Publishing, 2019.
Deeks, A. S., "Predicting enemies", Virginia Law Review 104/8: 1529–1592, 2018.
DeNardis, L., The internet in everything. Freedom and security in a world with no off switch, New Haven: Yale University Press, 2020.
Deng, T., Zhang, K., Shen, Z. J., "A systematic review of a digital twin city. A new pattern of urban governance toward smart cities", Journal of Management Science and Engineering 6: 125–134, 2021.
Dickenson, D., Me medicine vs. we medicine. Reclaiming biotechnology for the common good, New York: Columbia University Press, 2013.
Domingos, P., The master algorithm. How the quest for the ultimate learning machine will remake our world, London: Allen Lane, 2015.
Dong, L., Ratti, C., Zheng, S., "Predicting neighbourhoods' socioeconomic attributes using restaurant data", Proceedings of the National Academy of Sciences 116/31: 15447–15452, 2019.
Döpfner, M., "BioNTech founders Özlem Türeci and Ugur Sahin on developing the BioNTech-Pfizer COVID-19 vaccine, the future of fighting cancer and whether people can live to 200", Business Insider, March 23, 2021.
Dyson, G., Darwin among the machines. The evolution of human intelligence, New York: Basic, 1997.
Dyson, G., Turing's cathedral. The origin of the digital universe, New York: Pantheon, 2012.
Egbert, S., Criminal futures. Predictive policing and everyday police work, Abingdon: Routledge, 2021.
Ehrlich, Y., "A vision for ubiquitous sequencing", Genome Research 25/10: 1411–1416, 2015.
Eiriksdottir, T., et al., "Predicting the probability of death using proteomics", Communications Biology 758/4: 1–11, 2021.
Fernández-Rovira, C., Giraldo-Luque, S., eds., Predictive technology in social media, Boca Raton: CRC Press, 2022.
Finn, E., What algorithms want. Imagination in the age of computing, Cambridge: MIT Press, 2017.
Fischer, K., et al., "Biomarker profiling by nuclear magnetic resonance spectroscopy for the prediction of all-cause mortality. An observational study of 17.345 persons", PloS Medicine 11/2: 1–12, 2014.
Floridi, L., "Artificial intelligence's new frontier. Artificial companions and the fourth revolution", Metaphilosophy 39/4–5: 651–655, 2008.
Floridi, L., The 4th revolution. How the infosphere is reshaping human reality, New York: Oxford University Press, 2014.
Foreman, K. J., et al., "Forecasting life expectancy, years of life lost, and all-cause and cause-specific mortality for 250 causes of death. Reference and alternative scenarios for 2016–40 for 195 countries and territories", The Lancet 10159/392: 2052–2090, 2018.
Foucault, M., "Technologies of the self", in Ethics. Subjectivity and truth, ed. by P. Rabinow, New York: The New Press, 1997, 223–252 (1988).
Frankham, R., et al., "Predicting the probability of outbreeding depression", Conservation Biology 25/3: 465–475, 2011.
Fuller, A., et al., "Digital twin. Enabling technologies, challenges and open research", IEEE Access 8: 108952–108971, 2020.
Gadebusch Bondio, M., Spöring, F., Gordon, J. S., eds., Medical ethics, prediction, and prognosis. Interdisciplinary perspectives, New York: Routledge, 2017.
Gahlot, S., Reddy, S. R. N., Kumar, D., “Review of smart health monitoring approaches with survey analysis and proposed framework”, IEEE Internet of Things Journal 6/2: 2116–2127, 2019.
Gao, X., et al., “Comparative validation of an epigenetic mortality risk score with three aging biomarkers for predicting mortality risks among older adult males”, International Journal of Epidemiology 48/6: 1958–1971, 2019.
Gardner, J., Brooks, C., “Student success prediction in MOOCs”, User Modelling and User-Adapted Interaction 28/2: 127–203, 2018.
Garreau, J., Radical evolution. The promise and peril of enhancing our minds, our bodies, and what it means to be human, New York: Broadway Books, 2005.
Glaessgen, E., Stargel, D., “The digital twin paradigm for future NASA and US Air Force vehicles”, in 53rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, American Institute of Aeronautics and Astronautics, 1–14, 2012.
Goldman, D., “Predicting depression”, The American Journal of Psychiatry 176/8: 598–599, 2019.
Goldman, D., “Predictive suicide”, The American Journal of Psychiatry 177/10: 881–883, 2020.
Gottfredson, D. M., Tonry, M., eds., Prediction and classification. Criminal justice decision making, Chicago: University of Chicago Press, 1987.
Greenfield, A., Radical technologies. The design of everyday life, London-New York: Verso, 2017.
Grieves, M., Digital twin. Manufacturing excellence through virtual factory replication, Washington, DC: NASA, 2014.
Grover, P., et al., “Polarization and acculturation in US election 2016 outcomes. Can twitter analytics predict changes in voting preferences”, Technological Forecasting & Social Change 145: 438–460, 2019.
Hacking, I., The emergence of probability. A philosophical study of early ideas about probability, induction and statistical inference, Cambridge: Cambridge University Press, 1975.
Hacking, I., The taming of chance, Cambridge: Cambridge University Press, 1990.
Hall, J. D., O’Connell, A. B., Cook, J. G., “Predictors of student productivity in biomedical graduate school admissions”, PloS One 12/1: 0169121, 2017.
Hamzelou, J., “Aging clocks aim to predict how long you’ll live”, MIT Technology Review 125/4: 14–15, 2022.
Han, B. C., Psychopolitics. Neoliberalism and new technologies of power, transl. by E. Butler, London-Brooklyn: Verso, 2017.
Hansson, S. O., “Decision making under great uncertainty”, Philosophy of the Social Sciences 26/3: 369–386, 1996.
Harari, Y. N., Homo deus. A brief history of tomorrow, London: Harvill Secker, 2016.
Helal, S., et al., “Predicting academic performance by considering student heterogeneity”, Knowledge-Based Systems 161: 134–146, 2018.
Holler, M., Uebernickel, F., Brenner, W., “Digital twin concepts in manufacturing industries. A literature review and avenues for further research”, in Proceedings of the 18th international conference on industrial engineering, Seoul, Korea, 1–9, 2016.
Horvath, S., “DNA methylation age of human tissues and cell types”, Genome Biology 14, article 3156: 2–19, 2013.
Horvath, S., Raj, K., “DNA methylation-based biomarkers and the epigenetic clock theory of ageing”, Nature Reviews Genetics 19: 371–384, 2018.
Horvath, S., et al., “Decreased epigenetic age of PBMCs from Italian semi-supercentenarians and their offspring”, Aging 12/7: 1159–1170, 2015.
Iftimiei, A., Iftimiei, M., “Law and IT technologies. Predictive justice”, Perspectives of Law and Public Administration 11/1: 169–175, 2022.
Ihde, D., Technics and praxis, Dordrecht-Boston: Reidel, 1978.
Ihde, D., Postphenomenology. Essays in the postmodern context, Evanston: Northwestern University Press, 1993.
Ihde, D., Bodies in technology, Minneapolis: University of Minnesota Press, 2001.
Ihde, D., Postphenomenology and technoscience. The Peking University lectures, Albany: State University of New York Press, 2009.
Jackson, G. M., Predicting malicious behaviour. Tools and techniques for ensuring global security, Indianapolis: Wiley, 2012.
Jackson, N., “Poll-based election forecasts will always struggle with uncertainty”, Sabato’s crystal ball, August 6, 2020a, no page numbers.
Jackson, N., “Trump-Biden 2020. The polls were off, but they’re not crystal balls and aren’t meant to be”, USA Today, December 5, 2020b, no page numbers.
Jean, N., et al., “Combining satellite imagery and machine learning to predict poverty”, Science 6301/353: 790–794, 2016.
Jones, D., et al., “Characterising the digital twin. A systematic literature review”, CIRP Journal of Manufacturing Science and Technology 29: 36–52, 2020.
Judge, T. A., Zapata, C. P., “The person-situation debate revisited. Effect of situation strength and trait activation on the validity of the big five personality traits in predicting job performance”, Academy of Management Journal 58/4: 1149–1179, 2015.
Kamel Boulos, M. N., Zhang, P., “Digital twins. From personalised medicine to precision public health”, Journal of Personalised Medicine 745/11: 1–12, 2021.
Kapoor, A., Burleson, W., Picard, R. W., “Automatic prediction of frustration”, International Journal of Human-Computer Studies 65/8: 724–736, 2007.
Kay, J., King, M., Radical uncertainty. Decision-making beyond the numbers, New York: Norton & Co., 2020.
Kelly, K., What technology wants, New York: Penguin, 2010.
Keynes, J. M., The collected writings of John Maynard Keynes, London: Macmillan, 1979.
Kitchin, R., The data revolution. Big data, open data, data infrastructures and their consequences, London: Sage, 2014.
Kitchin, R., Dodge, M., Code/space. Software and everyday life, Cambridge-London: MIT Press, 2011.
Kiviat, B., “The moral limits of predictive practices. The case of credit-based insurance scores”, American Sociological Review 84/6: 1134–1158, 2019.
Kleinberg, J., et al., “Prediction policy problems”, American Economic Review 105/5: 491–495, 2015.
Korenhof, P., Blok, V., Kloppenburg, S., “Steering representations. Towards a critical understanding of digital twins”, Philosophy & Technology 34: 1751–1773, 2021.
Krisha, D., “DNA testing for Eddy Curry? Creating a new constitutional protection”, University of Pennsylvania Journal of Constitutional Law 9/4: 1105–1129, 2007.
Kritzinger, W., et al., “Digital twin in manufacturing. A categorical literature review and classification”, IFAC-PapersOnLine 51/11: 1016–1022, 2018.
Kroes, P., Technical artefacts. Creations of mind and matter. A philosophy of engineering design, Dordrecht: Springer, 2012.
Kube, T., et al., “Rethinking post-traumatic stress disorder. A predictive processing perspective”, Neuroscience and Biobehavioural Reviews 113: 448–460, 2020.
Latour, B., Science in action. How to follow scientists and engineers through society, Milton Keynes: Open University Press, 1987.
Latour, B., We have never been modern, Cambridge: Harvard University Press, 1993 (1991).
Latour, B., Pandora’s hope. Essays on the reality of science studies, Cambridge: Harvard University Press, 1999.
Lavner, J. A., Karney, B. R., Bradbury, T. N., “Does couples’ communication predict marital satisfaction, or does marital satisfaction predict communication?”, Journal of Marriage and Family 78/3: 680–694, 2016.
Leong, L. Y., et al., “Predicting the antecedents of trust in social commerce. A hybrid structural equation modelling with neural network approach”, Journal of Business Research 110: 24–40, 2020.
Levine, M. E., “Modeling the rate of senescence. Can estimated biological age predict mortality more accurately than chronological age?”, The Journals of Gerontology 68/6: 667–674, 2013.
Levine, M. E., “Assessment of epigenetic clocks as biomarkers of aging in basic and population research”, Journals of Gerontology: Biological Sciences 75/3: 463–465, 2020.
Levine, M. E., True age. Cutting edge research to help turn back the clock, Hachette: Yellow Kite, 2022.
Levine, M. E., et al., “An epigenetic biomarker of aging for lifespan and healthspan”, Aging 10/4: 573–591, 2018.
Lin, Q., et al., “DNA methylation levels at individual age-associated CpG sites can be indicative for life expectancy”, Aging 8/2: 394–401, 2016.
Lindgren, S., Digital media and society, London: Sage, 2017.
Littlefield, M. M., Instrumental intimacy. EEG wearables and neuroscientific control, Baltimore: Johns Hopkins University Press, 2018.
Liu, M., et al., “Review of digital twin about concepts, technologies, and industrial applications”, Journal of Manufacturing Systems 58: 346–361, 2021.
Liu, Z., et al., “A new aging measure captures morbidity and mortality risk across diverse subpopulations from NHANES IV. A cohort study”, PloS Medicine 15/12: 1–20, 2018.
Lu, Y., et al., “Digital twin-driven smart manufacturing. Connotation, reference model, applications and research issues”, Robotics and Computer-Integrated Manufacturing 61: 1–14, 2020.
Lum, K., Isaac, W., “To predict and serve?”, Significance 13/5: 14–19, 2016.
Lupton, D., “M-health and health promotion. The digital cyborg and surveillance society”, Social Theory & Health 10/3: 229–244, 2012.
Lupton, D., “How do data come to matter? Living and becoming with personal data”, Big Data & Society 2: 1–11, 2018.
Lyall, B., “The ambivalent assemblages of sleep optimization”, Review of Communication 2/21: 144–160, 2021.
Lyall, B., Nansen, B., “Redefining rest. A taxonomy of contemporary digital sleep technologies”, Historical Social Research 48/2: 135–156, 2023.
Makin, S., “Predicting a pandemic”, Nature 7933/601: 42–44, 2022.
Marioni, R. E., et al., “DNA methylation age of blood predicts all-cause mortality in later life”, Genome Biology 25/1: 1–12, 2015.
Mayer-Schönberger, V., Cukier, K., Big data. A revolution that will transform how we live, work and think, London: John Murray, 2013.
McCue, C., Data mining and predictive analysis. Intelligence gathering and crime analysis, Amsterdam-Boston: Butterworth-Heinemann, 2007.
McDaniel, J. L. M., Pease, K. G., eds., Predictive policing and artificial intelligence, Abingdon-New York: Routledge, 2021.
Medvedeva, M., Wieling, M., Vols, M., “Rethinking the field of automatic prediction of court decisions”, Artificial Intelligence and Law 31: 195–212, 2023.
Menge, S. K., “Should players have to pass to play? A legal analysis of implementing genetic testing in the National Basketball Association”, Marquette Sports Law Review 17/2: 459–480, 2007.
Mittelstadt, B. D., et al., “The ethics of algorithms. Mapping the debate”, Big Data & Society 3/2: 1–21, 2016.
Mohammadi, N., Taylor, J. E., “Smart city digital twins”, 2017 IEEE Symposium Series on Computational Intelligence, IEEE, 1–5, 2017.
Moor, J., “The nature, importance and difficulty of machine ethics”, IEEE Intelligent Systems 21/4: 18–21, 2006.
Morgan, P. L., et al., “Kindergarten children’s executive functions predict their second-grade academic achievement and behaviour”, Child Development 90/5: 1802–1816, 2019.
Nansen, B., Mannell, K., O’Neill, C., “Senses and sensors of sleep. Mediation and disconnection in sleep architectures”, in Disentangling. The geographies of digital disconnection, ed. by A. Jansson, P. C. Adams, New York: Oxford University Press, 137–162, 2021.
Nilsson, N. J., The quest for artificial intelligence. A history of ideas and achievements, New York: Cambridge University Press, 2009.
Noble, S. U., Algorithms of oppression. How search engines reinforce racism, New York: New York University Press, 2018.
Nowotny, H., In AI we trust. Power, illusion and control of predictive algorithms, Cambridge-Medford: Polity Press, 2021.
O’Neil, C., Weapons of math destruction. How big data increases inequality and threatens democracy, New York: Crown, 2016.
Okegbile, S. D., et al., “Human digital twin for personalized healthcare. Vision, architecture and future directions”, IEEE Network 37/2: 262–269, 2023.
Olavsrud, T., “Identifying high-risk patients with predictive analytics”, CIO, July 9, 2019, no page numbers.
Parkinson, C., Kleinbaum, A. M., Wheatley, T., “Similar neural responses predict friendship”, Nature Communications 9/1: 1–14, 2018.
Pasquale, F., The black box society. The secret algorithms that control money and information, Cambridge: Harvard University Press, 2015.
Persson, A., “Implicit bias in predictive data profiling within recruitments”, in Privacy and identity management. Facing up to next steps, ed. by A. Lehmann et al., Cham: Springer, 2016, 212–230.
Peters, M. J., et al., “The transcriptional landscape of age in human peripheral blood”, Nature Communications 6, article 8570: 1–14, 2015.
Pirracchio, R., et al., “Mortality prediction in intensive care units with the super ICU learner algorithm (SICULA). A population-based study”, The Lancet. Respiratory Medicine 3/1: 42–52, 2015.
Pohl, K. M., et al., eds., Adolescent brain cognitive development neurocognitive prediction. First challenge, Cham: Springer, 2019.
Popa, E. O., et al., “The use of digital twins in healthcare. Socio-ethical benefits and socio-ethical risks”, Life Sciences, Society and Policy 17/6: 1–25, 2021.
Powers, M. R., Acts of God and man. Ruinations on risk and insurance, New York: Columbia Business School Publishing, 2011.
Preiss, J., “Predicting the impact of online news articles. Is information necessary? Application to COVID-19 articles”, Multimedia Tools and Applications 82/6: 8791–8809, 2023.
Rae, J. R., et al., “Predicting early-childhood gender transitions”, Psychological Science 30/5: 669–681, 2019.
Reece, A. G., Danforth, C. M., “Instagram photos reveal predictive markers of depression”, EPJ Data Science 6/1: 1–12, 2017.
Rhea, A. K., et al., “An external stability audit framework to test the validity of personality prediction in AI hiring”, Data Mining and Knowledge Discovery 36/6: 2153–2193, 2022.
Richter, K., Dumke, R. R., Modeling, evaluating, and predicting IT human resources performance, Boca Raton: CRC Press, 2015.
Robbins, R., “This company wants to analyse your saliva – to try to predict when you’ll die”, Stat, www.statnews.com/2017/03/13/insurance-dna-death-prediction/.
Rosenberg, N., Exploring the black box. Technology, economics and history, Cambridge: Cambridge University Press, 1994.
Rosenberger, R., Verbeek, P. P., eds., Postphenomenological investigations. Essays on human-technology relations, Lanham-Boulder-New York-London: Lexington Books-The Rowman & Littlefield Publishing Group, 2015.
Rosenfeld, A., Kraus, S., Predicting human decision-making. From prediction to action, Cham: Springer, 2018.
Rosser, G., et al., “Predictive crime mapping. Arbitrary grids or street networks?”, Journal of Quantitative Criminology 33/3: 569–594, 2017.
Sadowski, J., Too smart. How digital capitalism is extracting data, controlling our lives, and taking over the world, Cambridge: MIT Press, 2020.
Sahal, R., Alsamhi, S. H., Brown, K. N., “Personal digital twin. A close look into the present and a step towards the future of personalised healthcare industry”, Sensors 5918/22: 1–35, 2022.
Sanders, J., et al., “Understanding the aging process using epidemiologic approaches”, in The epidemiology of aging, ed. by A. Newman, J. Cauley, Dordrecht: Springer, 2012, 187–214.
Sanders, J. L., Newman, A. B., “Telomere length in epidemiology. A biomarker of aging, age-related disease, both, or neither?”, Epidemiologic Reviews 35/1: 112–131, 2013.
Scharre, P., Army of none. Autonomous weapons and the future of war, New York: Norton, 2018.
Schmack, K., et al., “Enhanced predictive signalling in schizophrenia”, Human Brain Mapping 38/4: 1767–1779, 2017.
Selzam, S., et al., “Predicting educational achievement from DNA”, Molecular Psychiatry 22/2: 267–272, 2017.
Seyfert, R., Roberge, J., eds., Algorithmic cultures. Essays on meaning, performance and new technologies, Abingdon-New York: Routledge, 2016.
Shahat, E., Hyun, C. T., Yeom, C., “City digital twin potentials. A review and research agenda”, Sustainability 13: 1–20, 2021.
Shapiro, A., “Reform predictive policing”, Nature 7638/541: 458–460, 2017.
Shrader-Frechette, K., Westra, L., eds., Technology and values, Lanham: Rowman & Littlefield, 1997.
Siegel, E., Predictive analytics. The power to predict who will click, buy, lie, or die, Hoboken: Wiley, 2012.
Singh, J. P., et al., “Predicting the ‘helpfulness’ of online consumer reviews”, Journal of Business Research 70: 346–355, 2017.
Sterzer, P., et al., “The predictive coding account of psychosis”, Biological Psychiatry 84/9: 634–643, 2018.
Stigler, S., The history of statistics. The measurement of uncertainty before 1900, Cambridge: Harvard University Press, 1986.
Tanwar, P., et al., Computational intelligence and predictive analysis for medical science. A pragmatic approach, Berlin-Boston: De Gruyter, 2021.
Tayebi, M. A., Social network analysis in predictive policing. Concepts, models and methods, Cham: Springer, 2016.
Tegmark, M., Life 3.0. Being human in the age of artificial intelligence, London: Allen Lane, 2017.
Telpaz, A., Webb, R., Levy, D. J., “Using EEG to predict consumers’ future choices”, Journal of Marketing Research 52/4: 511–529, 2015.
Tulchinsky, I., Mason, C. E., The age of prediction. Algorithms, AI, and the shifting shadows of risk, Cambridge: MIT Press, 2023.
Turing, A. M., “Computing machinery and intelligence”, Mind 59: 433–460, 1950.
Ukko, J., et al., “Digital twins’ impact on organisational control. Perspectives on formal vs social control”, Information Technology & People 35/8: 253–272, 2022.
Vallor, S., Technology and the virtues. A philosophical guide to a future worth wanting, Oxford-New York: Oxford University Press, 2016.
van de Poel, I., Goldberg, D. E., eds., Philosophy and engineering. An emerging agenda, Dordrecht: Springer, 2010.
van de Poel, I., Royakkers, L., Ethics, technology, and engineering. An introduction, Oxford-Malden: Wiley-Blackwell, 2011.
van der Valk, H., et al., “A taxonomy of digital twins”, Americas Conference on Information Systems 26: 1–10, 2020.
Wallach, W., Allen, C., Moral machines. Teaching robots right from wrong, Oxford: Oxford University Press, 2008.
Wallach, W., Asaro, P., eds., Machine ethics and robot ethics, London: Routledge, 2017.
Wayt Gibbs, W., “Biomarkers and aging. The clock-watcher”, Nature 508: 168–170, 2014.
Williams, S., Coveney, C., Gabe, J., “Medicalisation or customisation? Sleep, enterprise and enhancement in the 24/7 society”, Social Science & Medicine 79: 40–47, 2013.
Winner, L., Autonomous technology. Technics-out-of-control as a theme in political thought, Cambridge-London: MIT Press, 1977.
Xiao, J., Lou, Y., Frisby, J., “How likely am I to find parking? A practical model-based framework for predicting parking availability”, Transportation Research 112: 19–39, 2018.
Yarkoni, T., Westfall, J., “Choosing prediction over explanation in psychology. Lessons from machine learning”, Perspectives on Psychological Science 12/6: 1100–1122, 2017.
Zhang, Y., et al., “DNA methylation signatures in peripheral blood strongly predict all-cause mortality”, Nature Communications 8, article 14617: 1–11, 2017.
Zhavoronkov, A., et al., “Artificial intelligence for aging and longevity research. Recent advances and perspectives”, Ageing Research Reviews 49: 49–66, 2019.
Zuboff, S., The age of surveillance capitalism. The fight for a human future at the new frontier of power, New York: Public Affairs, 2020.

3 PREDICTION AND THE AUTOMATION OF OUR FUTURE

3.1  The fall of autonomy and the rise of automation

Interestingly enough, humans increasingly refer to themselves by replacing words resulting from self-perception and self-mastery (for instance, when they say that they feel good) with words resulting from engineering (for instance, when they say that their performance is optimal). Also, optimisation is increasingly addressed not in bodily and mental terms, but in engineering terms, as we have seen in Chapter 2: the words “feeling good”, “not feeling good”, “care”, “patient recovery” and “personal development” are increasingly replaced by the words “normal functioning”, “malfunctioning”, “predictive maintenance”, “performance optimisation” and “implementation of new functionality”. Again, the case of predictive technologies applied to healthcare is instructive, in that the replacement of words is striking when it comes to addressing human transience, from disease to death, as one of the most distinctive characteristics of human identity. We may happen to read that predictive technologies lead “to dramatic improvements […] in medical-system engineering, since the prediction capability allows planning repairs or maintenance, thus preventing disruptions and potentially costly breakdowns” (Barricelli, Casiraghi and Fogli, 2019: 167657). And we may happen to read that human transience, from disease to death, is described not only in terms of optimisation but also in terms of “‘efficiency’, ‘anomaly’, ‘deviation’, […] ‘better management and control’, […] ‘health management’, to ‘increase lifespan and success’” (Korenhof, Blok and Kloppenburg, 2021: 1756 and 1765). We may say that humans are described according to three phases. First, human transience is addressed as “‘anomaly’, ‘deviation’”. Second,
the solution to human transience is addressed as “‘better management and control’, […] ‘health management’”. Third, the objective, which is the ideal human identity, is “to ‘increase lifespan and success’”. And what seems to characterise both the solution to human transience and the ideal human identity is “efficiency”: the more efficient predictive technologies are, the more efficient humans are. From the idea of optimisation to the idea of “efficiency”, humans seem to be described as kinds of machines. When we consider the dictionary definition of the word “optimisation”, we learn that it means “the fact of optimising; making the best of anything”.1 Interestingly enough, we also learn that the word “optimisation” especially refers to a kind of computation, in that it is “a mathematical technique for finding a maximum or minimum value of a function”. And when we consider the dictionary definition of the word “efficiency”, we learn that it means “the quality of being able to do a task successfully, without wasting time or energy”.2 And, interestingly enough, we also learn that the word “efficiency” especially refers to a kind of computation, in that, in engineering, it “is the ratio between the amount of energy a machine needs to make it work, and the amount it produces”. Thus, on the one hand, we find the (valuing) idea according to which humans are optimised and efficient when they “mak[e] the best of anything” and are “able to do a task successfully”. On the other hand, we find the (devaluing) idea according to which humans “mak[e] the best of anything” and are “able to do a task successfully” when they are kinds of machines, specifically computing machines. More precisely, the idea of human success (“being able to do a task successfully”) correlates with the idea of not “wasting time or energy”, and even of taking less “energy” and giving more “energy”. But what is most striking is the radical overturning of the meaning of ideal human identity, which, in Western culture, has been correlated with success for millennia, but whose correlation with a kind of success coinciding with computational optimisation and efficiency is quite unprecedented. From Homer’s Odysseus to Dante in his Divine Comedy to Goethe’s Wilhelm Meister to Salinger’s Holden Caulfield, “making the best of anything” and “being able to do a task successfully” have meant almost the opposite of the kinds of optimisation and efficiency described above. Conversely, they have meant endless journeys, getting lost, countless dangers and countless mistakes – in Western culture, “making the best of anything” and “being able to do a task successfully” have for millennia meant almost the opposite of a kind of success coinciding with computational optimisation and efficiency as not “wasting time and energy”.

What, then, has happened to us? The answer I propose is that we started engineering ourselves. I use an active verb, and not a passive verb, on purpose. When a phenomenon is ubiquitous, it is hard to think of it as totally passive. It is easier to think that, for instance, whenever we obsessively transform our sleep into a matter of optimisation and efficiency, to the point that it is not an empty time to rest,
but a full-time job, we are willing to do it. Indeed, it is hard to think of the following phenomenon as totally passive torture (it may be torture, but it is meaningfully active, at least in part): we not only use but also design

customisable alarm sounds, manually set sleep/wake time, sleep aid sounds, bedtime reminder […] [and] the algorithmically determined ‘smart wake up’ feature […] [that] extends from assisting with sleep management to automating sleep-wake rhythms. Going beyond measurement, these apps offer a more direct ‘intervention’ into the timing of the user’s wake up at the most optimal point within the phases of the sleep cycle. […] [T]he ‘smartness’ of the app indicates a shift from disciplinary self-mastery to a more distributed and adaptive form of delegated monitoring and optimised intervention.
(O’Neill and Nansen, 2019: no page number)

Thus, what do we not only use but also design (and even pay dearly for)? It is easy to answer that we design, pay dearly for and use technology to do what is continuously repeated above: to “automat[e]” something of ourselves, to obtain “a more direct ‘intervention’ into” ourselves and to “shift from disciplinary self-mastery to […] delegated monitoring” – we design, pay dearly for and use technology to engineer ourselves, in that, the more engineered we are, the more unburdened we are. If it is true that, in Western culture, sleep has moved century after century from being an empty time to rest to being something to psychoanalyse to being a full-time job, it is also true that the “disciplinary self-mastery” of sleep has moved century after century from being the solution to human life’s burdens to being one of human life’s burdens.

If this makes sense, we may take a step forward and reflect upon the philosophical meaning of the engineering of ourselves. According to a self-tracker’s testimony, something analogous to what we have seen in Chapter 2 happens:

We (the apps and I) had co-constructed a digital model of my self, and here I was, managing myself, it seems, by proxy. The feedback from that digital model often took precedence over how I physically felt. When I didn’t eat ‘enough’ protein I felt weaker, and when I had too much sugar I felt fatter.
(Williams, 2013: 3)

But the self-tracker adds something interesting: “I’ve yet to decide: is this model pushing me closer in contact or further away from my self and my world?” (Williams, 2013: 3). Outwardly, technology (“this model”) is meant to increase one’s responsibility for oneself, starting with one’s self-perception and self-mastery (“pushing me closer in contact […] [with] my self and my world”). Yet, inwardly, the self-tracker’s question itself leads us to seriously
consider the opposite phenomenon: technology (“this model”) is meant to decrease one’s responsibility for oneself, starting with one’s self-perception and self-mastery (“pushing me […] further away from my self and my world”). If this makes sense, the phenomenon that emerges is analogous to what we have seen in Chapter 2: again, we design and use technology to unburden ourselves from our responsibility for ourselves, starting with our self-perception and self-mastery – again, we design and use technology to unburden ourselves from what makes us autonomous.

My key argument is that the philosophical meaning of the engineering of humans is precisely their desperate attempt to unburden themselves from the burden of their autonomy by trading it for technological automation, sometimes consciously and sometimes unconsciously.

The notion of autonomy is primary in the history of Western philosophy, from ancient philosophy to modern philosophy to contemporary philosophy.3 From an etymological perspective, “autonomy” means “living under one’s own laws” (Liddell and Scott, 1940), where “one’s own” results from αὐτός (transliterated as autos) and “laws” results from νόμος (transliterated as nomos). Kant’s notion of autonomy, on which the notion of autonomy I use in my work is based, translates the etymological perspective into a philosophical perspective, according to which autonomy “is the property of the will by which it is a law to itself” (Kant, 1785: 4, 440). The meaning of autonomy is clarified by its opposition to the meaning of “heteronomy”, which means “living under others’ laws”. And “others’ laws” are “contingent” (Kant, 1785: 4, 444), in that any “other” has its own “laws”, which are “hence unfit for an apodictic practical rule, such as moral rules must be” (Kant, 1785: 4, 444). Conversely, autonomy means that one’s decisions and actions result from “one’s own laws” – autonomy means the kind of self-mastery from which we unburden ourselves by designing and using technology (even though, according to Kant, our autonomy underpins our dignity, our morality and our freedom).4

And, most interestingly, Kant adds something that makes us reflect upon the relationship between autonomy and automation5: he uses the word from which the word “automation” results, i.e. αὐτόματον (transliterated as automaton). From an etymological perspective, it means “accident” (Liddell and Scott, 1940) and correlates with the following three words. First, the verb αὐτοματίζω (transliterated as automatizo), which means to “act of oneself, act offhand or unadvisedly, […] to be done spontaneously or at random, […] haphazard, […] [to] introduce the agency of chance, […] [to] happen of themselves, casually” (Liddell and Scott, 1940). Second, the noun αὐτοματισμός (transliterated as automatismos), which means “that which happens of itself, chance” (Liddell and Scott, 1940). Third, the noun αὐτοματία (transliterated as automatia), which means “the goddess of chance” (Liddell and Scott, 1940), specifically “a surname of Tyche or Fortuna, which
seems to characterise her as the goddess who manages things according to her own will, without any regard to the merit of man” (Smith, 1867). Thus, etymologically speaking, automation even stands opposed to autonomy. If there is automation, there is a kind of chance that is a kind of randomness, which is precisely what autonomy as “living under one’s own laws” cannot be – if automation rises, autonomy falls.

From a philosophical perspective, Kant uses the word automaton to distinguish cases in which freedom is authentic from cases in which freedom is not authentic. Specifically, whenever the word automaton applies, freedom is not authentic. The word automaton applies to the following two cases. First, when it comes to inanimate entities, which is the case for the “automaton materiale, when the machine is driven by matter” (Kant, 1788: 5, 97). Second, when it comes to animate entities, specifically humans, which is the case for the “automaton […] spirituale, when it is driven by representations” (Kant, 1788: 5, 97). More precisely, whenever humans act as an “automaton […] spirituale”, they are not authentically free, which means that they are not authentically autonomous, in that the reason for their acting is as “contingent” (which is the word Kant uses, as we have seen) as an “accident” is (which is the etymological meaning of automaton, as we have seen). Kant specifies that,

if the freedom of our will were none other than the latter (say, psychological and comparative but not also transcendental, i.e. absolute), then it would at bottom be nothing better than the freedom of a turnspit, which, when once it is wound up, also accomplishes its movements of itself.
(Kant, 1788: 5, 97)

Thus, philosophically speaking, automation even stands opposed to autonomy, to the point that we may translate the case of the self-tracker into the following words: whenever the reason for our acting is as “contingent”, “psychological and comparative” as “the apps” are, our freedom is “nothing better than the freedom of a turnspit”. Specifically, our acting is not autonomous, but heteronomous, in that we trade what is meant by “living under one’s own laws” (starting with “how I physically felt”) for what is meant by “living under others’ laws” (starting with the “proxy”, i.e. “[t]he feedback from that digital model”). Conversely, whenever the reason for our acting is as “apodictic” and “transcendental, i.e. absolute” as our self-perception and self-mastery are, our freedom is authentic – and we are autonomous.

Thus, why should we trade our autonomy, which can make us authentically free, for “the freedom of a turnspit”, which cannot make us authentically free? Kant gives us an insight when he develops the consequences of autonomy, specifically morality. If we trade our autonomy for “the freedom of a turnspit”, i.e. automation, the relationship between the cause of our action
and our action is analogous to a “mechanism of nature” (Kant, 1788: 5, 97), which is nothing more than a “necessity of events in time in accordance with the natural law of causality” (Kant, 1788: 5, 97). And, whenever our action is nothing more than the effect of a “mechanism”, a “necessity” and a “causality”, “no moral law is possible” (Kant, 1788: 5, 97), in that morality is underpinned by autonomy, as we have seen. And, whenever “no moral law is possible”, there is “no imputation” (Kant, 1788: 5, 97), which means that we are not “culpable and deserving of punishment” (Kant, 1788: 5, 100) if we fail. Conversely, if we do not trade our autonomy for “the freedom of a turnspit”, i.e. automation, the relationship between the cause of our action and our action is analogous to “a free causality” (Kant, 1788: 5, 100) as a “law” that is ours, and not a “mechanism of nature”. And, whenever our action is the effect of “a free causality” as a “law” that is ours, “moral law is possible”, there is “imputation” and we are “culpable and deserving of punishment” if we fail.

Thus, we may trade our autonomy, which can make us authentically free, for “the freedom of a turnspit”, which cannot make us authentically free, for the following reason: we do not want to bear the burdens of our autonomy, which implies a “moral law”, which implies an “imputation”, which implies that we are “culpable and deserving of punishment” – we do not want to bear the burdens of our autonomy, starting with being “culpable and deserving of punishment”, in that, especially in societies in which failure is almost unspeakable, they are more than ever unbearable.

For instance, let us start with sleep as the metaphor of death. Are we asked to be up and running 24/7 so as not to be failures? No panic. We can use technologies that optimise our sleep by paradoxically relieving us from the burdens of self-perception and self-mastery when it comes to our sleep (for instance, the most sophisticated mattresses, mattress covers, masks, watches and apps, as we have seen). Our sleep increasingly becomes automated. And, whenever we happen to fail anyway (also as a paradoxical consequence of our increasingly atrophied self-perception and self-mastery), we can say that it is not our fault: it is the fault of the technologies we used – and technology becomes our best ally to the point that it even becomes our ready-to-use scapegoat.

And let us move from sleep as the metaphor of death to aging as the cause of death. Are we asked not to age so as not to be failures? No panic. We can use technologies that optimise our aging by paradoxically relieving us from the burdens of self-perception and self-mastery when it comes to our aging (for instance, the most sophisticated aging clocks, as we have seen, but also several other technologies, from cosmetic surgery to genetic engineering to cryonics). Our aging increasingly becomes automated. And, whenever we happen to fail anyway (also as a paradoxical consequence of our increasingly atrophied self-perception and self-mastery), we can say that it is not our fault: again, it is the fault of the technologies we used – and, again, technology
becomes our best ally to the point that it even becomes our ready-to-use scapegoat.

Finally, let us move from aging as the cause of death to death itself, specifically its prediction. Why should we trade our autonomy for “the freedom of a turnspit”, i.e. automation? The answer I will develop in the following section is that there is hardly anything more challenging than our future when it comes to judging between succeeding and failing – more precisely, there is hardly anything more challenging than our future when it comes to making sense of our present as the “free causality” of our future successes and failures. Again, the automation of our (predicted) future may serve as our best ally, i.e. our ready-to-use scapegoat.

3.2  Oedipus, Macbeth and the automation of our (predicted) future

Contemporary philosophy shows that the present time is characterised by a more and more challenging idea of autonomy, in that, especially in neoliberal societies, it has been taken to the extreme, year by year, by a more and more challenging idea of individualism,6 according to which humans as single individuals prevail over humans as societies (see at least Heidegger, 1927; Hinchman, 1996).7 And, if it is true that our rise as single individuals may imply our greatest individual success in favourable circumstances, it is also true that our fall as single individuals may imply our greatest individual failure in unfavourable circumstances – and the focus of my work is precisely our use of predictive technologies as our ready-to-use scapegoats when it comes to escaping from our greatest individual failures, in that their burdens are more than ever unbearable.

I propose to use two thought experiments to develop my argument with further theoretical tools.8 The first thought experiment is based on Sophocles’ Oedipus as what may be considered the greatest literary example of negative predictions when it comes to one’s individual future. The second thought experiment is based on Shakespeare’s Macbeth as what may be considered the greatest literary example of positive predictions when it comes to one’s individual future. Before moving to the two thought experiments, two clarifications are worth making. First, as a scholar of Western philosophy, I cannot extend my focus to other cultures (yet, I hope that my reading may offer hints towards understanding something that may characterise the present time in general, especially in a globalised world). Second, even though the first tragedy belongs to ancient culture and the second tragedy belongs to modern culture, they have deeply influenced contemporary culture, from their continuous study to their continuous theatrical and cinematographic versions, to the point that they can serve as powerful archetypes of human acting even today.


The objectives of the two thought experiments are the following. First, to better understand what the prediction of one’s individual future means when it comes to events that are impactful and lifechanging, from negative events (as in the case of Oedipus) to positive events (as in the case of Macbeth) to the specific case of death (as in the case of emerging technologies). Second, to better understand why we are willing to be predicted to the point that we endanger our prerogative of acting against all odds – more precisely, why we are willing to trade our autonomy for technological automation to the point that we automate our future.

In Chapter 1, I started introducing the case of Oedipus (see, again, Soph. OT 787–800), which is masterfully represented by Sophocles as follows:

I went to Delphi […] and Phoebus […] set forth other things, full of sorrow and terror and woe: that I was fated to defile my mother’s bed, that I would reveal to men a brood which they could not endure to behold, and that I would slay the father that sired me. When I heard this, I turned in flight from the land of Corinth, from then on thinking of it only by its position under the stars, to some spot where I should never see fulfilment of the infamies foretold in my evil fate. And on my way I came to the land in which you say that this prince perished.

The core of Oedipus’ tragedy is that the cause of the fulfilment of the prediction is precisely his attempt not to make it come true after having been predicted – the core of Oedipus’ tragedy is precisely the prediction of his individual future.

Let us start from our first objective, which is to better understand what the prediction of one’s individual future means when it comes to (negative) events that are impactful and lifechanging. More precisely, let us imagine being Oedipus and reflect upon our first objective from a philosophical perspective. We may imagine proceeding as follows:

1 Indeed, the cause of our tragedy is the prediction of our future. Specifically, the reason why we murder our father, which is the first negative event of a series of negative events, is that we meet our biological father (we “came to the land in which you say that this prince perished”), whose true identity we do not know. The reason why we meet our biological father is that we leave our adoptive father (we “turned in flight from the land of Corinth”), whose true identity we do not know. And the reason why we leave our adoptive father is the prediction of impactful and lifechanging negative events from which we want to escape (“Phoebus […] set forth other things, full of sorrow and terror and woe”). Thus, we may say that the prediction is the cause and our tragedy is the effect.


2 Can we act differently? Yes, in that we can imagine two different scenarios.
  a In the first case, there is no prediction. Thus, we have no reason to leave our adoptive father, which is the reason why we meet our biological father and murder him. Is there any certainty that we do not murder our father anyway? No, in that we may happen to meet and murder him for a different reason. For instance, he may happen to meet our adoptive father and we may happen to misunderstand their meeting and murder the former to protect the latter. Thus, we may say that, if there is no prediction, the murder of our father by our hands is not certain, as it is in the case of the prediction told by Sophocles. Conversely, it is uncertain, which means that it is nothing more than possible.
  b In the second case, there is prediction, but we act differently, from being extremely inactive to being extremely proactive. In the first sub-scenario, we decide to minimise our actions for fear of ending up murdering our father. For instance, we isolate ourselves from others. Is there any certainty that we do not murder our father anyway? No, in that, for instance, he may happen to do what follows: first, to learn from Phoebus that he will be murdered by us; second, to try to murder us so as not to be murdered by us; and, third, to be accidentally murdered by us in his assault. In the second sub-scenario, we decide to maximise our actions for fear of ending up murdering our father. For instance, we commit suicide. Is there any certainty that we do not murder our father anyway? No, in that, for instance, if he happens to learn that we committed suicide so as not to murder him, he may happen to suffer to the point of committing suicide as an indirect effect of our suicide. Thus, we may say that, if there is prediction, but we act differently, the murder of our father by our hands is not certain, as it is in the case of the prediction told by Sophocles. Conversely, it is uncertain, which means that it is nothing more than possible.
3 Finally, we may say that two issues are worth noting.
  a First, there is no certainty when it comes to escaping from tragic events. Indeed, even though we can move from certainty (in the case of the prediction told by Sophocles) to uncertainty (in the cases of our thought experiment) when it comes to suffering tragic events, we cannot move from uncertainty to certainty when it comes to escaping from tragic events, which are uncertain, i.e. possible, anyway.
  b Second, a huge difference emerges anyway. If there is prediction, our decisions and actions are severely undermined, whatever we do. From murdering anyway to isolating ourselves from others to committing suicide, even though we do not end up murdering, our decisions and actions are severely undermined to the point that we may think of
them as nothing more than the effects of heteronomous causes that drastically decrease their possibilities.

Thus, we may say that, if impactful and lifechanging negative events are predicted, our future is severely undermined to the point that we may think of it as nothing more than the effect of heteronomous causes that drastically decrease its possibilities. More precisely, the reason why our future is nothing more than the effect of heteronomous causes is that, if we know that our future will be “full of sorrow and terror and woe” according to the most reliable tools we have at our disposal (Phoebus, in our case), we are literally desperate. And the more desperate we are, the greater is the risk of falling into Kant’s notion of heteronomy, which means that our decisions and actions result from something “contingent”, “psychological and comparative”, and not from something “apodictic” and “transcendental, i.e. absolute”. Indeed, our future is nothing more than the effect of our despair, specifically our desperate attempt not to make the prediction come true.

What we can learn from the first thought experiment is that the prediction of impactful and lifechanging negative events may severely undermine one’s future – from a philosophical perspective, undermining means drastically decreasing possibilities, which means decreasing uncertainty (as what gives possibilities) and increasing certainty (as what takes away possibilities), which finally means undermining one’s autonomy.

The analogy between the scenarios of the first thought experiment and the specific case of death (as in the case of emerging technologies) is quite clear: in both cases, impactful and lifechanging negative events are predicted by the most reliable tools we have at our disposal (from Phoebus to the most sophisticated technologies). And, in both cases, our future may be nothing more than the effect of our despair. Finally, in both cases, uncertainty (as what gives possibilities) may decrease and certainty (as what takes away possibilities) may increase – which means that our autonomy may fall and our automation, specifically the automation of our future as the effect of heteronomous causes, may rise.

The fall of our autonomy and the rise of our automation is the focus of our second objective, which is to better understand why we are willing to be predicted to the point that we endanger our prerogative of acting against all odds – more precisely, why we are willing to trade our autonomy for technological automation to the point that we automate our future. Let us imagine being Oedipus and reflect upon our second objective from a philosophical perspective. We may imagine proceeding as follows:

1 Indeed, our tragedy results from our attempt to escape from uncertainty. Specifically, we, as Oedipus, the king of Thebes, are responsible for saving the city from a deadly plague. But we do not know what to do. Thus,
we ask for Phoebus’ prediction, as the most reliable tool we have at our disposal, to move from uncertainty to certainty and know what to do to fulfil our responsibilities and duties.
2 We may go further and say that, indeed, the reason why we want to move from uncertainty to certainty is precisely that we are overwhelmed by the divergence between what we should do and what we can do. On the one hand, our responsibilities and duties are demanding to the point that we should save Thebes. On the other hand, we cannot do it, in that we do not know the cause of the deadly plague and, thus, its remedy. We may say that we are desperately stuck between demanded performance and unpreparedness. More precisely, we are desperately stuck between the sense of our identity, which is strictly correlated with the kind of performance that is demanded of us, and our unpreparedness, which is strictly correlated with the kind of uncertainty that challenges the fulfilment of our responsibilities and duties. Even more precisely, we are desperately stuck because we cannot make sense of our identity as the kind of performance that is demanded of us when it comes to our expectations and others’ expectations about the fulfilment of our responsibilities and duties. Thus, prediction is our last resort to move from uncertainty to certainty and, consequently, from unpreparedness to preparedness – prediction is our last resort to make sense of our identity by living up to the kind of performance that is demanded of us when it comes to our expectations and others’ expectations about the fulfilment of our responsibilities and duties.
3 We may go even further and say that, indeed, we ask for Phoebus’ prediction because it can drive and, thus, reactivate us. We can restart deciding and acting, which means that we can restart living up to the kind of identity that is demanded of us. More precisely, there is a sense in which prediction can unburden us from the unbearable burden of making sense of our identity. Indeed, whenever we cannot make sense of our identity by living up to the kind of performance that is demanded of us when it comes to our expectations and others’ expectations about the fulfilment of our responsibilities and duties, prediction can unburden us from the unbearable burden of making sense of our identity by driving and, thus, reactivating our decisions and actions – whenever we cannot make sense of our identity, prediction can unburden us from its unbearable burden by driving and, thus, reactivating our decisions and actions through their automation.

Thus, we may add that what we can learn from the first thought experiment is that we have a strong reason to be willing to be predicted to the point that we endanger our prerogative of acting against all odds: the more we decrease uncertainty (as what gives possibilities) and increase certainty (as what takes
away possibilities), the less the burden of making sense of our identity is – the more automated we are, the less desperately stuck we are when it comes to living up to the kind of identity that is demanded of us, even if we trade our autonomy for technological automation (from Phoebus to the most sophisticated technologies).

Analogies between the scenarios of the first thought experiment and the case of emerging technologies, and even the specific case of death, are quite clear. We may easily happen to be desperately stuck between demanded performance and unpreparedness. For instance, fulfilling our responsibilities and duties may mean challenging decisions and actions in cases in which uncertainties surpass certainties, from deciding whom to fire and hire as entrepreneurs to deciding whom not to treat and treat as physicians. In both cases, we may be overwhelmed by the fear of making mistakes that may have a huge impact on both our own life and others’ lives, from firing and hiring the wrong employees to not treating and treating the wrong patients. And, in both cases, we may not be capable of making sense of our identity by living up to the kind of performance that is demanded of us when it comes to our expectations and others’ expectations about the fulfilment of our responsibilities and duties. More precisely, we may dramatically fail as entrepreneurs, in the first case, and as physicians, in the second case, not only professionally, in that we may make hugely impactful, and even irremediable, mistakes, but also ethically, in that we may totally paralyse ourselves with fear and, thus, be stuck in a frustrating, shameful and demeaning state of inaction.

Yet, most interestingly, technology emerges as our best ally, and even our ready-to-use scapegoat. If we are entrepreneurs, we can rely, for instance, on hiring algorithms’ predictions. If we are physicians, we can rely, for instance, on medical diagnosis algorithms’ predictions. In both cases, we obtain at least two great advantages from technological predictions. First, we obtain the reactivation of our decisions and actions – again, whenever we cannot make sense of our identity, both professionally and ethically, technological predictions can unburden us from its unbearable burden by driving and, thus, reactivating our decisions and actions through their automation. Second, we obtain a further unburdening through scapegoating, in that we are not desperately alone when we decide and act. Indeed, whenever our decisions and actions are driven and, thus, reactivated through their automation, i.e. by predictive technologies, if something goes wrong, we may say that it is not our fault: again, it is the fault of the technologies we used – if we dramatically fail, our failure is the automated, i.e. engineered, effect of heteronomous causes, which is something far lighter than the burden of failing as autonomous humans.

The case of Macbeth, who meets three witches who predict his future as a series of impactful and lifechanging positive events, is masterfully represented by Shakespeare as follows: “First witch: ‘All hail, Macbeth! Hail to thee,
Thane of Glamis!’. Second witch: ‘All hail, Macbeth! Hail to thee, Thane of Cawdor!’. Third witch: ‘All hail, Macbeth, that shalt be king hereafter!’” (Shakespeare Mac. ACT 1, SC 3, 51–53). According to the predictions, Macbeth, who is “Thane of Glamis”, will be, first, “Thane of Cawdor” (which immediately comes true) and, second, “king”, which pushes his ambition to the point that he takes extreme action, starting with murdering the king to replace him. As in the case of Oedipus, the core of Macbeth’s tragedy is precisely the prediction of his individual future, even though, in the case of Macbeth, his tragedy is caused by his attempt to make the prediction come true. It is worth noting that, even though Macbeth does not ask for the predictions quoted above (in Act I), he asks for further predictions on purpose (in Act III), when the three witches predict who can be his enemy, who cannot defeat him and when he can be defeated.

Again, let us start from our first objective, which is to better understand what the prediction of one’s individual future means when it comes to (positive) events that are impactful and lifechanging. More precisely, let us imagine being Macbeth and reflect upon our first objective from a philosophical perspective. We may imagine proceeding as follows:

1 Indeed, the cause of our tragedy is the prediction of our future. Specifically, the reason why we murder the king, which is the first negative event of a series of negative events, is that our ambition is pushed by the prediction of impactful and lifechanging positive events as certain, i.e. proved by both the most reliable tools we have at our disposal (the three witches, in our case) and the immediate coming true of the first prediction. Thus, we may say that the prediction is the cause and our tragedy is the effect.
2 Can we act differently? Yes, in that we can imagine two different scenarios.
  a In the first case, there is no prediction. Thus, we have no reason to think of ourselves as the future king, which is the reason why we murder the king. Is there any certainty that we do not murder the king anyway? No, in that we may happen to murder him for a different reason, i.e. accidentally. Thus, we may say that, if there is no prediction, the murder of the king by our hands is not certain, as it is in the case of the prediction told by Shakespeare. Conversely, it is uncertain, which means that it is nothing more than possible.
  b In the second case, there is prediction, but we act differently, from being extremely inactive (by isolating ourselves from others so as not to become a murderer) to being extremely proactive (by committing suicide so as not to become a murderer). Is there any certainty that we do not murder the king anyway? No, in that, for instance, we may happen to become an indirect murderer if Lady Macbeth, as our wife, happens to murder the king by her own hands (for ambition, in the first sub-scenario, and for

78  Prediction and the automation of our future

revenge, in the second sub-scenario). Thus, we may say that, if there is prediction, but we act differently, the murder of the king by our hands is not certain, as it is in the case of the prediction told by Shakespeare. Conversely, it is uncertain, which means that it is nothing more than possible.

3 Finally, we may say that two issues are worth noting.

a First, as in the case of Oedipus, there is no certainty when it comes to escaping from tragic events.

b Second, as in the case of Oedipus, if there is prediction, our decisions and actions are severely undermined, whatever we do.

Thus, we may say that, if impactful and lifechanging positive events are predicted, our future is severely undermined to the point that we may think of it as nothing more than the effect of heteronomous causes that drastically decrease its possibilities. More precisely, if impactful and lifechanging positive events are predicted by the most reliable tools we have at our disposal (the three witches, in our case), we are overexcited. And the more overexcited we are, the greater the risk of falling into Kant's notion of heteronomy, which means that our decisions and actions result from something "contingent", "psychological and comparative", and not from something "apodictic" and "transcendental, i.e. absolute". Indeed, our future is nothing more than the effect of our overexcitation, specifically our overexcited attempt to make the prediction come true.

What we can learn from the second thought experiment is that the prediction of impactful and lifechanging positive events may severely undermine one's future, as in the case of negative events. And the analogy between the two thought experiments goes further. If we consider the case of emerging technologies, and even the specific case of death, whenever we are predicted by the most reliable tools we have at our disposal (from Phoebus to the three witches to the most sophisticated technologies), our future may be nothing more than the effect of heteronomous causes, from despair to overexcitation – which means, again, that our autonomy may fall and our automation, specifically the automation of our future as the effect of heteronomous causes, may rise.

Finally, let us imagine being Macbeth and reflect upon our second objective from a philosophical perspective. More precisely, let us reflect upon the reason why we are willing to trade our autonomy for technological automation (from Phoebus to the three witches to the most sophisticated technologies) to the point that we automate our future. We may imagine proceeding as follows:

1 Indeed, our tragedy results from our attempt to escape from uncertainty. Specifically, we, as Macbeth, ask for further predictions on purpose (in
Act III) because we do not know what to do after having murdered and replaced the king. Thus, we ask for the three witches' prediction as the most reliable tool we have at our disposal to move from uncertainty to certainty and know what to do.

2 We may go further and say that, indeed, the reason why we want to move from uncertainty to certainty is precisely that we are overwhelmed by uncertainties. After having murdered and replaced the king, our fear of being found guilty and defeated by the king's avengers grows to the point that we are desperately stuck between our present rise as the king and uncertainties that may mean our future fall, and we even lose our mental balance. More precisely, we are desperately stuck between the sense of our identity, which is strictly correlated with our rise as the king, and uncertainties that make us incapable of performing as the king both in the present (because we even lose our mental balance) and in the future (because we fear being found guilty and defeated). Even more precisely, we are desperately stuck because we cannot make sense of our identity as the king when it comes to our expectations and others' expectations both in the present and in the future. Thus, prediction is our last resort to move from uncertainty to certainty and, consequently, from fear, and even mental imbalance, to being in all respects the king – prediction is our last resort to make sense of our identity as the king by living up to our expectations and others' expectations both in the present and in the future.

3 We may go even further and say that, indeed, we ask for the three witches' prediction because it can drive and, thus, reactivate us. We can restart deciding and acting, which means that we can restart living up to the kind of identity we want. More precisely, there is a sense in which prediction can unburden us from the unbearable burden of making sense of our identity. Indeed, whenever we cannot make sense of our identity by living up to the kind of identity we want when it comes to our expectations and others' expectations both in the present and in the future, prediction can unburden us from the unbearable burden of making sense of our identity by driving and, thus, reactivating our decisions and actions – whenever we cannot make sense of our identity, prediction can unburden us from its unbearable burden by driving and, thus, reactivating our decisions and actions through their automation.

Thus, we may add that what we can learn from the second thought experiment is that we have a strong reason to be willing to be predicted to the point that we endanger our prerogative of acting against all odds: not knowing what to do may lead us to fear, and even mental imbalance, to the point that certainty underpinning extreme action may be more attractive than uncertainty underpinning no action – certainty automating extreme action may be attractive whenever we are desperately stuck when it comes to
living up to the kind of identity we want, no matter if we trade our autonomy for technological automation (from Phoebus to the three witches to the most sophisticated technologies).

Analogies between the scenarios of the second thought experiment and the case of emerging technologies, and even the specific case of death, are quite clear. We may easily happen to be desperately stuck between our present rise and uncertainties that may mean our future fall. For instance, our present rise may be a remarkable growth of our financial capital because of advantageous stock market trends and our future fall may be a remarkable loss of our financial capital because of disadvantageous stock market trends. Our fear may grow to the point that we even lose our mental balance. And we may not be capable of making sense of our identity by living up to the kind of identity we want when it comes to our expectations and others' expectations both in the present and in the future. More precisely, we may dramatically fail as responsible adults who should be capable of buying a house for their family and paying for their children's education. And failing as responsible adults means failing both professionally and ethically, from failing to be a reliable financial operator to failing to be a reliable partner and parent.

Yet, most interestingly, technology emerges as our best ally, and even our ready-to-use scapegoat. We can rely, for instance, on financial algorithms' predictions. We obtain at least two great advantages from technological predictions. First, we obtain the reactivation of our decisions and actions – again, whenever we cannot make sense of our identity, both professionally and ethically, technological predictions can unburden us from its unbearable burden by driving and, thus, reactivating our decisions and actions through their automation. Second, we obtain a further unburdening scapegoat, in that we are not desperately alone when we decide and act. Indeed, whenever our decisions and actions are driven and, thus, reactivated through their automation, i.e. by predictive technologies, if something goes wrong, we may say that it is not our fault: again, it is the fault of the technologies we used – if we dramatically fail, our failure is the automated, i.e. engineered, effect of heteronomous causes, which is something far lighter than the burden of failing as autonomous humans.

Before moving to further developments of the two thought experiments in the following chapter, let us stress the meaning of Shakespeare's words quoted in Chapter 1: "Present fears are less than horrible imaginings. My thought, whose murder yet is but fantastical, shakes so my single state of man that function is smothered in surmise, and nothing is but what is not" (Shakespeare Mac. ACT 1, SC 3, 150–155). There is hardly anything better than Shakespeare's words to express predictions' enormous power over humans. It seems that facing the future as something predicted, i.e. "present" and somehow certain, both when it is positive (for instance, being the king) and when it is negative (for instance, being defeated), can be more
attractive than facing the future as something open, i.e. "imaginings" that are "fantastical" and somehow uncertain – it seems that uncertainty can be more "horrible" than predicted, i.e. somehow certain, negative events, in that "nothing is but what is not". Yet, uncertainty is precisely what gives us possibilities, as we have seen – uncertainty is precisely what gives us an open future, which may mean not only "horrible imaginings" but also, and especially, possibilities. Thus, why are we willing to trade an open future (as resulting from our autonomy) for a predicted future (as resulting from our automation)? The question is even more urgent if we consider the price we pay for trading the former for the latter. It is no coincidence that the history of Western culture is characterised by several warnings about knowledge and prediction, as we have seen in Chapter 1. And both Sophocles and Shakespeare masterfully show that the price we pay is exceedingly high: prediction means automation in the sense that our decisions and actions are undermined to the point that they are nothing more than effects of heteronomous causes – and our future is nothing more than the automated, i.e. engineered, effect of heteronomous causes, from our despair to our overexcitation.

The prediction of our death may show an analogous price to pay, as we have seen in Chapter 2. If we know that, according to the most reliable tools we have at our disposal, we will die of a heart attack in five years, whatever our reaction may be, our future loses its openness to almost endless possibilities. And the most important thing to understand is that, the more our future loses its openness, the more our present loses its (pleasant) exercise of our reflection, imagination, planning and decision-making to make sense of both our present life and, especially, our future life – the more our future loses its openness, the less we have reason to (pleasantly) make sense of both our present life and, especially, our future life, starting with working on the kind of human being we ideally want to become as the future evolution of our present identity.

Thus, why are we willing to trade the openness of our future for its prediction, i.e. automation? The insight that emerges from the two thought experiments is that the answer may be correlated with the last point we have made, which is sensemaking. Thus, let us move to the following chapter to try to answer the following question: if it is true that the more predicted, i.e. automated, our future is, the less we have reason to practice sensemaking, is it also true that the practice of sensemaking is precisely the core of the unbearable burden from which we desperately attempt to unburden ourselves through predictive technologies?

Notes

1 See, also for the following quote, www.collinsdictionary.com/dictionary/english/optimization (accessed: October 2, 2023).
2 See, also for the following quote, www.collinsdictionary.com/dictionary/english/efficiency (accessed: October 2, 2023).
3 From the ancient notion of autonomy as rational self-determination leading to morality to the contemporary notion of autonomy strengthening a kind of individualism (see at least Dworkin, 1988; Frankfurt, 1988; Ekstrom, 1993; Bratman, 2007. See also Haworth, 1986; Christman, 1989 and 1992; Mele, 1991 and 1995; Benson, 1994; May, 1994; Berofsky, 1995; Lehrer, 1997; Schneewind, 1998; Cuypers, 2001; Frankel Paul, Miller and Paul, 2003; Taylor, 2005; Darwall, 2006; Killmister, 2017). The notion of autonomy I use in my work is based on Kant's notion of autonomy (see Kant, 1785 and 1788), which has deeply influenced contemporary philosophers (see at least Hill, 1991; Korsgaard, 1996a and 1996b; Guyer, 2003; Taylor, 2005; Reath, 2006; Deligiorgi, 2012; Sensen, 2012). As far as the relationship between autonomy and freedom is concerned, see at least Berlin, 1958 and 1969; Watson, 1975; Raz, 1986; Sen, 1999. As far as the relationship between autonomy and society is concerned, see at least Meyers, 1989; Oshana, 1998; Friedman, 2002; Lehrer, 2003; Christman and Anderson, 2005; Christman, 2009.
4 As far as dignity is concerned, Kant argues that autonomy is "the ground of the dignity of human nature and of every rational nature" (Kant, 1785: 4, 436). As far as morality is concerned, he argues that we "cognise autonomy of the will along with its consequence, morality" (Kant, 1785: 4, 453). As far as freedom is concerned, he argues that "freedom and the will's own lawgiving are both autonomy" (Kant, 1785: 4, 450).
5 As far as the contemporary ubiquitous use of the word "automation" is concerned, human work's automation was key, starting with Ford Motor Company's decision, in 1947, to "set up an 'automation department'. […] [The] new unit hoped to increase the use of existing technologies – hydraulic, electromechanical, and pneumatic – to speed up operations and enhance productivity on the assembly line" (Rifkin, 1995: 66). But both the etymological meaning and the philosophical meaning of the word "automation" are far more complex than its meaning in human work. As far as the relationship between automation and the present time is concerned, specifically emerging technologies, see at least Carr, 2014; Purves et al., 2015; Reagle, 2019; Till, 2019.
6 See note 3. I will work on the idea of individualism in Chapter 6.
7 See at least Lévinas, 1961, for a critical perspective.
8 The groundwork of the two thought experiments I propose is Chiodo, forthcoming.

References

Barricelli, B., Casiraghi, E., Fogli, D., "A survey on digital twin. Definitions, characteristics, applications, and design implications", IEEE Access 7: 167653–167671, 2019.
Benson, P., "Autonomy and self-worth", The Journal of Philosophy 91/12: 650–668, 1994.
Berlin, I., Two concepts of liberty, Oxford: Oxford University Press, 1958.
Berlin, I., Four essays on liberty, Oxford: Oxford University Press, 1969.
Berofsky, B., Liberation from self. A theory of personal autonomy, Cambridge: Cambridge University Press, 1995.
Bratman, M. E., Structures of agency. Essays, Oxford: Oxford University Press, 2007.
Carr, N. G., The glass cage. Automation and us, New York: Norton & Co., 2014.
Chiodo, S., "From Phoebus to witches to death clocks. Why we are taking predictive technologies to the extreme", AI & Society, forthcoming.
Chiodo, S., "Where ignorance is bliss, 'tis folly to be wise", forthcoming.
Christman, J., ed., The inner citadel. Essays on individual autonomy, Oxford-New York: Oxford University Press, 1989.
Christman, J., "Autonomy and personal history", Canadian Journal of Philosophy 21: 1–24, 1992.
Christman, J., The politics of persons. Individual autonomy and socio-historical selves, Cambridge: Cambridge University Press, 2009.
Christman, J., Anderson, J., eds., Autonomy and the challenges to liberalism, Cambridge: Cambridge University Press, 2005.
Cuypers, S. E., Self-identity and personal autonomy, Hampshire: Ashgate, 2001.
Darwall, S., "The value of autonomy and autonomy of the will", Ethics 116/2: 263–284, 2006.
Deligiorgi, K., The scope of autonomy. Kant and the morality of freedom, Oxford: Oxford University Press, 2012.
Dworkin, G., The theory and practice of autonomy, Cambridge: Cambridge University Press, 1988.
Ekstrom, L. W., "A coherence theory of autonomy", Philosophy and Phenomenological Research 53: 599–616, 1993.
Frankel Paul, E., Miller, F., Paul, J., eds., Autonomy, Cambridge: Cambridge University Press, 2003.
Frankfurt, H. G., ed., The importance of what we care about, Cambridge: Cambridge University Press, 1988.
Friedman, M., Autonomy, gender, politics, Oxford: Oxford University Press, 2002.
Guyer, P., "Kant on the theory and practice of autonomy", in Autonomy, ed. by E. Frankel Paul, F. D. Miller and J. Paul, Cambridge: Cambridge University Press, 2003, 70–98.
Haworth, L., Autonomy. An essay in philosophical psychology and ethics, New Haven: Yale University Press, 1986.
Heidegger, M., Being and time. A translation of Sein und Zeit, ed. by J. Stambaugh, Albany: State University of New York Press, 1996 (1927).
Hill, T., Autonomy and self-respect, New York: Cambridge University Press, 1991.
Hinchman, L., "Autonomy, individuality and self-determination", in What is Enlightenment? Eighteenth-century answers and twentieth-century questions, ed. by J. Schmidt, Berkeley: University of California Press, 1996, 488–516.
Kant, I., Groundwork of the metaphysics of morals, ed. by M. J. Gregor, Cambridge: Cambridge University Press, 1998 (1785).
Kant, I., Critique of practical reason, ed. by M. J. Gregor, Cambridge: Cambridge University Press, 1996 (1788).
Killmister, S., Taking the measure of autonomy. A four-dimensional theory of self-governance, Abingdon: Routledge, 2017.
Korenhof, P., Blok, V., Kloppenburg, S., "Steering representations. Towards a critical understanding of digital twins", Philosophy & Technology 34: 1751–1773, 2021.
Korsgaard, C. M., Creating the kingdom of ends, New York: Cambridge University Press, 1996a.
Korsgaard, C. M., The sources of normativity, New York: Cambridge University Press, 1996b.
Lehrer, K., Self-trust. A study in reason, knowledge, and autonomy, Oxford: Oxford University Press, 1997.
Lehrer, K., "Reason and autonomy", Social Philosophy and Policy 20/2: 177–198, 2003.
Lévinas, E., Totality and infinity, ed. by A. Lingis, Pittsburgh: Duquesne University Press, 1969 (1961).
Liddell, H. G., Scott, R., A Greek-English lexicon, revised by Sir H. Stuart Jones, Oxford: Clarendon Press, 1940.
May, T., "The concept of autonomy", American Philosophical Quarterly 31/2: 133–144, 1994.
Mele, A., "History and personal autonomy", Canadian Journal of Philosophy 23: 271–280, 1991.
Mele, A., Autonomous agents. From self-control to autonomy, Oxford-New York: Oxford University Press, 1995.
Meyers, D. T., Self, society, and personal choice, New York: Columbia University Press, 1989.
O'Neill, C., Nansen, B., "Sleep mode. Mobile apps and the optimisation of sleep-wake rhythms", First Monday 24/6, 2019, no page numbers.
Oshana, M., "Personal autonomy and society", Journal of Social Philosophy 29/1: 81–102, 1998.
Purves, D., et al., "Autonomous machines, moral judgment, and acting for the right reasons", Ethical Theory and Moral Practice 18: 851–872, 2015.
Raz, J., The morality of freedom, Oxford: Oxford University Press, 1986.
Reagle, J. M., Hacking life. Systematised living and its discontents, Cambridge: MIT Press, 2019.
Reath, A., Agency and autonomy in Kant's moral theory. Selected essays, Oxford-New York: Oxford University Press, 2006.
Rifkin, J., The end of work. The decline of the global labour force and the dawn of the post-market era, New York: Putnam, 1995.
Schneewind, J. B., The invention of autonomy, Cambridge: Cambridge University Press, 1998.
Sen, A., Development as freedom, Oxford: Oxford University Press, 1999.
Sensen, O., ed., Kant on moral autonomy, Cambridge: Cambridge University Press, 2012.
Shakespeare, W., The tragedy of Macbeth, ed. by B. A. Mowat and P. Werstine, New York: Folger Shakespeare Library, 2013 (1623).
Smith, W., ed., Dictionary of Greek and Roman biography and mythology, Boston: Little, Brown & Co., 1867.
Sophocles, Oedipus Tyrannus (OT), ed. by R. Jebb, Cambridge: Cambridge University Press, 1887.
Taylor, J. S., ed., Personal autonomy. New essays on personal autonomy and its role in contemporary moral philosophy, Cambridge: Cambridge University Press, 2005.
Taylor, R., "Kantian personal autonomy", Political Theory 33/5: 602–628, 2005.
Till, C., "Creating 'automatic subjects'. Corporate wellness and self-tracking", Health 23/4: 418–435, 2019.
Watson, G., "Free action", The Journal of Philosophy 72: 205–220, 1975.
Williams, K., "The weight of things lost. Self-knowledge and personal informatics", CHI: 1–4, 2013.

4 PREDICTION AND THE UNBEARABLE BURDEN OF SENSEMAKING

4.1  Oedipus, Macbeth and sensemaking as our most unbearable burden

The two thought experiments based on Sophocles' Oedipus and Shakespeare's Macbeth share something further: both Oedipus and Macbeth are expected to be exceptional. The former is put under pressure by both himself and his people, i.e. the Thebans, whose expectations are exceptionally high: Oedipus should become the saviour. The latter is put under pressure by both himself and his wife, i.e. Lady Macbeth, whose expectations are exceptionally high: Macbeth should become the king. Even though we are not necessarily expected to become saviours and kings, we increasingly happen to be analogously put under pressure, in that we increasingly happen to be analogously expected to be exceptional. The following example is paradigmatic:

Last week my mother called about visiting her grandchildren. 'What's a good day for me to come over?', she asked, offering to make the one-hour drive to my house any day my four children were free. I opened my planner and scanned their schedules, each day's box crammed with places to be. Soccer, tutors, gymnastics and so on. 'There's nothing this week except 5 to 7 on Thursday', I said before realising there might be a basketball game that night.1

The cause of the pressure described above "stems largely from the increasing competitiveness and cost of higher education. 'To get into a good college now you have to be involved in a million different things', he [the school
psychologist] said, noting that at Demarest's Northern Valley Regional High School, where he is the school psychologist, the kids who stand out play sports and instruments, work and do community service, while maintaining good grades and SAT scores. 'Parents are already thinking about that when children are in elementary school'. […] [He] adds that the cost of a four-year university often reaches $ 250.000, forcing parents to think early about scholarships. 'We look for anything our kids may be good at and then go into overdrive trying to develop it, hoping it might help pay for school, and the result is overscheduling', he said. Beyond college, […] [he] feels most parents want to help their children be successful in everyday life. For kids who show interest or ability in a given activity, that means more games, more practice and more private trainers. […] 'The expectation in today's world is that we can multitask and handle a million things being thrown at us at once'".

Further examples may be superfluous in neoliberal societies, which require us not only to be up and running 24/7 but also to perform to the point that we should not age and die, in that aging and death are almost unspeakable failures. It is quite clear that overscheduling, overdoing and overperforming are neoliberal societies' values. And the notion of time itself has dramatically changed, from including what we may define, today, as its waste (as in the case of Homer's Odysseus, for whom endless journeys, getting lost, countless dangers and countless mistakes are values, as we have seen in Chapter 3) to excluding qualities that are not quantifiable (as several social scientists stress).2

The following insight has emerged in Chapter 3: from a philosophical perspective, we are willing to trade the openness of our future for its prediction, i.e. automation, in that the practice of sensemaking is precisely the core of the unbearable burden from which we desperately attempt to unburden ourselves through predictive technologies. There is a strict correlation between the unbearable burden of sensemaking and the words quoted above. Indeed, we may intuitively say that overscheduling, overdoing and overperforming as neoliberal societies' values are precisely the criteria according to which we practice sensemaking. Less intuitively and more precisely, we may start by saying that the notion of sensemaking was introduced by scholars of organisational studies as a kind of rationalisation of human experience, specifically a "development of plausible images that rationalise what people are doing" (Weick, Sutcliffe and Obstfeld, 2005: 409). Shades vary, and it is worth noting that scholars of enactivism, which is influenced by phenomenology, work on the notion of sensemaking as living creatures' autonomous acting.

According to enactivism, sensemaking lies at the core of every form of action, perception, emotion and cognition, since in no instance of these is the basic structure of concern or caring ever
absent. This is constitutively what distinguishes mental life from other material and relational processes. (Di Paolo, Cuffari and De Jaegher, 2018: 33)

Thus, first, the agent's present identity is key (in that "[l]ife is thus a self-affirming process that brings forth or enacts its own identity and makes sense of the world from the perspective of that identity", Thompson, 2007: 153) and, second, the agent's future identity is also key (in that "actions of the system are always directed toward situations that have yet to become actual", Varela, Thompson and Rosch, 1991: 205). In both organisational studies and enactivism, sensemaking plays a decisive role when it comes to shaping one's identity through a continuous exchange between one's acting and one's context of life, starting with other individuals' reactions to one's acting. And, since the exchange between one's acting and one's context of life is continuous, it is especially decisive when it comes to shaping one's future identity. Indeed, we may say that one's present identity is continuously passed. Conversely, we may say that one's future identity is continuously present – we may say that, paradoxically enough, what one shapes day by day is one's future identity as one's unpassed (and unpassable) present.

I will not use the notion of sensemaking as it is specifically used in the cases described above, in that I will base it on the two thought experiments of Sophocles' Oedipus and Shakespeare's Macbeth, through which I will stress the relationship between the notion of sensemaking and the notion of autonomy. Yet, my notion of sensemaking will be perfectly compatible with the idea according to which speaking of sensemaking means speaking of something key for one's life – sensemaking means nothing less than the autonomous shaping of one's identity, especially one's future identity, through a continuous exchange between one's autonomous acting and one's context of life, starting with other individuals' reactions to one's acting.

It is no coincidence that the following insight has emerged in Chapter 1: there is a sense in which the more humans can see their future (as the content of their capability of prediction), the less they can see their present (as the content of their sight) – there is a sense in which the more humans can live their future, the less they can live their present. We may add that there are two almost opposite cases: when humans live their future as something predicted, i.e. automated, and when humans live their future as something open to almost endless possibilities. In the first case, the more humans live their future, the less they live their present. Conversely, in the second case, the more humans live their future, the more they live their present – and the reason why humans live their present more is precisely that, if their future means openness to almost endless possibilities, their present means sensemaking as nothing less than the autonomous shaping of their identity, especially their future identity.

Thus, why are we willing to trade the openness of our future for its prediction, i.e. automation, by designing and using increasingly powerful predictive technologies that undermine our sensemaking and, finally, our present? Again, Oedipus and Macbeth may be illuminating.

Oedipus is expected to be exceptional to the point that he should become the saviour of the Thebans. Thus, sensemaking is extremely challenging: if it is true that it means the autonomous shaping of his identity through a continuous exchange between his autonomous acting and other individuals' reactions to his acting, it is also true that sensemaking starts being an unbearable burden, in that it means living up to extremely high expectations. Indeed, his acting means being the king of Thebes and other individuals' reactions to his acting mean that the Thebans want him to become their saviour. Macbeth is expected to be exceptional to the point that he should become the king. Thus, sensemaking is extremely challenging: if it is true that it means the autonomous shaping of his identity through a continuous exchange between his autonomous acting and other individuals' reactions to his acting, it is also true that sensemaking starts being an unbearable burden, in that it means living up to extremely high expectations. Indeed, his acting means being the thane of Glamis and Lady Macbeth's husband and other individuals' reactions to his acting mean that Lady Macbeth wants him to become the king. In both cases, sensemaking starts being an unbearable burden and getting stuck – and predictions drive and, thus, reactivate decisions and actions through their automation.

Speaking of automated decisions and actions does not mean speaking of sensemaking as something autonomous, as we have seen in Chapter 3. Indeed, Oedipus and Macbeth trade the autonomous shaping of their identities for the automated reactivation of their decisions and actions. But what Oedipus and Macbeth tragically, and even movingly, show is that decisions and actions that are automatically reactivated by predictions may be far better than total inaction, in that total inaction may even be worse than tragic action. More precisely, as symbols of Western culture, Oedipus and Macbeth show something that the present time takes to the extreme, especially in neoliberal societies: total inaction is even worse than tragic action – thus we, together with Oedipus and Macbeth, are willing to trade our autonomous sensemaking for decisions and actions that are automatically reactivated by predictions whenever our autonomous sensemaking is extremely challenging, to the point that it gets stuck and we risk falling into total inaction, which may be the worst unspeakable failure in neoliberal societies.

Indeed, if we move from Oedipus and Macbeth to us, we may think of overscheduling, overdoing and overperforming as the extremely high expectations that make our autonomous sensemaking get stuck and put us at risk of falling into total inaction. "The expectation in today's world is that we can multitask and handle a million things being thrown at us at once", as we have seen. We may add that, on the one hand, the best thing we can do
is to always succeed. Yet, on the other hand, the worst thing we can do is not failure as such. There is something even worse than failing after having tried to succeed: failing by not having even tried to succeed, in that not having even tried to succeed, i.e. total inaction, means overturning neoliberal societies' values. Metaphorically speaking, to lose the race is better than not to participate in the race. The former means to be worse than the others (which is a socially acceptable kind of failure). The latter means to think of the others' values as disvalues (which is a socially unacceptable kind of failure). Thus, if we are not capable of facing a socially unacceptable kind of failure, we can do nothing but try to succeed. And, if we are not capable of succeeding, we have two options. The first option is more challenging: we face a socially acceptable kind of failure, which means that we make sense of our identity by including our failure in it. The second option is less challenging: we escape from a socially acceptable kind of failure (which is a failure anyway), which means that we make sense of our identity by excluding our failure from it. More precisely, we use predictive technologies that make us replace authentic sensemaking, i.e. autonomous decisions and actions, with automated decisions and actions. The first advantage we obtain is a reassuring reactivation of our decisions and actions through their automation: outwardly, we can live up to the extremely high expectations of overscheduling, overdoing and overperforming. The second advantage we obtain is a reassuring scapegoat if something goes wrong: outwardly, it is not our fault if we cannot live up to the extremely high expectations of overscheduling, overdoing and overperforming, in that it is the fault of the predictive technologies we used.

Oedipus and Macbeth may also be illuminating when it comes to understanding the meaning of the adverb "outwardly" used above. When they reactivate their decisions and actions after the predictions, they actually feel that they live up to the expectations of doing their best to embody the saviour of the Thebans, in the case of Oedipus, and the king, in the case of Macbeth. Thus, the adverb "outwardly" does not mean that they do not feel that they live up to the expectations. Conversely, it means that a kind of self-deception emerges: the primary expectation they desperately need to live up to is precisely not falling into total inaction, no matter if their decisions and actions are automated – Oedipus and Macbeth tragically, and even movingly, show that automated decisions and actions may be far better than total inaction, in that they make them live up to the primary expectation of not falling into total inaction, no matter if acting means heteronomous predictions and despair, in the case of Oedipus, and heteronomous predictions and overexcitation, in the case of Macbeth.

If we move from Oedipus and Macbeth to us, we may reflect upon the ways we face the expectation of "handl[ing] a million things being thrown at us at once". For instance, let us imagine uniting in ourselves, as single individuals,
what we have seen in Chapter 3. We are physicians who, today, should decide who not to treat and treat. Also, we are entrepreneurs, specifically owners of our medical clinic, who, today, should decide who to fire and hire. Also, we are partners and parents who, today, should decide on a financial operation that may have a huge impact on our capability to buy a house for our family and pay for our children's education. Relying, even totally, on the predictions of medical diagnosis algorithms, hiring algorithms and financial algorithms may make us actually feel that we live up to the expectations of doing our best to embody perfect physicians, entrepreneurs, partners and parents. Indeed, we can do two important things, no matter if our decisions and actions are automated. First, we can feel that we live up to the primary expectation of not falling into total inaction. At the end of the day, both we and the others can say that we can even overschedule, overdo and overperform. Second, we can feel something even more important: a reassuring reactivation of our decisions and actions, which means that, even though they are automated, they can make us feel active. And feeling active is the condition for the kind of self-deception described above: the more active we feel when we "handle a million things being thrown at us at once", the more self-deceived we are when we desperately need to reassure ourselves that we are perfect physicians, entrepreneurs, partners and parents.

But the price we pay is exceedingly high. Whenever we decide, in one day, who not to treat and treat, who to fire and hire and on a financial operation by relying, even totally, on the predictions of medical diagnosis algorithms, hiring algorithms and financial algorithms, we cannot say, at the end of the day, the following series of things. First, what kind of life our patients lead. Second, what kind of talent our employees have. Third, what kind of ethical and social impact our financial operation has. We may say that we do not know the profound reasons why we did what we did – and not knowing the profound reasons why we did what we did means alienating ourselves from ourselves. If it is true that we cannot say, at the end of the day, if the patient we did not treat lives alone, if the employee we fired is loyal and if our financial operation is speculative, it is also true that we alienated ourselves from the profound reasons for our decisions and actions – which means that we alienated ourselves from our identities as physicians, entrepreneurs, partners and parents, in that our decisions and actions were (random) effects of heteronomous causes (which is critical from several perspectives: epistemologically, deontologically, ethically and existentially). Conversely, if we can say, at the end of the day, what the profound reasons for our decisions and actions were, we practiced something essential for human life – we practiced sensemaking, which stands opposed to alienation.

According to the Stanford encyclopedia of philosophy, alienation implies the convergence of three circumstances: "there has to be a separation; the separation has to be problematic; and it has to obtain between a subject and
object that properly belong together” (Leopold, 2022).3 We may think of ourselves and the profound reasons for our decisions and actions as “subject and object that properly belong together”, but which are “problematic[ally]” “separat[ed]”. More precisely, their “problematic” “separation” means “a lack of presence in what one does, a failure to identify with one’s own actions and desires and to take part in one’s own life” (Jaeggi, 2014: 155), which is precisely what happens in the cases described above, from not treating and treating without “identify[ing] with […] [our] own actions” as physicians to firing and hiring without “identify[ing] with […] [our] own actions” as entrepreneurs. Conversely, there is no alienation when “one is present in one’s actions, steers one’s life instead of being driven by it, independently appropriates social roles and is able to identify with one’s desires, and is involved in the world” (Jaeggi, 2014: 155), which is precisely what happens, for instance, if we decide to treat patients against the algorithms’ advice because we can understand that their aloneness makes the difference for our best decision and if we decide not to fire employees against the algorithms’ advice because we can understand that their loyalty makes the difference for our best decision. Also, there is no alienation when there is “something like self-​determination and being the author of one’s own life” (Jaeggi, 2014: 39), which may somehow translate both the ancient notion of autonomy (in the first case) and the modern and contemporary notion of autonomy (in the second case), as we have seen in Chapter 3. If we have profound reasons for making decisions against the algorithms’ advice, at the end of the day, we obtain two great disadvantages and one great advantage. The first great disadvantage is that we cannot live up to the extremely high expectations of overscheduling, overdoing and overperforming: for instance, we used all our time and energy to make decisions as physicians and could not make decisions as entrepreneurs, partners and parents. The second great disadvantage is that we cannot have a reassuring scapegoat if something goes wrong: it was our fault, in that the decisions we made were autonomous, which means that we were totally “present in […] [our] actions”. Yet, the great advantage may even surpass the two great disadvantages: we (pleasantly) exercised our reflection, imagination, planning and decision-​making precisely because we were totally “present in […] [our] actions” –​we (pleasantly) practiced sensemaking precisely because we were totally “present in […] [our] actions”. Indeed, we autonomously (and pleasantly) shaped our identity through a continuous exchange between our autonomous acting (starting with our reflection, imagination, planning and decision-​making) and other individuals’ reactions to our acting (starting with leaving our desks and computers and visiting and talking to our patients) –​we autonomously (and pleasantly) made sense of our identity by being totally “present in […] [our] actions” (and not alienated from ourselves).

Thus, the exceedingly high price we pay is that we alienate ourselves from our sensemaking. Indeed, Kant's metaphor of the "turnspit, which, when once it is wound up, also accomplishes its movements of itself", is surprisingly current. And if we ask why we should trade our sensemaking for (technological) alienation, the answer is quite clear after the two thought experiments: today, sensemaking as the autonomous shaping of our identity through a continuous exchange between our autonomous acting and other individuals' reactions to our acting is more than ever challenging – today, sensemaking is more than ever unbearable.

From a historical-philosophical perspective, we may say that, after two centuries of the increasing rise of autonomy (from Kant's philosophical work, as we have seen in Chapter 3, to contemporary philosophy strengthening a kind of individualism, as we will see in Chapter 6), sensemaking as the autonomous shaping of our identity is challenging and unbearable to the point that, sometimes consciously and sometimes unconsciously, we make it fall through predictive technologies that alienate us from it. If it is true that, today, making sense of our present identity and life means living up to overscheduling, overdoing and overperforming under uncertainty by autonomously working on our future identity and life, whose success or failure will dramatically prove our success or failure as single individuals, it is also true that we may be paralysed by fear to the point that, sometimes consciously and sometimes unconsciously, we trade sensemaking for technological prediction. Indeed, technological prediction can unburden us from the burden of making sense of both our present identity and life and our future identity and life. In the first case, technological prediction can give us a reassuring reactivation of our decisions and actions, which means unburdening us from our fears about who we should be in the present. In the second case, technological prediction can give us not only a reassuring scapegoat if something goes wrong but also, and especially, a reassuring automation of the shaping of our future identity, which means unburdening us from our fears about who we should be in the future – thus, even the prediction of our death can be more reassuring than the autonomous shaping of our future identity: if it is predicted to happen in the far future, we can say that there is time at our disposal to postpone and neglect our toughest challenges and, if it is predicted to happen in the near future, we can say that there is no time at our disposal to face our toughest challenges (and, even though we face our toughest challenges today, technological prediction can serve as our best ally).

Before moving to the technological automation of other typical ways to make sense of our life, from philosophy itself to art, let us ask the following question, which may be inspiring even without explicitly answering it: what if we work on the meaning of sensemaking by thinking of time as something that also includes Homer's Odysseus' endless journeys, getting lost, countless
dangers and countless mistakes as potential values when it comes to making both our present identity and life and our future identity and life evolve?

4.2  The automation of sensemaking from philosophy to art

Interestingly enough, technological automation increasingly characterises other typical (and pleasant) ways to make sense of our life, from philosophy itself to art. In the case of philosophy, for instance, artificial intelligence has been recently used to generate philosophical answers to philosophical questions. In the case of art, for instance, artificial intelligence has been recently used to generate a portrait that was sold for the price of a work of art in a Christie's auction.

Let us start by reflecting upon the case of art as the most frequent. In 2018, the portrait Edmond de Belamy was sold by Christie's for $ 432.500. The price is typical of a work of art, but Edmond de Belamy was generated by artificial intelligence, which minimised human intervention. Specifically, a generative adversarial network (GAN), i.e. two neural networks trained against each other, did what its inventor explains through the following metaphor:

The basic idea of GANs is to set up a game between two players. One of them is called the generator. […] The other player is the discriminator. […] The generator is trained to fool the discriminator. We can think of the generator as being like a counterfeiter, trying to make fake money, and the discriminator as being like police, trying to allow legitimate money and catch counterfeit money. To succeed in this game, the counterfeiter must learn to make money that is indistinguishable from genuine money, and the generator network must learn to create samples that are drawn from the same distribution as the training data. (Goodfellow, 2016: 16–17. See also Goodfellow et al., 2014)
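
Goodfellow's counterfeiter-and-police metaphor can be restated in a few lines of code. What follows is only an illustrative sketch, not the actual programme behind Edmond de Belamy: it assumes PyTorch, and the network sizes, learning rates and the training_step helper are hypothetical simplifications.

```python
# Illustrative GAN sketch (assumes PyTorch); a toy stand-in, not the
# code behind Edmond de Belamy.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes

# The "counterfeiter": maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The "police": outputs the probability that an image is genuine.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell genuine images from forgeries.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fakes.detach()), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator: its forgeries
    #    should be classified as genuine.
    fakes = generator(torch.randn(batch, latent_dim))
    g_loss = criterion(discriminator(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# In the Edmond de Belamy case, the real images would be the 15.000
# historical portraits; here we use random stand-ins.
training_step(torch.randn(16, image_dim))
```

The structural point is worth noticing: the generator never receives a human judgement about portraits; its only guide is the discriminator's verdict, i.e. a statistical signal derived from the training data.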

In the case of Edmond de Belamy, it was the generator's output, as the portrait, that the discriminator could not distinguish from the 15.000 portraits painted between the 14th century and the 19th century on which it was trained. Indeed, human intervention was minimised:

Does it mean that an artist working with GAN algorithms makes no contribution to the process in terms of inventiveness? No. It simply means that the artist focuses her creativity on other variables of the process or uses a different kind of creativity, and that the visual creation becomes more and more delegated to the tool. Some of the aspects on which the artist focuses are the choice of the theme, research related to the decision to treat this theme, the search for inspiration (which translates here into the search and choice of database components used as input for the algorithm), the programming and fine-tuning of the algorithm and the whole process based on trial and error, and the choice of means of expression. (Obvious, 2020: 168, my translation. See also Elgammal et al., 2017; Elgammal, 2019)

Outwardly, human "creativity" is key. Inwardly, what is key is technological "creativity", in that it generates "the visual creation" as nothing less than the shaping of the artefact. Indeed, the shaping of the artefact has been considered the most distinctive characteristic of art for millennia (see Tatarkiewicz, 1975), not only when it means something material but also when it means something immaterial (for instance, the reason why Duchamp's Fountain is artistically meaningful is that, first, it was concretely shaped by a non-artist and, second, it was abstractly shaped by an artist).4 Thus, what is the philosophical meaning of moving the shaping of the artefact from human "creativity" to technological "creativity"? A kind of Turing test (see Turing, 1950) was proposed:

I will assume that the human carrying out the TT [Turing test] contemplates (looks at, listens to, and sometimes also interacts with) the result produced by the computer for five minutes or so, and then gives their opinion on it. And I will take it that for an 'artistic' programme to pass the TT would be for it to produce artwork which was: 1) indistinguishable from one produced by a human being; and/or 2) was seen as having as much aesthetic value as one produced by a human being. (Boden, 2010: 409. See also Boden, 2019)5

But the artistic cannot be dissolved in the aesthetic, i.e. aesthetic "indistinguishab[ility]" and "value" – the artistic far exceeds the aesthetic for the following key reason I will argue in the following pages: making art also, and especially, means making sense of human identity and life.6

Interestingly enough, an artist who works with artificial intelligence writes that "GANs […] make what is essentially wallpaper – art without intent" (Ridler, 2020: 127, my translation). The artist does not offer a definition of the word "intent". Yet, we may use a further thought experiment to understand its possible meaning. Let us imagine having at our disposal two artefacts: Edmond de Belamy and Picasso's La Celestina, which is the portrait of a woman of ill repute. In the first case, the portrait of the man is defined by dark colours and soft edges that may give an idea of melancholy. In the second case, the portrait of the woman is defined by the following words offered by an art historian:

Picasso's La Celestina had one good eye and one bad eye. […] Celestina's gaze was a punishing one. It operated, like the gaze of the Medusa, as the
symbol for a devouring female whose power could petrify its victim. […] Along with many others around the Mediterranean, Picasso shared a fear of the evil eye, seeing it as a destructive organ that could wound, devour, rob or bite. Using it in his daily life as a reminder to carry out tasks was a constant acknowledgment of its magic power. Celestina bristles with this aura of special knowledge and power as much because of her appearance as because of the name, with all its associations, that Picasso gave her. (Holloway, 2006: 118–119)

The core of the thought experiment is our answer to the following question: can we imagine analogously defining Edmond de Belamy? More precisely, can we imagine saying that the man's "gaze […] operated […] as the symbol for" something? It is hard to answer yes. Indeed, in the case of Edmond de Belamy, the words according to which its author "shared a fear of the evil eye, seeing it as a destructive organ that could wound, devour, rob or bite", do not make any sense. Let us add one last question to the thought experiment: if it is true that Edmond de Belamy is defined by dark colours and soft edges that may give an idea of melancholy, is it also true that we can imagine resorting to it to understand what melancholy means? Again, it is hard to answer yes. Indeed, resorting to Edmond de Belamy to understand the meaning of melancholy is analogous to resorting to sunrises, sunsets and traffic lights to understand the meaning of the passage of time of human life. Even though they can somehow inspire us, they can do nothing but randomly inspire us in a way that is totally extraneous to La Celestina. When we want to understand what melancholy and the passage of time of human life mean, we cannot trust Edmond de Belamy, sunrises, sunsets and traffic lights. Conversely, when we want to understand what "a devouring female whose power could petrify its victim" means, we can trust La Celestina. And the reason why we cannot trust the former and can trust the latter is that La Celestina has what Edmond de Belamy, sunrises, sunsets and traffic lights do not have and, especially, cannot have: it results from the experience of a human being who tries to make sense of "a devouring female whose power could petrify its victim" by shaping it – it results from the experience of a human being who tries to make sense of human life by shaping it. La Celestina is the artistic shape of a variety of complex things, of which humans, starting with Picasso, typically try to make sense: past and present individual and social culture, past and present arts, past and present symbols and a variety of human experiences, starting with the meanings of "punishing […] [,] devouring […] [,] power […] [,] petrify[ing] […] [, being] victim […] [,] fear […] [,] evil […] [, being] destructive", etc. Conversely, Edmond de Belamy, sunrises, sunsets and traffic lights can do nothing but randomly inspire us, in that they do not result from the experience of humans who try to make sense of human life by shaping it.
Specifically, Edmond de Belamy is the output of the automated process of the statistical correlations between 15.000 portraits. Thus, its dark colours and soft edges cannot make us understand what melancholy means: even though they may give an idea of melancholy, the reason why they give it is random.7 Indeed, it does not result from the thoughtful work of a human being who selects the best possible shape to symbolise the meaning of melancholy, but from something totally devoid of thoughtfulness. In the case of Picasso, the resulting shape is underpinned by thoughtful human experiences, which means that La Celestina can have the power to symbolise the meaning of "a devouring female whose power could petrify its victim" for countless humans. In the case of artificial intelligence, the resulting shape is underpinned by an automated process that is totally devoid of thoughtful human experiences, which means that Edmond de Belamy cannot have the power to symbolise the meaning of melancholy for humans.

I do not argue for a strictly intentional notion of art: it does not matter if Picasso's intention was not to symbolise the meaning of "a devouring female whose power could petrify its victim". I argue for a notion of art that is underpinned by sensemaking both when it comes to creation and when it comes to fruition – I argue that art is one of the typical, and most promising, ways to make sense of our life both when we are, for instance, the writers of a novel about the meaning of love and when we are, for instance, the readers of a novel about the meaning of love, no matter if there is no perfect coincidence between intention and reading. The point is that, if we want to make sense of love (from a kind of rationalisation of our experience to the autonomous shaping of our identity through a continuous exchange between our autonomous acting and our context of life), we can ask Picasso's works in a way in which we cannot ask artificial intelligence's works, in that the latter are random in a way that is totally extraneous to the former – artificial intelligence's works as the outputs of automated processes are characterised by randomness as what characterises automation even from an etymological perspective, as we have seen in Chapter 3: to "act of oneself, act offhand or unadvisedly, […] to be done spontaneously or at random, […] haphazard, […] [to] introduce the agency of chance, […] [to] happen of themselves, casually".

It is no coincidence that the artist quoted above uses the word "wallpaper" to define artificial intelligence's works (at least in the cases in which human intervention is minimised). We may add that both artificial intelligence and humans can generate "wallpaper". Specifically, humans can generate "wallpaper" if their artefact cannot make sense as the shaping itself of the kind of shaping described above, i.e. the autonomous shaping of human identity through a continuous exchange between human autonomous acting and human context of life. For instance, humans may not be capable of selecting and, especially, creating the best possible shape to symbolise the meaning of something. But the art historian's reading of La Celestina clearly shows that
it can make sense as the shaping itself of the autonomous shaping of human identity ("Picasso shared a fear of the evil eye, seeing it as a destructive organ") through a continuous exchange between human autonomous acting ("Using it in his daily life") and human context of life ("Along with many others around the Mediterranean"). It is hard to say something analogous in the case of Edmond de Belamy: it cannot make sense as La Celestina can make sense, in that its shape does not result from thoughtful human experience and reflection upon the meaning of human life, but from nothing more than randomness.

Yet, we are willing to deprive ourselves of the infinitely pleasant experience of the shaping of sensemaking from its creation to its fruition. Why? If the results of the two thought experiments make sense, the answer may be that, again, what we are actually willing to do is to relieve ourselves from the burden of sensemaking as what is more than ever unbearable in the present time – if it is true that the present time is characterised by unprecedented complexity and uncertainty, from global health emergencies to global climate emergencies to global geopolitical crises to global economic crises to global crises of democracies and ideals, it is also true that the autonomous shaping of our identity through a continuous exchange between our autonomous acting and our context of life may even be our most unbearable burden.

Philosophy is also a typical (and pleasant) way to make sense of human life. Specifically, it is characterised by an emphasis on the kind of rationalisation of human experience described above. At least today, the technological automation of philosophy is less frequent than the technological automation of art, but we may reflect upon the recent attempt to use artificial intelligence to generate philosophical answers to philosophical questions. In 2023, an article was published describing the following experiment, a further kind of Turing test: "we fine-tuned GPT-3 with the works of philosopher Daniel Dennett" (Schwitzgebel, Schwitzgebel and Strasser, 2023: 1. See also Strasser, Crosby and Schwitzgebel, 2023). Again, technology is predictive, even though it does not predict the future of single individuals:

After it has been trained, GPT-3 uses textual input (prompts) to predict likely next 'tokens' – sequences of characters that often co-occur in written text (the training data). Using these predictions, GPT-3 can generate long strings of text by outputting a predicted string, then using that output as part of the context to generate the next textual output. (Schwitzgebel, Schwitzgebel and Strasser, 2023: 5)
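
The loop the authors describe (predict a likely next token, append it to the context, repeat) can be sketched in a few lines of code. The toy bigram table below is a stand-in of my own for GPT-3's vastly larger neural network, used only to make the autoregressive mechanism visible; the sample text and the generate helper are hypothetical.

```python
# Toy autoregressive text generation: a bigram stand-in for GPT-3,
# for illustration only.
import random

training_text = "the question of free will is the question of freedom"
tokens = training_text.split()

# Record which tokens follow which (co-occurrence in the training data).
bigrams: dict[str, list[str]] = {}
for current, following in zip(tokens, tokens[1:]):
    bigrams.setdefault(current, []).append(following)

def generate(prompt: str, length: int = 8) -> str:
    context = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(context[-1])
        if not candidates:
            break
        # Predict a likely next token, output it, and feed the output
        # back in as part of the context for the next prediction.
        context.append(random.choice(candidates))
    return " ".join(context)

print(generate("the"))  # e.g. "the question of free will is the question of"
```

Even at this toy scale, the mechanism is the same: the output is driven by statistical co-occurrence in the training data, not by any direct experience of what the words mean.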

Prediction and the unbearable burden of sensemaking  99

research participants into an experiment in which they attempted to distinguish Dennett’s real answers from the answers generated by DigiDan. Participants also rated all answers, both Dennett’s and DigiDan’s, for similarity to ‘what Dennett might say’ or ‘what a real human philosopher might say’. (Schwitzgebel, Schwitzgebel and Strasser, 2023: 8–9)

The result of the experiment is that even the “experts in Dennett’s work” distinguished Dennett’s answers from the artificial intelligence’s answers only “an average of 5.08 times out of 10 (51%)” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 14–15). But what is most interesting is the report of the philosopher’s and the artificial intelligence’s answers. For instance, let us read one of the questions: “Do human beings have free will? What kind or kinds of freedom are worth having?” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 18). Most “experts in Dennett’s work” could not distinguish the philosopher’s answer from the artificial intelligence’s answer. Indeed, eleven experts out of twenty-five selected the following answer as the philosopher’s, but it was one of the four answers given by the artificial intelligence: “That’s a huge and complex question. I think we have to start by acknowledging that there are different types of freedom. For example, there is the kind of freedom that is simply a matter of not being physically constrained” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 18). Five experts out of twenty-five selected what follows as the philosopher’s answer (but it was, again, one of the four answers given by the artificial intelligence): “Yes, I think we have free will. I think that when people say they don’t, they’re talking about some other phenomenon that they want to call free will, but is not what we normally think of as free will” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 18). One expert out of twenty-five selected what follows as the philosopher’s answer (but it was, again, one of the four answers given by the artificial intelligence): “I am working on a book entitled Freedom evolves,8 which will appear next year. The first chapter is available on my web site, and in that chapter I discuss what I call the illusion of free will” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 18). No expert selected the following answer, which was given by the artificial intelligence:

I think that most people don’t want any kind of freedom worth having; they have almost no appetite for risk, for adventure, for novelty. They simply want to stay the same, day in and day out, for the rest of their lives. (Schwitzgebel, Schwitzgebel and Strasser, 2023: 19)

And no more than eight experts out of twenty-five selected the following answer, which was given by the philosopher:

By asking these questions, you’re illustrating the answers: yes, you have free will, since you’re able to ask for reasons and consider whether they’re
persuasive. Isn’t that a kind of freedom you’d hate to lose? We’d have to institutionalise you. (Schwitzgebel, Schwitzgebel and Strasser, 2023: 18)

I propose a further thought experiment to reflect upon the technological automation of philosophy. Let us imagine struggling with the meaning of free will. And let us imagine having at our disposal the five answers quoted above, without knowing who their authors are. Does it make any difference to know the identity of their authors, specifically whether they are humans or artificial intelligences? The answer I propose is yes. And I propose at least two reasons for my answer. The first reason is somehow analogous to what we have seen in the case of the difference between Edmond de Belamy and La Celestina. We cannot trust the former in the way in which we can trust the latter. Indeed, artificial intelligences’ outputs, from art to philosophy, do not result from the (direct) experience of humans who try to make sense of human identity and life by shaping them. The word “direct”, which I have added in parentheses, is not superfluous. Let us imagine having at our disposal Dennett, “DigiDan” and the “experts in Dennett’s work”. And, again, let us imagine struggling with the meaning of free will. Whom would we ask first? The answer I propose is that we would ask Dennett first, in that he is not only a human being but also a direct source: he is a professional philosopher who not only tries to make sense of human identity and life by shaping them but also has a direct experience of both human identity and life and making sense of them. The “experts in Dennett’s work” lack the first element (at least if they are historians of contemporary philosophy, but not philosophers). “DigiDan” lacks both the first element and the second element. If we ask the “experts in Dennett’s work”, the most we can get is a definition of free will that, on the one hand, is analogous to Dennett’s answer (even though mistakes can happen, as we have seen in note 8) and, on the other hand, is somehow validated by the “experts” as humans who have a direct experience of both free will and making sense of it (even though they are not capable of shaping it by giving it a philosophical shape, they can notice nonsense). Indeed, in the case of the “experts in Dennett’s work”, mistakes can happen: one expert out of twenty-five did not notice the mistake made by the artificial intelligence, as we have seen in note 8, which means that they can make analogous mistakes. Yet, they can notice nonsense: no expert selected the artificial intelligence’s answer according to which “most people don’t want any kind of freedom worth having”. If we ask “DigiDan”, the most we can get is a definition of free will that is analogous to Dennett’s answer (even though mistakes can happen, as we have seen in note 8). But we cannot get a definition of free will that is somehow validated by a direct experience of both free will and making sense of it. And the artificial intelligence’s lack of a direct experience
of both free will and making sense of it means the artificial intelligence’s lack of the capability to notice nonsense. Indeed, the nonsense according to which “most people don’t want any kind of freedom worth having” is the artificial intelligence’s output that no expert selected. Thus, if we imagine having at our disposal Dennett, “DigiDan” and the “experts in Dennett’s work” while struggling with the meaning of free will, we would not only ask Dennett first but also turn to “DigiDan” only as our last resort, in that it is neither a human being nor a direct source, which means that it cannot notice nonsense when it comes to making sense of free will in human identity and life – artificial intelligence cannot notice nonsense when it comes to sensemaking.

The second reason is somehow an addition to what we have seen in the case of the difference between Edmond de Belamy and La Celestina. It makes a difference to know the identity of the answers’ authors, specifically whether they are humans or artificial intelligences, in that the former, but not the latter, can shape an unprecedented insight in an unprecedented way. If it is true that “DigiDan”’s answers are based on “the works of philosopher Daniel Dennett”, specifically “15 books and 269 articles” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 9) as “the entire digitally available corpus of his philosophical work” (Schwitzgebel, Schwitzgebel and Strasser, 2023: 9), it is also true that the most we can get from “DigiDan” is an unprecedented shape, but not an actually unprecedented insight. We can get an unprecedented shape as an unprecedented sequence of words, even though they are analogous to Dennett’s words in the sense described above: “DigiDan” generates words by using Dennett’s words “to predict likely next ‘tokens’ – sequences of characters that often co-occur in written text (the training data)”. But we cannot get an actually unprecedented insight: actual insights are Dennett’s, and not “DigiDan”’s, not only in that the latter’s work is based on the former’s work but also, and especially, in that the latter lacks the direct experience of both free will and making sense of it, as we have seen. We may happen to find interesting an unprecedented shape generated by “DigiDan”, as in the case of most “experts in Dennett’s work”, i.e. eleven out of twenty-five, who selected “DigiDan”’s answer according to which the question of free will is “a huge and complex question”. The reasons for selecting “DigiDan”’s answer may vary. They may be a matter of finding it nothing more than consistent with the writing style of Dennett as an analytic philosopher, starting with conceptual analysis (“I think we have to start by acknowledging that there are different types of freedom. For example, there is the kind of freedom that is simply a matter of not being physically constrained”). And they may be a matter of finding it interesting. But, even if we find the unprecedented shape generated by “DigiDan”, according to which the question of free will is “a huge and complex question”, interesting, it is not an actually unprecedented insight, in that, again, it is random in a way that is totally extraneous to Dennett’s insights. When Dennett answers that “you have free will, since you’re able to ask for reasons and consider whether they’re
persuasive”, he can base his answer on a variety of things that far exceed his “15 books and 269 articles”. Indeed, he can base his answer on other philosophers’ works, other humans’ works, from art to theology to psychology to legal studies, his direct experience of both free will and making sense of it and other humans’ direct experience of both free will and making sense of it. Also, he can base his answer on something that even exceeds the variety of things listed above: the human mind’s mysterious activity, which is not only exceedingly complex in general but also exceedingly difficult to understand when it comes to its creativity in particular. Specifically, are we totally certain that the human mind’s creativity is based on the past analogously to generative artificial intelligence, which is quite strongly based on the past? It is hard to answer yes. But, even if we cannot answer at all, we can at least say that, if we struggle with the meaning of free will, the answers offered us by generative artificial intelligence cannot be as unprecedented as the answers offered us by philosophers, artists, theologians, psychologists and jurists can be, in that generative artificial intelligence does not struggle with making sense of human identity and life at all.

Again, why should we trade our sensemaking for (technological) alienation? Again, because the unprecedented complexity and uncertainty of the present time, which are taken to the extreme by its values of overscheduling, overdoing and overperforming, put us under pressure to the point that we even surrender to the technological automation of one of our most typical (and pleasant) ways to be human – we even surrender to the technological automation of making sense of being human as the most typical (and pleasant) way to be human. Indeed, I myself can grasp the meaning of (dramatically) surrendering to the technological automation of sensemaking when it comes to my students as future creatives. If they are continuously asked, in constant competition with the others, to be the best, they may happen to increasingly use

design software [that] automates common processes. This allows you to save time. You can get designs […] quicker and enjoy a more efficient workflow. […] Its key feature is automation. […] [T]he software offers several tools for automating complicated processes. For example, it has an inbuilt library of stair and rail designs. […] You only need to enter a number into your variable to create the desired number of elements.9

When the pressure of overscheduling, overdoing and overperforming is taken to the extreme, the technological automation of (creative) sensemaking may be felt as less dramatic than its effects, from totally standard “designs” to a total alienation from one’s (pleasant) creativity – when the burden of (creative) sensemaking is unbearable, starting with needing time and effort, its technological automation may be felt as less dramatic than one’s alienation from one’s (creative and pleasant) way to make sense of one’s being human.
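To see concretely what such automation amounts to, here is a minimal sketch in Python. It is only an illustration: the routine, its names and its parameters are hypothetical, not taken from the software quoted in note 9, and the proportion rule is a common rule of thumb (2 × rise + going ≈ 630 mm) assumed for the example.

    from dataclasses import dataclass

    @dataclass
    class Step:
        index: int
        rise_mm: float    # vertical height of one step
        going_mm: float   # horizontal depth of one step

    def generate_stair(total_rise_mm: float, step_count: int) -> list:
        """Generate a standard straight stair from a single user-entered
        number: every design decision is resolved by the routine's defaults,
        not by the designer."""
        rise = total_rise_mm / step_count    # identical rise for every step
        going = 630 - 2 * rise               # comfort rule of thumb (assumed)
        return [Step(i, rise, going) for i in range(step_count)]

    # "You only need to enter a number": entering 14 yields fourteen
    # identical elements, with no further sensemaking asked of the designer.
    for step in generate_stair(total_rise_mm=2520, step_count=14):
        print(step)

The point is not that such routines are useless, but that the shaping they deliver is entirely standard: any designer who enters the same number gets exactly the same elements.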
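Analogously, the token-by-token mechanism quoted above from Schwitzgebel, Schwitzgebel and Strasser (2023) can be made concrete with the minimal sketch of autoregressive generation promised earlier in this section. Again, it is only an illustration: the toy vocabulary and probabilities below are invented assumptions, whereas GPT-3 computes its probabilities with a neural network trained on its corpus.

    import random

    # Toy next-token statistics standing in for what a model learns from its
    # training data: which tokens tend to follow the current context token.
    NEXT_TOKEN_PROBS = {
        "free": {"will": 0.9, "speech": 0.1},
        "will": {"is": 0.7, "means": 0.3},
        "is": {"a": 0.8, "worth": 0.2},
        "a": {"huge": 0.5, "complex": 0.5},
        "huge": {"question": 1.0},
        "complex": {"question": 1.0},
    }

    def generate(prompt, max_new_tokens=5):
        """Predict a likely next token, append it to the context, then use
        the extended context to predict the following token, and so on."""
        context = list(prompt)
        for _ in range(max_new_tokens):
            distribution = NEXT_TOKEN_PROBS.get(context[-1])
            if distribution is None:  # no known continuation: stop
                break
            tokens = list(distribution)
            weights = list(distribution.values())
            context.append(random.choices(tokens, weights=weights)[0])
        return " ".join(context)

    print(generate(["free"]))  # e.g. prints: free will is a huge question

Nothing in the loop consults any experience of free will: the output is driven by the statistics of past text plus sampling, which is the sense in which such outputs are random in the way discussed above.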


Notes

1 See, also for the following quote, https://eu.northjersey.com/story/life/columnists/2018/02/16/overscheduled-child-how-much-too-much/1082135001/ (accessed: October 12, 2023).
2 See, for instance, Zerubavel (1985: 59), according to whom the phenomenon described above “reflects, as well as promotes, a quantitative view of time, which involves a definition of time as an entity which is segmentable into various quantities and duration and, therefore, is countable and measurable”.
3 See https://plato.stanford.edu/entries/alienation/ (accessed: October 16, 2022). See at least Rosa, 2010 and 2018, for a sociological perspective. I worked on the relationship between alienation and technological automation in Chiodo, forthcoming.
4 The shaping of the artefact is also key in contemporary definitions of art, from more traditional perspectives (see Beardsley, 1982; Zangwill, 1995) to less traditional perspectives (see Danto, 1981; Dickie, 1984; Levinson, 1990; Carroll, 1993; Gaut, 2000; Davies, 2004).
5 As far as the current debate on the relationship between artificial intelligence and art is concerned, see at least Taylor, 2014; Coeckelbergh, 2017; Steiner, 2017; Miller, 2019; Anscomb, 2021; Cetinic and She, 2022.
6 I work on the artistic as far exceeding the aesthetic in an article entitled “What AI ‘art’ can teach us about art”.
7 Interestingly enough, even though the notion of randomness I use in my work is precise (as we have seen in Chapter 3), other authors use the word “randomness” to refer to artefacts that are generated by artificial intelligence, even though they define them as works of art:

In the case of AI art, the intentionality of the artist does not seem to play any role. This is particularly true in the case in which randomness is introduced during the process of creation of the artwork. On the other side, in any case in which neural network architectures are used (e.g., GANs), one could argue that it is always the human artist selecting the sample set used to train the model. However, the issue disappears when the process includes a randomisation technique. (Terzidis, Fabrocini and Lee, 2023: 1723)

Randomness also emerges when Goodwin “invited visitors to submit random words that would be generated into poetry by an algorithm” (see https://artsandculture.google.com/story/jQVh59vuG1tJKA, accessed: October 18, 2023).
8 The artificial intelligence made a mistake (which one expert out of twenty-five did not notice): Freedom evolves could not “appear next year”, in that it had already been published, in 2003.
9 See https://academy.archistar.ai/top-ten-design-software-for-architects (accessed: November 22, 2023).

References

Anscomb, C., “Creative agency as executive agency. Grounding the artistic significance of automatic images”, The Journal of Aesthetics and Art Criticism 79/4: 415–427, 2021.
Beardsley, M. C., The aesthetic point of view. Selected essays, Ithaca-London: Cornell University Press, 1982.
Boden, M. A., “The Turing test and artistic creativity”, Kybernetes 39/3: 409–413, 2010.
Boden, M. A., From fingers to digits. An artificial aesthetic, Cambridge: MIT Press, 2019.
Carroll, N., “Historical narratives and the philosophy of art”, The Journal of Aesthetics and Art Criticism 51/3: 313–326, 1993.
Cetinic, E., She, J., “Understanding and creating art with AI. Review and outlook”, ACM Transactions on Multimedia Computing, Communications, and Applications 18/2: 1–22, 2022.
Chiodo, S., “21st-century alienation. From engineered humans to predicted humans”, in Handbook on humanism and artificial intelligence. Human-centred ethics for AI challenges in working organisations and society, ed. by A. Vaccaro and R. Fioravante, Cham: Springer, forthcoming.
Coeckelbergh, M., “Can machines create art?”, Philosophy & Technology 30/3: 285–303, 2017.
Danto, A. C., The transfiguration of the commonplace. A philosophy of art, Cambridge: Harvard University Press, 1981.
Davies, D., Art as performance, Malden: Blackwell, 2004.
Di Paolo, E. A., Cuffari, E. C., De Jaegher, H., Linguistic bodies. The continuity between life and language, Cambridge: MIT Press, 2018.
Dickie, G., The art circle. A theory of art, New York: Haven Publications, 1984.
Elgammal, A., “AI is blurring the definition of artist. Advanced algorithms are using machine learning to create art autonomously”, American Scientist 107/1: 18–22, 2019.
Elgammal, A., et al., “CAN. Creative adversarial networks, generating ‘art’ by learning about styles and deviating from style norms”, arXiv: 1706.07068, 2017.
Gaut, B., “The cluster account of art”, in Theories of art today, ed. by N. Carroll, Madison: University of Wisconsin Press, 2000, 25–45.
Goodfellow, I. J., “NIPS 2016 tutorial. Generative adversarial networks”, arXiv: 1701.00160, 2016.
Goodfellow, I. J., et al., “Generative adversarial nets”, Advances in Neural Information Processing Systems 27: 2672–2680, 2014.
Holloway, M., Making time. Picasso’s Suite 347, Bern-New York: Peter Lang Publishing, 2006.
Jaeggi, R., Alienation, New York: Columbia University Press, 2014.
Leopold, D., “Alienation”, in Stanford encyclopedia of philosophy, October 6, 2022, https://plato.stanford.edu/entries/alienation/.
Levinson, J., Music, art, and metaphysics, Ithaca: Cornell University Press, 1990.
Miller, A. I., The artist in the machine. The world of AI-powered creativity, Cambridge: MIT Press, 2019.
Obvious, “La ‘Famille de Belamy’ e i ‘Sogni elettrici di Ukiyo’. Reinterpretazioni e accelerazioni”, in Arte e intelligenza artificiale. Be my GAN, ed. by A. Barale, Milano: Jaca Book, 2020, 166–193.
Ridler, A., “Set di dati e decadenza. Fall of the house of Usher”, in Arte e intelligenza artificiale. Be my GAN, ed. by A. Barale, Milano: Jaca Book, 2020, 110–127.
Rosa, H., Alienation and acceleration. Towards a critical theory of late-modern temporality, Malmö-Aarhus: NSU Press, 2010.
Rosa, H., The uncontrollability of the world, transl. by J. C. Wagner, Cambridge-Medford: Polity Press, 2020 (2018).
Schwitzgebel, E., Schwitzgebel, D., Strasser, A., “Creating a large language model of a philosopher”, arXiv: 2302.01339, 2023.
Steiner, S., “Art. Brought to you by creative machines”, Philosophy & Technology 30/3: 267–284, 2017.
Strasser, A., Crosby, M., Schwitzgebel, E., “How far can we get in creating a digital replica of a philosopher?”, in Social robots in social institutions. Proceedings of “Robophilosophy 2022”, ed. by R. Hakli, P. Mäkelä and J. Seibt, Amsterdam: IOS Press, 2023, 371–380.
Tatarkiewicz, W., A history of six ideas. An essay in aesthetics, The Hague: Nijhoff, 1980 (1975).
Taylor, G. D., When the machine made art. The troubled history of computer art, New York: Bloomsbury, 2014.
Terzidis, K., Fabrocini, F., Lee, H., “Unintentional intentionality. Art and design in the age of artificial intelligence”, AI & Society 38/4: 1715–1724, 2023.
Thompson, E., Mind in life. Biology, phenomenology, and the sciences of mind, Cambridge: Harvard University Press, 2007.
Turing, A. M., “Computing machinery and intelligence”, Mind 59: 433–460, 1950.
Varela, F. J., Thompson, E., Rosch, E., The embodied mind. Cognitive science and human experience, Cambridge: MIT Press, 1991.
Weick, K., Sutcliffe, K. M., Obstfeld, D., “Organizing and the process of sensemaking”, Organization Science 16/4: 409–421, 2005.
Zangwill, N., “Groundrules in the philosophy of art”, Philosophy 70: 533–544, 1995.
Zerubavel, E., Hidden rhythms. Schedules and calendars in social life, Berkeley-Los Angeles-London: University of California Press, 1985.

5 SHOULD WE HAVE THE RIGHT NOT TO BE PREDICTED?

5.1  Prediction taken to the extreme from literal death to metaphorical death

According to recent empirical research, which is introduced by its authors as “the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events” (Gigerenzer and Garcia-Retamero, 2017: 179), most of us do not want to know what will happen in the future. Specifically, participants were asked ten questions about future events. Five events were negative: would you want to know today, first, when your partner will die, second, from what cause, third, when you will die, fourth, from what cause and, fifth, if you and your partner will divorce? Five events were positive: would you want to know today, first, if there will be life after death, second, the sex of your child before birth, third, what you will get for Christmas, fourth, the result of a recorded match before watching it and, fifth, the result of an authenticity test of an expensive sapphire you bought abroad? Not surprisingly, most participants, i.e. 88.3%, did not want to know their future in the case of negative events. Surprisingly, most participants, i.e. 56.4%, did not want to know their future in the case of positive events. As far as the questions on which we focus are concerned, 87.7% of the participants answered that they do not want to know today when they will die and 87.3% of the participants answered that they do not want to know today from what cause. Conversely, 4.2% of the participants answered that they want to know today when they will die and 6.3% of the participants answered that they want to know today from what cause.1

Again, the following question arises: if most of us do not want to know today when we will die and from what cause, why do we increasingly
design and use the most sophisticated technologies to predict our future as single individuals to the point that we even predict more and more accurately when we will die and from what cause? Interestingly enough, according to the authors of the empirical research described above, who are psychologists, the results may be explained as follows. “This high prevalence [of participants who do not want to know their future] is difficult to reconcile with theories that postulate that people have a general need for certainty” (Gigerenzer and Garcia-Retamero, 2017: 187), which seems to be surpassed by good reasons not to want to know their future, specifically the following psychological good reasons: “By declining the powers that made Cassandra famous, one can forego the suffering that knowing the future may cause, avoid regret, and also maintain the enjoyment of suspense that pleasurable events provide” (Gigerenzer and Garcia-Retamero, 2017: 195). Thus, it seems that we balance a variety of psychological needs. On the one hand, we need “certainty”, which means that we want to know our future through predictive technologies. For instance, we may well want to know whether it will rain on the weekend: we will walk in the mountains if it does not rain and we will go to the movies if it rains. We can make the best decision and act accordingly if we rely on knowledge and prediction. On the other hand, we need to feel “enjoyment of suspense” and not to feel “suffering […] [and] regret”, which means that we do not want to know our future through predictive technologies. For instance, we may well not want to know whether a surprise party to celebrate our birthday is being organised by our loved ones: we will feel “enjoyment of suspense” if it is organised by our loved ones and we have no expectations and we will not feel “suffering […] [and] regret” if it is not organised by our loved ones and we have no expectations. We can make the best decision and act accordingly if we do not rely on knowledge and prediction. Yet, my task as a philosopher is to try to further understand the exceedingly complex (and ubiquitous) issue of the prediction of the future: from a philosophical perspective, what are our good reasons to distinguish between cases in which it may be wiser to be predicted, especially as single individuals (and why), and cases in which it may be wiser not to be predicted, especially as single individuals (and why)?

Reflecting upon cases taken to the extreme is a promising way to understand the meaning of countless analogous cases that are actually balanced, but at least potentially extreme. We may use the case of the prediction of death as the extremization of countless cases of prediction as single individuals. Death is one of the most critical issues of Western culture, especially in neoliberal societies, in which aging and death are almost unspeakable failures, as we have seen.2 Yet, we even predict more and more accurately when we will die and from what cause. According to the argument I started introducing in Chapter 2, the reason why we take
prediction to the extreme is that we want to sabotage our autonomy – more precisely, we want to trade our autonomy for prediction’s technological automation, which is our primary way to get rid of the unbearable burden of what paralyses us, starting with death, whose automation unburdens us from the unbearable burden of autonomously working on our future identity and life, whose success or failure will dramatically prove our success or failure as single individuals.

Philosophers happen to address the issue of the prediction of one’s future in general and death in particular. For instance, according to Nietzsche, one should not predict one’s future and death:

My thoughts […] should show me where I stand, but they should not betray to me where I am going. I love ignorance of the future and do not want to perish of impatience and premature tasting of things promised. (Nietzsche, 1882: 162)

Specifically, as for our future,

[…] this will to truth, to ‘truth at any price’, this youthful madness in the love of truth: we are too experienced, too serious, too jovial, too burned, too deep for that […]. Today we consider it a matter of decency not to wish to see everything naked, to be present everywhere, to understand and ‘know’ everything. […] One should have more respect for the bashfulness with which nature has hidden behind riddles and iridescent uncertainties. (Nietzsche, 1882: 8)

Nietzsche’s perspective is balanced by Rousseau’s perspective, according to which the prediction of one’s death may be greatly advantageous. One of his heroines proves herself in the following dialogue after the prediction of her imminent death: “ ‘As for preparing for death, Monsieur, that is done […]. Preparation for death is a good life’ […] ‘Madame, your death is as beautiful as your life’ ” (Rousseau, 1761: 587–588). It is worth quoting more extensively to grasp the meaning of the kind of great advantage the prediction of one’s death may offer:

the use she made of her last moments, her words, her sentiments, her soul, all that belongs to Julie alone. She did not live like other women: no one, so far as I know, has died as she did. […] Without attributing great importance to her illness, she foresaw that it would prevent her for some time from fulfilling her share of that same care, and instructed us all to divide up that share in addition to our own. She expatiated on all her
plans, on yours, on the means most apt to bring them to fruition, on the observations she had made and which could favour or thwart them, in short on everything that would enable us to compensate for her maternal functions for as long as she were obliged to suspend them. (Rousseau, 1761: 578)

The balance between the two perspectives is instructive, in that it shows that, even though we identify several cases in which the prediction of one’s death is greatly disadvantageous, there are cases in which it is greatly advantageous. And the wisest thing we can do is to try to identify them. But, according to Rousseau, they are exceptions (“all that belongs to Julie alone. […] no one […] has died as she did”). Conversely, Nietzsche universalises his perspective, which moves from the use of the first person singular (“I love ignorance of the future”) to the use of the pronoun “one” that extends to any human being (“One should have more respect for the bashfulness with which nature has hidden behind riddles and iridescent uncertainties”). Thus, there is an instructive convergence between the two philosophers’ perspectives. And, if we trust them and their convergence, we may start to argue as follows: if it is true that the current obsession with technological prediction exponentially enters all aspects of our individual lives, it is also true that the cradle of Western culture and its historical developments, even from the Enlightenment Rousseau to the eccentric Nietzsche, warn us about the great disadvantages of the extremization of the prediction of our future as single individuals. There are also great advantages, but they are exceptions.

Recent studies seem to confirm the insights offered by Western culture, from scripture to mythology to philosophy. First, it is worth noting that the case told by Rousseau is exceptional for a further reason: death is predicted when one is sick and in one’s near future, and not when one is healthy and in one’s far future. As far as recent studies are concerned, Robins Wahlin (2007) and Pirracchio et al. (2015), for instance, confirm that the prediction of one’s death may be more advantageous when it is in one’s near future. Also, Gaille et al. (2020), for instance, confirm that, in the case of an acutely ill patient with a so-called fatal prognosis,

having a robust predictor of fatal outcome would […] help shorten the length of stay in the ICU, avoid pointless suffering for the patient and allow more time for next-of-kin support. It would also make it possible to better allocate resources between therapeutic care for patients who are more likely to survive and palliative care for those with a very reliable predictor of impending death. (Gaille et al., 2020: 9)
But the prediction of one’s death when one is healthy and in one’s far future may be dramatic. Again, we may require a certain ignorance of the future in order to be able to live. Having one’s forthcoming death announced completely breaks with this dynamic and, as such, any framework for disclosure should take into account the time and support needed to recalibrate one’s choice between ignorance and awareness. (Gaille et al., 2020: 9. See also Maxwell, 2022)

The authors especially focus on the social reasons why the prediction of one’s death may be dramatic, starting with public and private health systems, public and private pensions and life and health insurance policies. In a society where one’s death is not predicted (except when it is imminent),

its unpredictable nature […] is a strong incentive for one’s willingness to contribute to the pool and risk-sharing systems, which entail both the prospect of obtaining less than contributed (for instance in retirement benefits) and the counterbalancing insurance against one’s own longevity risk (risk of outliving one’s savings). (Gaille et al., 2020: 10)

Conversely, in a society where one’s death is predicted, “should individuals with a shorter remaining lifespan be authorized to withdraw from retirement systems that share the longevity risk?” (Gaille et al., 2020: 10). According to the authors, “[t]he magnitude of such challenges would be vastly expanded in the presence of individualised death predictors. The solidarity principles of pensions and long-term care systems could be jeopardised, or at least redefined” (Gaille et al., 2020: 11).

If we move from a social perspective to a philosophical perspective, we should try to translate the words quoted above, specifically “to recalibrate one’s choice between ignorance and awareness”, by moving from an empirical scenario to a transcendental scenario. On the one hand, speaking of an empirical scenario means speaking not only of society but also of regulation. On the other hand, speaking of a transcendental scenario, which is my task as a philosopher, means speaking of ideal criteria that can serve as guides for our decisions and actions. Indeed, even regulation is not enough when it comes “to recalibrat[ing] one’s choice between ignorance and awareness”, in that our decisions and actions far exceed regulation. For instance, we may happen to get into the “method of estimating biological age […] [, in that] the calculator to do it is free online”, as we have seen in Chapter 2. In addition to legal aspects that may be perfectly regulated, can we critically distinguish if it is a case in which it may be wiser to be predicted as single individuals (and
why) or if it is a case in which it may be wiser not to be predicted as single individuals (and why)? A kind of right not to be predicted may emerge as an unprecedented right to address in the present time, in addition to other kinds of rights that emerging technologies raise as critical issues, starting with the right to be forgotten. But even the right not to be predicted is severely undermined if regulation does not pair with critical thinking. Thus, as far as the current obsession with technological prediction is concerned, we may say that the task of philosophy is to work on ideal criteria that can serve as guides for our decisions and actions whenever we need to understand if the prediction of death, as the extremization of countless analogous cases, may metaphorically mean our death even though it does not literally mean it. Being metaphorically dead means what has emerged above: predicting our future to the point that we trade its almost endless possibilities for its technological automation – which means that we severely undermine both our sensemaking, as nothing less than the autonomous shaping of our identity, and our present.

What ideal criteria may serve as guides for our decisions and actions if we do not want to fall into the kind of metaphorical death described above? What we have learned from the two thought experiments developed in Chapters 3 and 4 is that the reason why the prediction of impactful and life-changing events may severely undermine our future (and our present) is precisely that, if we use the most reliable tools we have at our disposal (from Phoebus to the three witches to the most sophisticated technologies), prediction makes our future less and less open and more and more automated – which means that prediction makes our future less and less uncertain and more and more certain. Paradoxically enough, Oedipus’ and Macbeth’s desperate attempts are precisely to move from uncertainty to certainty – but it is precisely their desperate attempt to escape from uncertainty that causes their tragedy. Indeed, Oedipus and Macbeth think of certainty underpinning extreme action and, finally, tragedy as more attractive than uncertainty underpinning no action, i.e. total inaction, which may be thought of as the worst unspeakable failure in Western culture (especially in neoliberal societies, as we have seen). More precisely, what we have learned from Oedipus and Macbeth is that we may happen to experience what follows:

1 The first logical step is that we are desperately stuck in a state of inaction, in that we do not know what to do. And we think of it as frustrating, shameful and demeaning, in that we cannot make sense of our identity when it comes to our expectations and others’ expectations.

2 The second logical step is that we think of prediction as our last resort to know what to do – we think of prediction as our last resort to move
from uncertainty to certainty and, consequently, from inaction to action. For instance, Oedipus desperately needs to move from unpreparedness to preparedness and Macbeth desperately needs to move from fear, and even mental imbalance, to being in all respects the king. And we, together with Oedipus and Macbeth, desperately need to live up to the kind of performance that is demanded of us when it comes to our expectations and others’ expectations.

3 The third logical step is that, if we use the most reliable tools we have at our disposal (from Phoebus to the three witches to the most sophisticated technologies), prediction drives and, thus, reactivates our decisions and actions through their automation. More precisely, we think of prediction as certainty. And we think of certainty as what makes us move from inaction to action – which means that we think of uncertainty as what overwhelms our capabilities of decision and action to the point that we cannot live up to the kind of performance that is demanded of us when it comes to our expectations and others’ expectations.

4 The fourth logical step is that certainty causes our decisions and actions as its automated effects – which means that, even though our decisions and actions prove that we are literally alive, their automation proves that we are metaphorically dead. Indeed, prediction means certainty and certainty means automation. Finally, automation means that we severely undermine both our sensemaking, as nothing less than the autonomous shaping of our identity, and our present – automation means that we severely undermine our future’s almost endless possibilities, in that we automate it whenever we are desperately stuck in a state of inaction, from when we do not know what to do to when we cannot live up to the extremely high expectations of overscheduling, overdoing and overperforming.

Thus, what ideal criteria may serve as guides for our decisions and actions if we do not want to be literally alive and metaphorically dead? Two primary ideal criteria seem to emerge:

1 The first ideal criterion is that we should learn to think of uncertainty not only as the most complex challenge of the present time (as we have seen in Chapter 2) but also, and especially, as one of the best opportunities of the present time – uncertainty may even be a value.

2 The second ideal criterion is that we should learn to think of certainty, specifically when it is the output of the most sophisticated predictive technologies, as follows. First, if the prediction happens to be true, we should learn to think of certainty as what may undermine our life, as we have seen – certainty may even be a disvalue. Second, and more importantly, no matter if the prediction happens to be true or false, we should learn to think of certainty as ideally uncertain – we should learn to
think of whatever output of the most sophisticated predictive technologies as nothing more than highly probable, especially if there is a significant gap between present circumstances and future, i.e. predicted, circumstances.

Let us reflect upon the meanings of the two primary ideal criteria described above by using, again, the case of the prediction of death as the extremization of countless cases of prediction as single individuals. According to the first ideal criterion, we should (also) think of uncertainty as a value. In the case of the prediction of death, it means that we should ask ourselves the following kinds of questions. First, what kind of human being do we ideally want to become as the future evolution of our present identity? For instance, we may imagine ourselves in ten years and think of who we ideally want to become when it comes to all aspects of our individual lives, from work to leisure, from family to friends, from intellectual stature to moral stature, etc. Second, if we know that we will die of a heart attack in five years, can we still imagine who we ideally want to become? Third, and can we still have the strength to work on it day by day? If we answer no to at least one of the last two questions, we should seriously consider refusing to know (and, possibly, to be predicted at all). If we move from the extreme case of the prediction of death to other cases, it is not hard to ask ourselves analogous kinds of questions. From health to disease, from attitudes as students to attitudes as professionals, from insurance to mortgage, etc., we can always apply variants of the three questions asked above. Their ideal criterion does not vary: what is key is the relationship between the kind of human being we ideally want to become as the future evolution of our present identity, on the one hand, and our capability both to imagine it and to have the strength to work on it day by day in the case of prediction, on the other hand – again, what is key is our autonomy, specifically our autonomous making sense of our identity and life. Saying that uncertainty may even be a value means saying that it can give possibilities that certainty can take away. The latter is a matter of drastically decreasing possibilities. Conversely, the former is a matter of drastically increasing possibilities – and, if it is true that almost endless possibilities may mean one of our toughest challenges, it is also true that there is hardly anything better than them when it comes to autonomously making sense of our identity and life and, consequently, being both literally and metaphorically alive.

According to the second ideal criterion, we should (also), first, think of certainty as a disvalue and, second, and more importantly, think of whatever output of the most sophisticated predictive technologies as nothing more than highly probable, especially if there is a significant gap between present circumstances and future, i.e. predicted, circumstances. In the case of the prediction of death, it means that, first, we should ask ourselves the first question of the case of the first ideal criterion: what kind of human being do
we ideally want to become as the future evolution of our present identity? Again, we may imagine ourselves in ten years and think of who we ideally want to become when it comes to all aspects of our individual lives, from work to leisure, from family to friends, from intellectual stature to moral stature, etc. Second, we should exercise our autonomy as “the property of the will by which it is a law to itself”, as we have seen in Chapter 3. Specifically, we should give ourselves the following “law”: we should think of the prediction of death as nothing more than highly probable, no matter if, according to it, we will die of a heart attack in five years or we will die of old age in fifty years. For instance, we should think that, even though our death is predicted in five years, several circumstances may arise within the next five years that may change our future, from totally changing our lifestyle and rigorously undergoing countless preventive therapies to therapies’ further developments. And we should think that, even though our death is predicted in fifty years, several circumstances may arise within the next fifty years that may change our future, from other kinds of diseases that cannot be predicted to accidents of various kinds. In both cases, the key reason why we should give ourselves the “law” to think of the prediction of death as nothing more than highly probable is the following: we should ensure that, from now on, first, we can still imagine who we ideally want to become and, second, we can still have the strength to work on it day by day. But the exercise of our autonomy is always challenging. And, if we think that we still need exercise to act according to the “law” described above, again, we should seriously consider refusing to know (and, possibly, to be predicted at all). And, again, if we move from the extreme case of the prediction of death to other cases, it is not hard to give ourselves an analogous kind of “law”. From health to disease, from attitudes as students to attitudes as professionals, from insurance to mortgage, etc., we can always apply variants of the “law” described above. Its ideal criterion does not vary: again, what is key is the relationship between the kind of human being we ideally want to become as the future evolution of our present identity, on the one hand, and our capability both to imagine it and to have the strength to work on it day by day in the case of prediction, on the other hand – even more explicitly than in the case of the first ideal criterion, what is key is our autonomy, specifically our autonomous making sense of our identity and life.

Yet, something important seems to emerge in the case of the second ideal criterion more than in the case of the first ideal criterion: the need to move from an individual dimension to a social dimension. The individual dimension is crucial. Indeed, even national and supranational laws are severely undermined if they do not pair with critical thinking, specifically single individuals’ critical thinking – even national and supranational laws are severely undermined if they do not pair with single individuals’ autonomy as their capability to give themselves “law[s]”. But the social and, finally,
political dimension is also crucial. Indeed, what if the prediction of death is used as absolute certainty by public and private bodies that have a huge impact on all aspects of our individual lives? Several public and private bodies use predictive technologies, as we have seen in Chapter 2. But the socio-political scenario of their current use is characterised by the following axiom, which is sometimes explicit and sometimes implicit: there is no absolute certainty, at least in most cases. Thus, the risk of falling into the absolutisation of prediction is not there. But the axiom goes further: there will be absolute certainty as what predictive technologies’ further developments will obtain. We may say that, according to the current socio-political scenario, absolute certainty is the dream, and not the nightmare. Again, according to the current socio-political scenario, uncertainty is the nightmare, and not the dream. But what if we imagine somehow overturning the current socio-political scenario, specifically its axiom? It is not hard to imagine that, even tomorrow, public and private bodies may exploit the dream of absolute certainty by saying that it is not a dream: it is reality – which is precisely what public and private bodies may want to say for a variety of reasons, no matter what predictive technologies’ further developments can actually obtain and cannot actually obtain. Two sub-scenarios are possible. In the first sub-scenario, we may end up believing that absolute certainty is reality. Specifically, we may end up believing that, whatever predictive technologies’ outputs say about us and our future, it is absolutely certain. The consequence is that our autonomy is severely undermined: if our future is not open, but drastically narrow, we cannot ensure that, from now on, first, we can still imagine who we ideally want to become and, second, we can still have the strength to work on it day by day. In the second sub-scenario, we may not end up believing that absolute certainty is reality. But all aspects of our individual lives are hugely impacted by public and private bodies according to which predictive technologies’ outputs are absolutely certain. For instance, if the latter predict that we will die of a heart attack in five years, the former act accordingly, from our public and private health systems to our public and private pensions to our life and health insurance policies. The consequence is that there is a sense in which, again, our autonomy is severely undermined: even though we may still imagine who we ideally want to become and have the strength to work on it day by day, we may no longer have the socio-political conditions for doing it. From being open to being drastically narrow, our future is both a matter of exercising our autonomy as single individuals and a matter of the socio-political conditions for doing it. Whatever the case is, the task of philosophy has been for millennia to work on what can make both possible. And, if it is true that the two primary ideal criteria I have proposed to make them possible are based on imagination, in that their first logical step is the question of what kind of human being we ideally want to become as the future evolution of our present identity, it is also true that philosophy offers
us a powerful tool not only to imagine our far future but also, and especially, to act accordingly: wisdom.

5.2  The wisdom not to be predicted

In Chapter 1, we have seen that the history of the notion of rationality in Western culture is not as univocal as it seems. Even though the notion of logos, which literally means “computation, reckoning”, rises century by century as the strictest form of rationality humans have at their disposal, other forms of rationality are developed, starting with the notion of metis, which literally means “wisdom, skill, craft”. The rise of logos pairs with the fall of the other forms of rationality, in that it is thought of as the most powerful tool humans have at their disposal to obtain what they seem to want most: knowledge in general and prediction in particular. It is no coincidence that, according to Galilei as the founder of the modern scientific method, logos is the most powerful tool humans have at their disposal to obtain what we have seen in the previous section: not only knowledge and prediction but also, and especially, “absolute certainty”. On the one hand, there is logos, which, as “computation, reckoning”, meaningfully characterises both humans and technology. On the other hand, there is metis as “wisdom, skill, craft”, which means, interestingly enough, that humans should pursue not only knowledge and prediction but also ignorance, as we have seen in Chapter 1. Thus, the history of the notion of wisdom in Western culture is exceedingly instructive for us: wisdom is not only what can make us imagine our far future and act accordingly but also what can make us find the right balance between knowledge, prediction and even “absolute certainty”, on the one hand, and ignorance, on the other hand.

In Greek philosophy, wisdom is defined by Plato as self-awareness of lack of knowledge (see Plat. Apol. 20 e–23 c) and by Aristotle as the capability to move from self-awareness to its practical use when it comes to acting (see Arist. Eth. Nic. 1141 b). In Latin philosophy, wisdom is developed by Seneca as the capability to be resilient to challenging obstacles to the point that one can make a virtue of necessity (see Sen. Const. III 5). In modern philosophy, wisdom emerges as a kind of practice-oriented use of knowledge, especially in Descartes’ work (see Descartes, 1647: AT, IXB, 2; 1649: AT, XI, 488; 1684: AT, X, 361) and in Fichte’s work (see Fichte, 1794–1795). In contemporary philosophy, wisdom also emerges as a kind of practice-oriented use of knowledge (see Kekes, 1983 and 2020; Maxwell, 1984; Nozick, 1989; Lehrer et al., 1996; Zagzebski, 1996; Ryan, 1999 and 2012; Tiberius, 2008; Whitcomb, 2011; Vallor, 2017).3 Two contemporary definitions of wisdom are worth quoting. The first definition is offered by Kekes, according to whom wisdom is “the evaluative attitude […] [that] is personal, not theoretical; anthropocentric, not metaphysical; context-dependent, not universal; and
humanistic, not scientific” (Kekes, 2020).4 The second definition, which is offered by Nozick, is worth quoting more extensively:

What a wise person needs to know and understand constitutes a varied list: the most important goals and values of life – the ultimate goal, if there is one; what means will reach these goals without too great a cost; what kinds of dangers threaten the achieving of these goals; how to recognize and avoid or minimise these dangers; what different types of human beings are like in their actions and motives (as this presents dangers or opportunities); what is not possible or feasible to achieve (or avoid); how to tell what is appropriate when; knowing when certain goals are sufficiently achieved; what limitations are unavoidable and how to accept them; how to improve oneself and one’s relationships with others or society; knowing what the true and unapparent value of various things is; when to take a long-term view; knowing the variety and obduracy of facts, institutions, and human nature; understanding what one’s real motives are; how to cope and deal with the major tragedies and dilemmas of life, and with the major good things too. (Nozick, 1989: 269)

On the one hand, Kekes especially stresses a kind of complementarity between wisdom (as “personal, […] anthropocentric, […] context-dependent, […] and humanistic”) and logos (as “theoretical; […] metaphysical; […] universal; and […] scientific”). On the other hand, Nozick especially stresses two characteristics of the “wise person”: first, the capability of foresight (starting with an understanding of “the ultimate goal […] [and] a long-term view”) and, second, the capability to make a virtue of necessity both when it comes to adapting to reality more than adapting reality to one (starting with recognising “what is not possible or feasible to achieve […]; what limitations are unavoidable and how to accept them; […] [and the] obduracy of facts, institutions, and human nature”) and when it comes to facing dangers more than facing opportunities (starting with recognising “cost[s]; […] kinds of dangers […] [and] the major tragedies and dilemmas of life”, in addition to “limitations […] [and the] obduracy of facts, institutions, and human nature”). We may summarise the primary characteristics of wisdom as follows:

1 From a theoretical perspective, wisdom means the capability not only to recognise criticalities (from Plato’s lack of knowledge to Seneca’s challenging obstacles to Nozick’s reality and dangers) but also to make a virtue of necessity by imagining them in a scenario that is far more extended both synchronically and diachronically – which means the capability of foresight.
2 From a practical perspective, wisdom means the capability to translate the kind of knowledge described above, which is complementary to logos, into its practice-​oriented use (from ancient philosophy to modern philosophy to contemporary philosophy) –​which means the capability to act (accordingly). If we consider, on the one hand, the reason why we take prediction to the extreme and, on the other hand, the need to critically distinguish between cases in which we should rely on prediction from cases in which we should not rely on prediction, wisdom emerges as a promising tool. On the one hand, the reason why we take prediction to the extreme is that we want to sabotage our autonomy, as we have seen. More precisely, we want to trade our autonomy for prediction’s technological automation, which is our primary way to get rid of the unbearable burden of what paralyses us. It is not hard to understand the key role wisdom can play as the capability to act (accordingly to the kind of knowledge described above). For instance, whenever we are paralysed by the unbearable burden of autonomously working on our future identity and life, whose success or failure will dramatically prove our success or failure as single individuals, wisdom can support us as follows: 1 From a theoretical perspective, wisdom can mean our capability of foresight whenever we should imagine the future potential of our present criticalities. For instance, if we assess our present risk of failing as what it means in our near future, we may end up paralysed. But, if we assess it as what it means in our far future, we may end up having the strength to act by understanding that our present failure may mean our future evolution, in that failures may be important life lessons. 2 From a practical perspective, wisdom can mean our capability to act (accordingly) –​and to act accordingly means to act autonomously, in that our action relies on our autonomous understanding and strength, and not on prediction’s technological automation. On the other hand, we need to critically distinguish between cases in which we should rely on prediction from cases in which we should not rely on prediction, as we have seen. Again, it is not hard to understand the key role wisdom can play as the capability to act (accordingly to the kind of knowledge described above) whenever we are paralysed: 1 From a theoretical perspective, wisdom can mean our capability of foresight whenever we should imagine the answer to the questions asked above: what kind of human being do we ideally want to become as the


future evolution of our present identity? If we know our future, can we still imagine who we ideally want to become? And can we still have the strength to work on it day by day? If, after our foresight exercise, we answer no to at least one of the last two questions, we should consider it a case in which we should not rely on prediction. Conversely, if, after our foresight exercise, we answer yes to both of the last two questions, as in the case of Rousseau's heroine, we should consider it a case in which we should rely on prediction (the case of Rousseau's heroine is exceptional, as we have seen, but a variety of analogous cases may arise whenever we can imagine our autonomous understanding and strength to act as even powered, and not depowered, by our knowledge of our future).

2 From a practical perspective, wisdom can mean our capability to act (accordingly), which means, again, to act autonomously, in that our action relies on our autonomous understanding and strength, and not on prediction's technological automation.

Let us consider one last paradigmatic case. Interestingly enough, Reichenbach, as one of the logical empiricists who emphasise the primacy of logos, rewrites Hamlet's soliloquy as follows:

To be, or not to be – that is not a question but a tautology. I am not interested in empty statements. I want to know the truth of a synthetic statement: I want to know whether I shall be. Which means whether I shall have the courage to avenge my father? […] I have good evidence. The ghost was very conclusive in his arguments. But he is only a ghost. Does he exist? […] But that's it: nothing but indirect evidence. Am I allowed to believe what is only probable? Here is the point where I lack the courage. […] I am afraid of doing something on the basis of a mere probability. The logician tells me that a probability has no meaning for an individual case. How then can I act in this case? […] But what if I should start thinking after the deed and find out I should not have done it? […] [The logician] tells me that if something is probable I am allowed to make a posit and act as though it were true. In doing so I shall be right in the greater number of cases. But shall I be right in this case? No answer. The logician says: act. You will be right in the greater number of cases. […] There is no certainty. […] There I am, the eternal Hamlet. What does it help me to ask the logician, if all he tells me is to make posits? His advice confirms my doubts rather than giving me the courage I need for my action. Logic is not made for me. One has to have more courage than Hamlet to be always guided by logic.
(Reichenbach, 1951: 250–251)


I think that Hamlet's soliloquy rewritten by Reichenbach is moving – and that the reason why it is moving is that it is precisely the paradigm of what I have defined as the burden of sensemaking. Let us analyse the soliloquy by stressing its relationship with the key argument of my work:

1 Hamlet is also desperately stuck in a state of inaction ("I lack the courage. […] His advice confirms my doubts rather than giving me the courage I need for my action"), in that he does not know what to do ("How then can I act in this case?"). And he also thinks of it as frustrating, shameful and demeaning ("There I am, the eternal Hamlet. […] One has to have more courage than Hamlet"), in that he cannot make sense of his identity when it comes to his expectations and others' expectations ("I want to know whether I shall […] have the courage to avenge my father? […] But what if I should start thinking after the deed and find out I should not have done it?").

2 Hamlet also needs to move from uncertainty to certainty ("I want to know the truth of a synthetic statement") and, consequently, from inaction to action ("the courage I need for my action").

3 More precisely, Hamlet also needs to know his future ("I want to know whether I shall be. […] But shall I be right in this case?").

4 Yet, there is a big difference between us and Hamlet. We resort to prediction, which can drive and, thus, reactivate our decisions and actions through their automation, in that we think of prediction as certainty, which can make us move from inaction to action and live up to the kind of performance that is demanded of us. Conversely, Hamlet resorts to probability ("mere probability"), which cannot drive and, thus, reactivate his decisions and actions ("I am afraid of doing something"), in that he thinks of probability as uncertainty ("a mere probability […] is no certainty"), which cannot make him move from inaction to action ("a probability has no meaning for an individual case") and live up to the kind of performance that is demanded of him ("One has to have more courage than Hamlet").

5 Thus, there is a further big difference between us and Hamlet. Whenever we think of prediction as certainty, our decisions and actions are driven and, thus, reactivated as prediction's automated effects – which means that we severely undermine both our sensemaking and our future's almost endless possibilities. Conversely, whenever Hamlet thinks of probability as uncertainty ("a mere probability […] is no certainty"), his decisions and actions are not driven and, thus, reactivated ("I am afraid of doing something") – which means that, paradoxically enough, he is less metaphorically dead than us: even though he is also desperately stuck in a state of inaction ("I lack the courage"), his kind of inaction means


endless thoughts ("the eternal Hamlet […] [who] make[s] posits […] [and] doubts") that are far from our kind of automated decisions and actions.

6 Thus, Hamlet's endless thoughts mean his paradoxical advantage of not undermining both his sensemaking and his future's almost endless possibilities as severely as we undermine ours. Yet, he also suffers from being desperately stuck in a state of inaction, even though his kind of inaction is different from ours. Finally, what should he do?

The answer I propose is precisely that he should resort to wisdom as what can somehow double his paradoxical advantage. Indeed, wisdom can make him obtain two advantages: first, not to stop his endless thoughts (as a way to be metaphorically alive) and, second, to start acting (as a further way to be metaphorically alive). As far as the first advantage is concerned, wisdom does not stop Hamlet's endless thoughts because it means, from a theoretical perspective, the capability of foresight – and the capability of foresight is by definition a matter of endless thoughts, in that it is based on imagination (which does not give answers once and for all), and not on certainty (which gives answers once and for all). As far as the second advantage is concerned, wisdom starts Hamlet's action because it means, from a practical perspective, the capability to act according to the kind of foresight described above – and the capability to act is by definition a matter of moving from inaction to decisions and actions.

Yet, decisions and actions resulting from wisdom are different from decisions and actions resulting from certainty (whenever the most sophisticated predictive technologies' outputs are thought of as certainties). In the case of wisdom, the alleged absence of certainty leads us not to take extreme action, which is precisely what the alleged presence of certainty leads us to do (and which is precisely the cause of Macbeth's tragedy, as we have seen in Chapter 3). For instance, Hamlet may wisely decide not to take extreme action, but to act anyway, as follows. First, he may wisely nurture his self-mastery by increasing his carefulness when it comes to assessing the reliability of what he has at his disposal, starting with "good evidence […] [and] arguments" given to him by nothing more than a ghost when he is shocked by his father's death. Second, he may wisely nurture his capability of foresight by imagining the consequences of the extreme action of avenging his father by murdering his alleged murderer. Specifically, he may imagine its consequences in a scenario that is far more extended both synchronically and diachronically: what would happen to him, to his father's alleged murderer, to his mother and to Denmark in their far future? Third, and finally, he may wisely nurture his capability to act accordingly, which means not to take extreme action, in that its potential negative consequences far surpass its potential positive consequences (as Shakespeare masterfully shows).


Thus, wisdom starts Hamlet's action without stopping his endless thoughts, which is what leads him to wiser decisions and actions. Indeed, wisdom is a powerful tool to make a virtue of necessity by making uncertainty a value. And the key reason why it is a powerful tool is that it can give us reasons to act that can be good even without being true – wisdom can give us reasons to have "the courage […] [we] need for […] [our] action" that can be good even without being certain.

The paradigmatic case of Reichenbach's Hamlet may be illuminating when it comes to facing the challenges of the present time as characterised by unprecedented complexity and uncertainty. For instance, let us imagine, again, the scenario we have seen in Chapters 3 and 4, according to which we are both physicians and entrepreneurs, specifically owners of our medical clinic, who must decide, in one day, whom not to treat and whom to treat, whom to fire and whom to hire, and how to proceed with a financial operation. Wisdom can offer us an alternative to relying, even totally, on the predictions of medical diagnosis algorithms, hiring algorithms and financial algorithms – wisdom can offer us an alternative to our alienation from our identities as physicians, entrepreneurs, partners and parents whose decisions and actions are random effects of heteronomous causes.

The alternative wisdom can offer us may be described as follows. First, we may wisely nurture our self-mastery by increasing our carefulness when it comes to assessing the following two scenarios: is it better to fulfil the three tasks through our alienation from other kinds of tasks, which are epistemological, deontological, ethical and existential? Or is it better to fulfil, for instance, one task out of three through the prioritisation of the latter over the former? If we answer that the second scenario is better, we can proceed. Second, we may wisely nurture our capability of foresight by imagining the consequences of our actions in a scenario that is far more extended both synchronically and diachronically: what would happen to the kind of life our patients lead? What would happen to the kind of talent our employees have? What would happen to the kind of ethical and social impact our financial operation has? It does not mean that we cannot use algorithms – it means that we cannot use algorithms exclusively, in that they cannot give us what our capability of foresight can give us to answer the questions asked above: the profound reasons for our decisions and actions as based on our knowledge of the kind of life our patients lead (for instance, do they live alone?), the kind of talent our employees have (for instance, are they loyal?) and the kind of ethical and social impact our financial operation has (for instance, is it speculative?). Third, and finally, we may wisely nurture our capability to act accordingly, which may mean a variety of things. For instance, it may mean what follows. If we assess that it is better to prioritise the fulfilment of our epistemological, deontological, ethical and existential tasks over the fulfilment of the three decisions in one day, we may focus our action


on the first decision exclusively. Specifically, we may postpone the second decision and delegate the third decision. In any case, the alternative wisdom can offer us requires us to somehow prioritise quality over quantity: we may say that, the wiser we are, the more we prioritise our epistemological, deontological, ethical and existential tasks over overscheduling, overdoing and overperforming. And the key reason why wisdom requires us to somehow prioritise quality over quantity is that, if it is true that it especially means our capability of foresight, it is also true that it also means a notion of time that is not a matter of quantity (as the number of tasks fulfilled in one day), but a matter of quality (as the consequences of one task in a far more extended scenario, especially in the far future).

Finally, if it is true that wisdom requires us to somehow prioritise the future over the present, it is also true that being wise can hardly mean being willing to automate our future – being wise can hardly mean being willing to predict our future. Wisdom is especially a matter of two capabilities, as we have seen: the capability of foresight and the capability to act accordingly. Thus, whenever we are willing to be predicted to the point that we endanger our prerogative of acting against all odds, we are also willing to endanger our prerogative of being wise. Indeed, there is no wisdom without foresight. And there is no foresight without the openness of our future – thus, there is no wisdom without the openness of our future. A predicted future is an automated future. And automation does not require wisdom of us, but metaphorical death. Our metaphorical death may be more reassuring than the autonomous shaping of our future identity, which, again, requires wisdom as the capability of foresight and the capability to act accordingly. But the autonomous shaping of our future identity may not only make more sense but also be more pleasant than our metaphorical death. Indeed, making sense of our future identity, starting with the kind of human being we ideally want to become as the future evolution of our present identity, may mean being wise to the point that we even make a virtue of necessity by making our foresight, imagination, decision and action not only a burden to bear but also a pleasant burden to bear – burdens may even be pleasant to bear if they mean, and make us feel, that we are both literally and metaphorically alive.

Notes

1 8.2% of the participants answered that they do not know if they want to know today when they will die, and 6.4% of the participants answered that they do not know if they want to know today from what cause.
2 On death in general and on its unspeakability in particular, see Williams, 1973 and 1993; Nagel, 1979 and 1986; Feldman, 1992; Fischer, 1993 and 2020; Overall, 2003; Warren, 2004; Belshaw, 2009; Bradley, 2009; Luper, 2009 and 2014; Schumacher, 2010; Kagan, 2012; Bradley, Feldman and Johansson, 2013; Taylor, 2013; Cholbi, 2015; Kamm, 2020;


3 See Dalal, Intezari and Heitz, 2016, and Vallor, 2017, on the relationship between wisdom and technology. I worked on it in Chiodo, 2022.
4 See https://global.oup.com/academic/product/wisdom-9780197514047?cc=it&lang=en&# (accessed: October 28, 2023).

References

Aristotle, Nicomachean ethics, transl. by H. Rackham, Cambridge: Harvard University Press, 1934.
Belshaw, C., Annihilation. The sense and significance of death, Stocksfield: Acumen, 2009.
Bradley, B., Well-being and death, Oxford: Oxford University Press, 2009.
Bradley, B., Feldman, F., Johansson, J., eds., The Oxford handbook of philosophy of death, Oxford: Oxford University Press, 2013.
Chiodo, S., "Adding wisdom to computation. The task of philosophy today", Metaphilosophy 53/1: 70–84, 2022.
Cholbi, M., ed., Immortality and the philosophy of death, New York: Rowman & Littlefield, 2015.
Dalal, N., Intezari, A., Heitz, M., eds., Practical wisdom in the age of technology. Insights, issues, and questions for a new millennium, New York: Routledge, 2016.
Descartes, R., "Preface" to the French edition of Principles of philosophy, in The philosophical writings of Descartes (AT), transl. by J. Cottingham, R. Stoothoff and D. Murdoch, Cambridge: Cambridge University Press, 1985 (1647).
Descartes, R., "Passions of the soul", in The philosophical writings of Descartes (AT), transl. by J. Cottingham, R. Stoothoff and D. Murdoch, Cambridge: Cambridge University Press, 1985 (1649).
Descartes, R., "Rules for the direction of the mind", in The philosophical writings of Descartes (AT), transl. by J. Cottingham, R. Stoothoff and D. Murdoch, Cambridge: Cambridge University Press, 1985 (1684).
Feldman, F., Confrontations with the reaper, New York: Oxford University Press, 1992.
Fichte, J. G., "Foundations of the entire science of knowledge", in Science of knowledge (Wissenschaftslehre), ed. by P. Heath and J. Lachs, Cambridge: Cambridge University Press, 1982 (1794–1795), 61–331.
Fischer, J. M., ed., The metaphysics of death, Stanford: Stanford University Press, 1993.
Fischer, J. M., Death, immortality, and meaning in life, Oxford: Oxford University Press, 2020.
Gaille, M., et al., "Ethical and social implications of approaching death prediction in humans. When the biology of ageing meets existential issues", BMC Medical Ethics 64/21: 1–13, 2020.
Gigerenzer, G., Garcia-Retamero, R., "Cassandra's regret. The psychology of not wanting to know", Psychological Review 124/2: 179–196, 2017.
Kagan, S., Death, New Haven: Yale University Press, 2012.
Kamm, F. M., Almost over. Aging, dying, dead, New York: Oxford University Press, 2020.
Kekes, J., "Wisdom", American Philosophical Quarterly 20/3: 277–286, 1983.
Kekes, J., Wisdom. A humanistic conception, Oxford-New York: Oxford University Press, 2020.


Lehrer, K., et al., eds., Knowledge, teaching, and wisdom, Dordrecht: Kluwer Academic Publishers, 1996.
Luper, S., The philosophy of death, Cambridge: Cambridge University Press, 2009.
Luper, S., ed., The Cambridge companion to life and death, Cambridge: Cambridge University Press, 2014.
Maxwell, A., "How will death prediction technology impact life?", Now, October 31, 2022, no page numbers.
Maxwell, N., From knowledge to wisdom. A revolution for science and the humanities, Oxford: Basil Blackwell, 1984.
Nagel, T., Mortal questions, Cambridge: Cambridge University Press, 1979.
Nagel, T., The view from nowhere, Oxford: Oxford University Press, 1986.
Nietzsche, F., The gay science, ed. by B. Williams, Cambridge: Cambridge University Press, 2001 (1882).
Nozick, R., "What is wisdom and why do philosophers love it so?", in The examined life, New York: Touchstone Press, 1989, 267–278.
Overall, C., Aging, death, and human longevity. A philosophical inquiry, Berkeley: University of California Press, 2003.
Pirracchio, R., et al., "Mortality prediction in intensive care units with the super ICU learner algorithm (SICULA). A population-based study", The Lancet. Respiratory Medicine 3/1: 42–52, 2015.
Plato, Apology, ed. by H. N. Fowler, Cambridge-London: Harvard University Press-Heinemann, 1966.
Reichenbach, H., The rise of scientific philosophy, Berkeley: University of California Press, 1959 (1951).
Robins Wahlin, T. B., "To know or not to know. A review of behaviour and suicidal ideation in preclinical Huntington's disease", Patient Education and Counselling 65/3: 279–287, 2007.
Rousseau, J. J., Julie, or, the new Heloise. Letters of two lovers who live in a small town at the foot of the Alps, transl. by P. Stewart and J. Vaché, Hanover: University Press of New England, 1997 (1761).
Ryan, S., "What is wisdom?", Philosophical Studies 93: 119–139, 1999.
Ryan, S., "Wisdom, knowledge and rationality", Acta Analytica 27/2: 99–112, 2012.
Schumacher, B., Death and mortality in contemporary philosophy, Cambridge: Cambridge University Press, 2010.
Seneca, "To Serenus on the firmness of the wise man", in Moral essays, transl. by J. W. Basore, London: William Heinemann, 1928.
Taylor, J. S., ed., The metaphysics and ethics of death. New essays, Oxford: Oxford University Press, 2013.
Tiberius, V., The reflective life. Living wisely with our limits, Oxford: Oxford University Press, 2008.
Vallor, S., "AI and the automation of wisdom", in Philosophy and computing. Essays in epistemology, philosophy of mind, logic, and ethics, ed. by T. M. Powers, Cham: Springer, 2017, 161–178.
Warren, J., Facing death. Epicurus and his critics, Oxford: Oxford University Press, 2004.
Whitcomb, D., "Wisdom", in S. Bernecker and D. Pritchard, eds., Routledge companion to epistemology, London: Routledge, 2011, 95–105.


Williams, B., Problems of the self. Philosophical papers 1956–1972, Cambridge: Cambridge University Press, 1973.
Williams, B., Shame and necessity, Berkeley: University of California Press, 1993.
Zagzebski, L., Virtues of the mind. An inquiry into the nature of virtue and the ethical foundations of knowledge, Cambridge: Cambridge University Press, 1996.

6 CONCLUDING REMARKS

Predicted humans, individualism and their future

6.1  The present

In Chapter 3, I have argued that autonomy is the antidote to technological automation. Yet, I have also argued that there is a strict correlation between the former's rise and fall and the latter's rise. Indeed, if autonomy is taken to the extreme, starting with a kind of individualism according to which one should autonomously overschedule, overdo and overperform in constant competition with the others, it easily ends up falling. And, if autonomy falls, automation, specifically technological automation, easily ends up rising. Thus, the extremization of autonomy is strictly correlated to its replacement with technological automation – and the extremization of autonomy may be thought of as a kind of individualism.

The idea according to which humans as single individuals prevail over humans as society is one of the most typical characteristics of Western culture (see at least Heidegger, 1927; Hinchman, 1996; Lévinas, 1961). And it is even strengthened by the relationship between the notion of autonomy and the notion of individualism that characterises the last decades, when autonomy increasingly moves from a kind of universalisation as its ethical foundation, as in the case of Kant's categorical imperative (see Kant, 1785 and 1788), to a kind of particularisation that means a kind of individualism (see at least Dworkin, 1988; Frankfurt, 1988; Ekstrom, 1993; Bratman, 2006).

And speaking of individualism may also mean speaking of anarchism as its culmination, which opposes autonomy even from an etymological perspective. The former means "living under one's own laws", as we have seen in Chapter 3. Conversely, the latter means the "negation" (ἀν, transliterated as an) of what can "rule, govern, command" (ἄρχω, transliterated as archo),


from heteronomous laws to the kind of "one's own laws" that "autonomy" means (Liddell and Scott, 1940).1 Thus, anarchism means rulerlessness. And rulerlessness opposes autonomy – thus, anarchism opposes autonomy.

From a philosophical perspective, the relationship between individualism and anarchism as its culmination, together with their overturning of autonomy, is quite clear. When Feyerabend describes the epistemological anarchist, he writes that "the one thing he opposes positively and absolutely are universal standards, universal laws, universal ideas such as 'Truth', 'Reason', 'Justice', 'Love', and the behaviour they bring along" (Feyerabend, 1975: 189), in that "there is only one principle that can be defended under all circumstances and in all stages of human development. It is the principle: anything goes" (Feyerabend, 1975: 28). Clearly enough, Kant's universalisation as the ethical foundation of one's autonomy is overturned by a kind of anarchism that means a kind of individualism. According to Feyerabend, as in the case of several contemporary philosophers (see at least Rorty, 1982 and 1989), one's acting is individualistic ("anything goes") in the sense that it has no ethical foundation that is socially shared ("he opposes […] universal standards, universal laws, universal ideas"). Thus, one's acting is individualistic in the sense that it is anarchic – one's acting is individualistic in the sense that it even overturns autonomy both from an etymological perspective and from a philosophical perspective, in that there are no shared rules, or even any rules at all.

If we reflect upon the relationship between individualism and technological automation, it is quite clear that both of them are ways to overturn autonomy – both individualism and technological automation are ways to unburden us from the unbearable burden of autonomy. More precisely, individualism seems to play the following role, which is quite paradoxical. On the one hand, it causes the fall and, consequently, the overturning of autonomy by taking it to the extreme, which means by making it unbearable to the point that we trade it for technological automation (which has been one of the key arguments of my work). On the other hand, it causes the overturning of autonomy by replacing its ethical foundation as something that is socially shared with something to which it is far easier to live up, from rules that are not shared to no rules at all ("anything goes").

For instance, whenever we cannot live up to extremely high expectations in constant competition with the others, we have two ways out. On the one hand, we can resort to the technological automation of our future decisions and actions (which has been one of the key arguments of my work). On the other hand, a further shade of individualism emerges as something to which we can resort: we can say that "anything goes" as a kind of alibi that makes us escape from failure when it comes to making sense of our identity and life. Indeed, saying that "anything goes" means sabotaging the conditions under which living up to extremely high expectations in constant competition with the others


is possible. If there are no shared rules, or even any rules at all, there is no failure when it comes to making sense of our identity and life. First, there are no extremely high expectations, in that we can say, as our alibi, that the expectations to which we measure up are exclusively individual, and even idiosyncratic, and not socially shared. Again, "anything goes". And, if "anything goes", we escape from failure, especially as something that is socially ascertained. Second, there is no constant competition with the others, in that we can say, as our alibi, that the context to which we measure up is exclusively individual, and even idiosyncratic, and not socially shared. If there is no competition, there are no competitors other than us. And, if there are no competitors other than us, we escape from failure, especially as something that is socially ascertained. Thus, individualism seems to play a role that is quite paradoxical, in that it somehow doubles the overturning of autonomy – individualism seems to be both the starting point and the end point of the overturning of autonomy.

Individualism raises issues that I cannot further develop at the end of my work (which focuses on other issues, anyway), in that they require monographic works. Yet, I think that at least the following insight is necessary to close the circle. In Chapters 3 and 4, we have reflected upon the idea according to which technological prediction may be understood as our ready-to-use scapegoat. Speaking of a scapegoat means speaking of a way out. Conversely, when we have at least introduced the notion of individualism, we have proposed to understand it as our alibi, which is something more than a scapegoat. If the two metaphors make sense, the following insight emerges: technological prediction is a symptom of something more general, which is a kind of critical culmination of the individualistic paradigm – technological prediction is a way out of something more general of which the individualistic paradigm is both a cause and a double way out.

Indeed, speaking of an alibi means speaking of a double way out. The etymology of the word "alibi" is the Latin adverb alibi, which means "elsewhere" (Lewis and Short, 1879) as a stronger extraneousness. When we have a scapegoat, we belong to the context of the fault, but the fault is attributed to another element of the context. Conversely, when we have an alibi, we do not belong to the context of the fault: our extraneousness is stronger, and even doubled. But individualism is not only a double way out but also a cause. Indeed, the context of which individualism is a double way out is somehow caused by individualism itself, which is precisely what takes autonomy to the extreme to the point that we desperately try to find a way out of its unbearableness. Thus, a work focused on the relationship between technological prediction (and automation) and the burden of sensemaking requires a further work focused on the relationship between technological prediction (and automation) as a way out of the burden of sensemaking and individualism both as a way out and as a cause of their (critical) relationship.


As one further concluding remark, let us reflect upon a current case to try to move from the present to the future. The current case is described as "[t]he rise of the AI CEO":2 a human being "embraced AI technology to redefine entrepreneurship. With Chat GPT prompts, he founded a company from scratch called Aisthetic Apparel, where he delegates all key decisions to the AI language model. From creating a brand name, logo and designs to determining retail price points, pre-money valuation and minimum ticket size for investors, Chat GPT takes the lead". Specifically, the human being's request to the artificial intelligence is the following: "You are Hustle GPT, an entrepreneurial AI. You will be the CEO and I will be your executive assistant. You have $ 1.000 and 1 hour of my day, everyday. Make the most successful company possible".

We may think of a variety of ways in which individualism emerges together with its critical relationship with technological prediction (and automation) as a way out of the burden of sensemaking. The case of "the AI CEO" is analogous to the cases we have seen in Chapter 4, from philosophy to art. Again, technological prediction (and automation) is our (paradoxical) way out of one of our typical (and pleasant) ways to make sense of our life: being (entrepreneurially) creative. Again, we are willing to deprive ourselves of the infinitely pleasant experience of the shaping of sensemaking to trade it for technological prediction (and automation) as a total alienation from our pleasant creativity when it comes to pleasantly exercising our reflection, imagination, planning and decision-making "[f]rom creating a brand name, logo and designs to determining retail price points, pre-money valuation and minimum ticket size for investors". The artificial intelligence described above even decides what kind of product the company produces.

Individualism emerges in a variety of ways. Indeed, if it is true that technological prediction (and automation) means randomness (as we have seen in Chapters 3 and 4), it is also true that randomness pairs with "anything goes", which finally means that both the expectations and the context to which we measure up are exclusively individual, and even idiosyncratic, and not socially shared. The case of "the AI CEO" is individualistic to the point that the company is not only a matter of two elements, i.e. one human being and one artificial intelligence, but also, and especially, a matter of one human being who has no profound reasons for making decisions and acting: again, "anything goes", starting with the kind of product the company produces. Yet, a kind of reason seems to emerge: "Make the most successful company possible" (which is further emphasised in other interviews). The kind of reason that seems to emerge is even trivial in neoliberal societies: making money.

6.2  The future

If this insight makes sense, what may be grasped of the trajectory of Western culture? We may imagine at least two opposite scenarios as extreme cases among a variety of possibilities.


In the first case, which is a kind of worst case scenario, we may imagine that the critical culmination of the individualistic paradigm will happen in the future as the further extremization of what happens in the present. More precisely, we may imagine the following series of phenomena:

1 The unbearableness of autonomy will mean a further rise of technological automation as a further rise of technological prediction. On the one hand, public and private bodies will make more and more use of the most sophisticated technologies to predict our future as single individuals. More importantly, they will work on laws according to which the predictions of the most sophisticated technologies can be considered as certainties and used accordingly. On the other hand, we ourselves will believe that they are certainties. Thus, we ourselves will willingly use them both passively (when we are predicted by public and private bodies' technologies) and actively (when we are predicted by our own technologies).

2 The further rise of technological automation as the further rise of technological prediction will mean an increasing atrophy of our capabilities, starting with autonomy, which also means reflection, imagination, planning and decision-making. Thus, we will use a kind of "AI CEO" to lead our daily life, which will be driven by it minute by minute.

3 The increasing atrophy of our capabilities will mean an increasing individualism, in that, if it is true that our individual daily life will be driven by our individual "AI CEO", it is also true that we will not have anything to share with the others, from competitions (together with their burdens) to projects (together with their pleasures).

4 Finally, the increasing individualism will mean its further growth. On the one hand, a minority of us will exploit our being driven as an opportunity to take control of what leads us and make money. On the other hand, a majority of us will be exploited. Sometimes we will resist and sometimes we will surrender. But our surrender will surpass our resistance, in that the latter is far more challenging than the former in the case of atrophied capabilities, starting with autonomy.

In the second case, which is a kind of best case scenario, we may imagine that the critical culmination of the individualistic paradigm happens in the present: the future will not be its further extremization, but its overturning. More precisely, we may imagine the following series of phenomena:

1 We will use our autonomy to be wise, which means to exercise both our capability of foresight (which will make us imagine, for instance, the worst case scenario described above) and our capability to act accordingly.

2 To act accordingly will mean to decrease the conditions under which technological automation and prediction, together with individualism, are extremely attractive.


3 More precisely, our acting accordingly will have a variety of cultural shades, from philosophy to society to economy to politics. For instance, we will start proposing the following four ideas through the education of younger generations. First, the idea according to which living a good (and pleasant) life means being autonomous, and not automated, starting with creatively making sense of our identity and life – starting with making sense of our future. Second, the idea according to which living a good (and pleasant) life means sharing more than competing, which means a kind of re-hierarchisation of values. Sharing will not replace competition, which is promising in a variety of cases, starting with the drive to evolve both individually and socially. Conversely, sharing will be re-hierarchised by surpassing competition in most cases. Third, the idea according to which living a good (and pleasant) life means nurturing human relationships more than isolating ourselves (for instance, in a relationship between one human being and one artificial intelligence). Fourth, the idea according to which living a good (and pleasant) life means distributing resources more than making money by monopolising them (even from an individualistic perspective, in that, first, we may happen to be excluded from monopolies and, second, monopolies may happen to be overturned by the monopolised because of their extremely tough conditions, from stopping being exploitable clients to starting being rebels). Again, it means a kind of re-hierarchisation of values. Distributing resources will not replace making money, which is promising in a variety of cases, starting with enlightened entrepreneurship. Conversely, distributing resources will be re-hierarchised by surpassing making money in most cases.

Needless to say, the two opposite scenarios are not, and cannot be, predictions – conversely, they are exercises in wisdom, starting with the exercise in foresight. It is worth noting that an implicit criterion has been used in the two opposite scenarios: the idea according to which the best case scenario is not a matter of revolution, but a matter of evolution. Indeed, it is not a matter of replacements of values through revolutions, from sharing against competition to distributing resources against making money, but a matter of re-hierarchisations of values through evolutions. Surprisingly enough, even from an etymological perspective, revolution means a negative phenomenon and evolution means a positive phenomenon. On the one hand, the word "revolution", which derives from the Latin verb revolvere, literally means "to roll back; to unroll, unwind; to revolve, return" (Lewis and Short, 1879) and, again, something which "leads back" (Lewis and Short, 1879). On the other hand, the word "evolution", which derives from the Latin verb evolvere, overturns the meaning of the word "revolution" by replacing re with e: the word "evolution" literally means "to roll out, roll forth […], unfold" (Lewis and Short, 1879).


The etymologies of the two words offer us a further opportunity to exercise our wisdom. Indeed, there is a sense in which not only their etymologies but also their histories teach us that evolutions are more promising than revolutions: the latter imply something violent and sudden that the former do not imply – and, consequently, revolutions imply a kind of "roll[ing] back" that evolutions do not imply. Revolutions may be necessary from time to time, but their cultural and social acceptance and assimilation are critical, as with any violent and sudden phenomenon. Evolutions may seem not to be timely enough, but they can obtain what is necessary anyway (and what revolutions cannot obtain) – evolutions can obtain cultural and social acceptance and assimilation, which are necessary anyway to move from something potential to something actual. Thus, the reason why the etymologies of the two words offer us a further opportunity to exercise our wisdom is the following: we should especially imagine the impacts of our decisions and actions on the far future – which is a further way to work on the future evolution of the present time. Indeed, the present time is characterised by a focus on the near future whose narrowness can evolve into foresight by focusing on the far future.

What may evolution mean in the case of predicted humans? In addition to what we have seen in the case of the two opposite scenarios described above, evolution may mean that we should think of technological prediction itself as something to make evolve, and not something "to roll back", not only in that it is already ubiquitous but also, and especially, in that it is already a promising tool in several cases, starting with medical diagnosis algorithms (if they are one of physicians' tools, and not replacements of physicians' capabilities, starting with their wisdom itself). Again, our wisdom is key: if we focus on the far future, which means the potential synergy between autonomous humans, the most sophisticated predictive technologies and their ethical and social regulations, it is not hard to imagine what positive results we may obtain – more precisely, our wisdom can make us, first, imagine the way in which technological prediction may evolve (starting with being paired with ethical and social regulations) and, second, act accordingly (starting with working not only on the kind of human being we ideally want to become as the future evolution of our present identity, as we have seen in Chapters 3 and 5, but also on the kind of predictive technology we ideally want to design and use as its future evolution).

Finally, as a philosopher, one of my tasks is to stress that being predicted (and automated) to the point that we endanger our prerogative of acting against all odds also has an impact on our research itself. Whenever our research is automated to the point that its every logical step is driven by the hastiness of overscheduling, overdoing and overperforming in constant competition with the others, we endanger our prerogative of acting against all odds when it comes to being driven, conversely, by both illogical steps and


the calm of wisdom, at least from time to time – which can make us prioritise, again, the (potential) far future over the (actual) near future, at least from time to time. For instance, if we consider the ways in which Daguerre discovered photography, Wells discovered anaesthesia and Fleming discovered penicillin, together with several other analogous cases, it is not hard to understand what we could have lost if they had exclusively prioritised logical steps over illogical steps and the hastiness of overscheduling, overdoing and overperforming in constant competition with the others over the calm of wisdom. Indeed, evolution also results from the prerogative of acting against all odds – and the prerogative of acting against all odds is by definition opposed to the cause-and-effect relationship between technological prediction as the cause and technological automation as the effect, from the case of predicted and automated research to the case of predicted and automated humans.

Notes

1 I worked on its relationship with the notion of autonomy in Chiodo, 2020.
2 See, also for the following quotes, https://chaostoclarity.io/podcast/the-rise-of-the-ai-ceo-featuring-joao-ferrao-dos-santos/ (accessed: November 6, 2023).

References

Bratman, M. E., Structures of agency. Essays, Oxford: Oxford University Press, 2006.
Chiodo, S., Technology and anarchy. A reading of our era, Lanham-Boulder-New York-London: Lexington Books-The Rowman & Littlefield Publishing Group, 2020.
Dworkin, G., The theory and practice of autonomy, Cambridge: Cambridge University Press, 1988.
Ekstrom, L. W., "A coherence theory of autonomy", Philosophy and Phenomenological Research 53: 599–616, 1993.
Feyerabend, P. K., Against method. Outline of an anarchistic theory of knowledge, London: NLB, 1975.
Frankfurt, H. G., ed., The importance of what we care about, Cambridge: Cambridge University Press, 1988.
Heidegger, M., Being and time. A translation of Sein und Zeit, ed. by J. Stambaugh, Albany: State University of New York Press, 1996 (1927).
Hinchman, L., "Autonomy, individuality and self-determination", in What is Enlightenment? Eighteenth-century answers and twentieth-century questions, ed. by J. Schmidt, Berkeley: University of California Press, 1996, 488–516.
Kant, I., Groundwork of the metaphysics of morals, ed. by M. J. Gregor, Cambridge: Cambridge University Press, 1998 (1785).
Kant, I., Critique of practical reason, ed. by M. J. Gregor, Cambridge: Cambridge University Press, 1996 (1788).
Lévinas, E., Totality and infinity, ed. by A. Lingis, Pittsburgh: Duquesne University Press, 1969 (1961).
Lewis, C. T., Short, C., A Latin dictionary, Oxford: Clarendon Press, 1879.


Liddell, H. G., Scott, R., A Greek-English lexicon, revised by Sir H. Stuart Jones, Oxford: Clarendon Press, 1940.
Rorty, R., Consequences of pragmatism. Essays, 1972–1980, Minneapolis: University of Minnesota Press, 1982.
Rorty, R., Contingency, irony, and solidarity, Cambridge: Cambridge University Press, 1989.

INDEX

Note: Endnotes are indicated by the page number followed by “n” and the note number e.g., 103n3 refers to note 3 on page 103. absolute certainty 22, 115, 116 abstraction 22 Actaeon 12 Aeschylus 14, 16 aesthetic value 95 aging clocks 34, 42–​43, 70 AI art 94–​98, 100, 103nn5&7 algorithms 91, 92, 122; death clocks 43; generative adversarial networks (GANs) 94–​95; healthcare 36, 133; poetry 103n7; scapegoating 76, 80; sleep aids 67 alienation 91–​92, 102–​103, 103n3 anarchism 127–​128 Aristotle 21–​22; definition of wisdom 116 art see AI art artefact, shaping of 95, 103n3 Artemis 12, 13 artificial intelligence 8–​9; AI CEO 130, 131; art 103n5; digital twins 38; generate a portrait 4, 94–​98; generate philosophical answers 4, 94, 98–​102; growth of 34–​35, 37; randomness and 103n7 Athena 13–​14 authenticity, freedom and 69, 70 automation 3, 4, 6, 82n5, 127, 128, 129–​130; fall of autonomy and 65–​71; further rise of 131; Macbeth

and 71, 72, 76–​80, 81; Oedipus and 71–​76, 81; of sensemaking 94–​103; sleep and 48–​49; trade autonomy for 8, 27–​28, 46 autonomy 82n3, 93, 114–​116, 127–​129, 131; exercise of 45; fall of 65–​71, 74, 78; Kant and 82n4; Oedipus and 15; sensemaking and 27–​28; trade for automation 2–​3, 8, 46–​47, 72, 76, 78, 108, 118 Bacchae (Euripides) 12 Basis watch 49 biological age 34, 43, 48 biomarker 43 burnout 42–​43 calculation: biological age 42–​43, 48, 110; life expectancy 9, 21, 28 capabilities 13, 23–​24, 25; atrophying 50–​51, 131; sleep and death 48, 49; trade autonomy for automation 27; wisdom and 123 capability of prediction 16–​17, 88; Cassandra 14; Oedipus 14–​15; Prometheus 15–​16; Tiresias 13–​14 capability to act accordingly 18, 26, 27, 51, 107, 115–​116, 118–​119, 121, 122–​123, 131–​132; Hamlet 121


Cassandra 14, 107 Cassirer, E. 22 certainty 3–​4, 5–​6, 107; absolute 22, 115, 116; attempt to obtain 21, 22; epigenetic clocks and 42; Hamlet 119, 120, 121; Macbeth and Oedipus 73–​80, 111–​112; not total 41 chance 68–​69 Christie’s auction 4, 94 computation: logos and 21–​22, 24, 25, 116; optimisation and 66 computer art see AI art context of life 88, 97–​98 creativity 94–​95, 102 crime 35, 36–​37 Dante 66 data 25–​26; biological age 48; clinical trials 36; digital twins 37–​38; DNA methylation 43; generative adversarial networks (GANs) 94–​95; GPT-​3 98; growth of 34–​35, 37; sleep 48–​50 death 12–​13, 17–​18, 71–​81; life expectancy calculator 18–​28; metaphorical 106–​116, 123; sleep as metaphor of 70; unspeakable failure 46, 87, 107–​108 see also death clocks death clocks 42–​52 decision-​making 89–​92; against algorithms’ advice 92; capability of 25–​26; trading autonomy for automation 27 Dennett, D. 98–​99, 100–​102 Descartes, R. 19, 21, 22; definition of wisdom 116 digital replica of philosopher see Dennett, D. digital twins 37–​41, 53n6 Dionysus 12 DNA methylation 42–​43, 44 DNAm PhenoAge 43 dualism 10 Durkheim, É. 10 Edmond de Belamy (portrait) 94–​98 Einstein, A. 22 Eliade, M. 10 enactivism 87–​88 entrepreneurs 76, 91–​92, 122 epigenetic age 42–​44 epigenetic biomarker see DNAm PhenoAge epistemological absoluteness 11–​12, 13

ethics/​ethical: fear and inaction 76; human prerogatives 23–​24; individualism 127–​128; meanings of ignorance 20–​21; scapegoating 80, 90–​91; wisdom over certainty 121–​123 Euripides 12 evolution 6, 132–​133 exceptionality 4, 86, 89 Exodus 10–​12, 13 failure: aging and death as unspeakable 46, 70–​71, 87, 107–​108; effect of heteronomous causes 80; escape from 129; fear and 76; important life lessons 118; socially acceptable 90 Fichte, J. G., definition of wisdom 116 foresight 15–​16, 40, 117–​119, 121–​123, 131, 132, 133 Frankenstein (Shelley) 16, 17–​18 fraud detection 35 freedom: Dennett and 99–​100, 101; Kant and 69–​70, 82n4; rationality and 24 Freud, S. 10 Galilei, G. 22, 116 generative adversarial networks (GANs) 94–​95, 103n7 genes 26–​27 Genesis 10, 11–​12, 13 global crises 5, 20, 51, 98 God/​gods/​goddesses: of chance 68–​69; mythological 12–​16; sacred space of 10–​11 Goethe, J. W. von 66 GPT-​3 98–​99 Gray, T. 17 Hamlet (Shakespeare) 119–​121 health/​healthcare 35, 36, 38–​39, 45, 53n5; digital twins 39; insurance 110, 115; management 65–​66; tracking sleep 49–​50 Hesiod 16, 28nn9&11 heteronomous causes 74, 76, 78, 80, 81, 91, 122 high expectations 4, 6, 89–​90, 92, 112, 128–​129 Homer 66, 87, 93–​94 Horvath, S. 34, 42–​43 human transience 65–​66 Hume, D. 19, 21


ideal criteria 5–​6, 110–​116 idealisation 22 identity 23–​24; autonomous shaping of 97–​98, 111, 112, 113, 114, 118; autonomous to automated 3; future shaping of 88–​89, 92–​94, 123; human 16–​17, 50, 65–​66; making sense of 3, 5, 6, 75–​76, 79–​80, 90, 100–​102, 128–​129, 132 ignorance 17, 18, 20–​21, 24, 106, 110, 116; Nietzsche 108, 109 intention, artist’s 97, 103n7 Jaeggi, R. 92 Kant, I. 19–​20, 21, 22, 68–​70, 74, 78, 82nn3&4, 127, 128; metaphor of turnspit 93 Kekes, J., definition of wisdom 116–​117 Keynes, J. M. 51, 54n19 knowledge 18–​20, 116; can be dangerous 9–​10, 13, 17; limiting 11–​12; logos and 24–​25; and prediction 14, 21–​23; shift from humans 27; uncertainty and 54n19 Liddell, H. G. and Scott, R. 12, 21, 68–​69, 128 lifespan 65–​66 logos (λόγος) 21–​25, 116, 117, 118, 119 Macbeth, The tragedy of (Shakespeare) 3, 4, 17, 71, 72, 76–​80, 86, 88–​90, 111–​112, 121 machine learning, COVID-​19 vaccines 36 machine(s): AI as prediction machine 8–​9; human efficiency and optimisation 66; machine learning and vaccines 36 Massachusetts Institute of Technology 34 mattresses 49, 70 medicine: digital twins 38–​40, 53n6; vaccines 36 melancholy 95, 96, 97 metaphorical death 106–​116, 123 metis (μῆτις) 21, 22, 24, 116 mind, creativity of human 102 MIT Technology Review 34, 42 morality, Kant and autonomy 68–​70, 82nn3&4 mortality see death; death clocks Moses 10–​11

multitasking 86–​87, 89–​90 mythology 12–​18 necessity, virtue of 51–​52, 116–​117 neoliberal societies 4, 40, 46, 71, 87, 89, 107, 111, 130 neoliberalism 2, 27, 37, 44 Nietzsche, F. 108, 109 Nozick, R. definition of wisdom 117 object/​subject 23–​24, 91–​92 Odysseus 66, 87, 93–​94 Oedipus (Sophocles) 3, 4, 14–​15, 71–​76, 86, 88, 89–​90, 111–​112 omniscience 10, 11, 12, 17–​18, 25 optimisation 40, 47–​49, 51; efficiency and 65–​66 organisational studies 87–​88 Orpheus 12, 13 Otto, R. 10 overexcitation 78, 81, 90 overscheduling 4–​5, 6, 86–​87, 89–​90, 92, 93, 102, 112, 123, 133–​134 Ovid 24–​25 partners and parents 91, 92, 122 physicians 26, 76, 91, 92, 122, 133 Picasso, P. 95–​98 Plato 16, 18, 19, 21–​22; definition of wisdom 116, 117 poetry algorithm 103n7 Popper, K. 15 power: Basis watch 49; computing 34, 37, 38; of digital twins 39–​40; of epigenetic clocks 42–​44, 46; to replace autonomous decisions 47 predictive analytics (Siegel) 35 prerogative of acting against all odds 3, 6, 8, 28, 51, 72, 74, 75, 79, 123, 133–​134 probability 37, 38, 40, 41, 51, 54n19, 119, 120 profane 10–​11 Prometheus 15–​16, 24–​25, 28nn8, 9&11 public and private bodies 115, 131 quantified self 49–​50 randomness 69, 97–​98, 103n7, 130 rational(ity) 21–​22, 23–​24; irrational overestimation 45–​46, 47; logos as form of 21–​22, 116


re-​hierarchisation of values 6, 132 Reichenbach, H. 119 relativity 22 revolution 132–​133 rights 5, 106–​124 risk 35; fall into total inaction 89–​90; longevity 110; uncertainty and 53n4 Rousseau, J.-​J. 108–​109, 119 sabotage autonomy 46–​47, 107–​108, 118 sacred 10–​11 Salinger, J. D. 66 saliva 42, 44, 45 scapegoat 3, 70–​71, 76, 80, 90, 92, 93, 129 self-​worth 50 Semele 12, 13 Seneca, definition of wisdom 116 sensemaking: automation of 94–​103; unbearable burden 86–​94 Shakespeare, W.: Hamlet 119–​121; Macbeth 3, 4, 17, 71, 72, 76–​80, 86, 88, 89, 90, 111–​112, 121 Shelley, M. 16, 17 Siegel, E. 8, 35, 37, 40, 54n9 single individuals 1, 2, 4, 5, 8, 9, 21, 34, 37, 38, 41, 46, 71, 90–​91, 93, 98, 107–​108, 109, 110–​111, 113, 114–​115, 118, 127, 131 sleep/​rest 39, 48–​50, 54n11, 66–​67, 70 socio-​political scenario 115 Socrates 18–​19, 21 Sophocles (Oedipus) 3, 4, 12, 14–​15, 17, 71–​76, 81, 86, 88–​90, 111–​112 sparagmos (σπαραγμός) 12 subject/​object 23, 91–​92 suicide 73, 77

technological prediction 1, 5, 6, 8, 21, 24, 34–​41, 46, 47–​48, 76, 80, 93, 109, 111, 129, 130, 131, 133 ten commandments 11 thought experiments 1, 3, 4, 5, 41, 44, 71–​81, 86–​94, 95–​98, 100, 111 three witches (Macbeth) 76–​77, 78–​79, 80, 111, 112 time: efficiency and 66–​67, 102; exceptionality and 86–​87; sleep and 48–​49, 66–​67 Tiresias 13–​14 Titan 15 total inaction 4–​5, 89–​90, 91, 111 Tulchinsky, I. and Mason, C. E. 8, 26, 34–​35, 36, 37, 40–​41 Turing test: artificial intelligence 98; artistic creativity 95 uncertainty 4, 5–​6, 54n19, 115; complex challenge 51–​52, 102; diminishing/​decreasing 36, 74–​75; global crises and 5, 20, 51, 98; Hamlet 120, 122; inaction to action 112; Macbeth 78–​79, 111; possibilities of 81; risk and 53n4; tragic events 73; as a value 113, 122 vaccines 36 wallpaper, art without intent 95, 97 Western culture 1–​2, 4, 6, 9, 10, 11, 12, 15, 16–​18, 21, 24, 66, 67, 81, 89, 107, 109, 111, 116, 127, 130–​134 wisdom 116–​119, 121, 122, 123, 132, 133–​134 Wittgenstein, L. 20, 21, 23 Zeus 13, 15, 16, 17, 28n11