Nudging Choices Through Media: Ethical and Philosophical Implications for Humanity
Edited by James Katz, Katie Schiepers, and Juliet Floyd
Nudging Choices Through Media

“This rich, crisscrossing, multidisciplinary collection succeeds in grasping and conveying the cumulative and potentially pernicious effect of nudging, which operates below the waterline of individual awareness as we are prompted and cajoled, but also reined in and hindered, always ever so gently. One is reminded of William S. Burroughs’ remark in Ah Pook is Here!: “Question: is control controlled by our need to control? Answer: Yes.” The volume does a terrific job of raising the bar on pressing ethical questions about this deeply troubling topic.”
—Dr. Eran Guter, Senior Lecturer in Philosophy, The Max Stern Yezreel Valley College, Israel

“This is a very stimulating and timely collection of essays addressing the ever more common social practice of “nudging” (to influence without forcing people’s decision-making and actions) in an increasingly automated society. Drawing from philosophy, sociology, history, behavioural psychology, and empirical studies, this brilliant compilation helps to envisage a new hybridity, a deeper fusion between human and artificial intelligence, that debunks our anthropocentric fantasies of privilege, and may prompt greater awareness of the digital revolution afoot.”
—Dr. Victor J. Krebs, Professor of Philosophy, Pontifical Catholic University, Peru
James Katz • Katie Schiepers • Juliet Floyd Editors
Nudging Choices Through Media Ethical and philosophical implications for humanity
Editors
James Katz, Boston University, Boston, MA, USA
Katie Schiepers, Boston, MA, USA
Juliet Floyd, Department of Philosophy, Boston University, Boston, MA, USA
ISBN 978-3-031-26567-9    ISBN 978-3-031-26568-6 (eBook)
https://doi.org/10.1007/978-3-031-26568-6

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Anastasiia Monastyrskaya / Alamy Stock Photo

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Acknowledgements
This book explores nudging in the context of media and communication technology, with a special focus on the philosophical, ethical, and humane perspectives. Drawing on experts from a variety of nations and disciplinary cultures, it seeks with their contributions to be far-ranging in content, bold in vision, and provocative in analysis.

Our enduring gratitude goes to our chapter authors for their contributions. Their gift of time, energy, and insight, as manifested in their sagacious chapters, stands as a tangible exemplar of scholarly excellence and collegial collaboration in the twenty-first century. We especially thank Lauriane Piette of Palgrave Macmillan for her insightful advice about the book’s focus and patient shepherding of this project—through the throes of the global COVID-19 pandemic—to help bring it to fruition.

This book project itself originally arose out of a conference held at Boston University’s Hillel House with the title “Preserving individual freedom in an age of socio-technical control via algorithmic rewards and punishments,” held on September 11, 2019. The conference was sponsored by the Center for Mobile Communication Studies and the Division of Emerging Media Studies of Boston University’s College of Communication. The Consulate General of France in Boston was a major co-sponsor. Our other conference co-sponsors were Boston University’s Center for Humanities, directed by Susan L. Mizruchi, and BU’s Artificial Intelligence Research Initiative, directed by Margaret Betke. The Feld Professorship in Emerging Media also provided funding. We remain deeply grateful to the above for their steadfast support.
In both the mounting of the conference and in developing this volume, the co-editors benefited from the advice of many wise scholars. In particular, we thank Colin Agur, Michael Ananny, Emílio José Montero Arruda Filho, Joseph Bayer, Michael Beam, András Benedek, Scott W. Campbell, Benita Dederichs, Benjamin Detenber, Michael Elasmar, Seth Lewis, Nicolas M. Mattis, Kate K. Mays, Arnaud Mentré, Benjamin Merrick, Rachel Merrick, Judith E. Möller, Ekaterina Novozhilova, Kristóf Nyíri, Roy L. Pea, Nicolas Prevelakis, Ronald E. Rice, Edson Tandoc, Mina Tsay-Vogel, Michaël Vallée, Joseph Walther, and Brian Weeks. Additionally, this work was made stronger due to the thoughtful critiques of anonymous chapter reviewers. To all of our colleagues, we offer our heartfelt thanks.
Contents
Introduction
  James Katz, Katie Schiepers, and Juliet Floyd

Part I  First Axis: Philosophy

Nudging and Freedom: Why Scale Matters
  Jens Kipper
Metaphors We Nudge By: Reflections on the Impact of Predictive Algorithms on our Self-understanding
  Jos de Mul
Can Nudges Be Democratic? Paternalism vs Perfectionism
  Sandra Laugier
Revisiting the Turing Test: Humans, Machines, and Phraseology
  Juliet Floyd

Part II  Second Axis: Praxis

Interview with Stephen Wolfram
  Juliet Floyd and James Katz
Means vs. Outcomes: Leveraging Psychological Insights for Media-Based Behavior Change Interventions
  James Cummings
Nudging, Positive and Negative, on China’s Internet
  Lei Guo
Nudging Choices through Media: User Experiences and Their Ethical and Philosophical Implications for Humanity
  James Katz and Elizabeth Crocker
Building Compliance, Manufacturing Nudges: The Complicated Trade-offs of Advertising Professionals Facing the GDPR
  Thomas Beauvisage and Kevin Mellet
The Emergence of the ‘Cy-Mind’ through Human-Computer Interaction
  Richard Harper
Saying Things with Facts, Or—Sending Messages Through Regulation. The Indirect Power of Norms
  Peppino Ortoleva
Conclusion: The Troubling Future of Nudging Choices Through Media for Humanity
  James Katz, Katie Schiepers, and Juliet Floyd

Index
Notes on Contributors
Thomas Beauvisage is a sociologist and web scientist at the Social Sciences Department of Orange Labs (Sense). His early works and PhD focused on web usage mining and browsing behavior characterization. Currently, his activities address both internet research and market studies; more specifically, his research covers online market devices and platforms, digital advertising, and uses of online media. He is also involved in methodological investigations on the use of quantitative behavioral material for social science.

Elizabeth Thomas Crocker is a communications professional at a large scientific society in Washington, DC. She holds a PhD in anthropology from Boston University and an MA from Louisiana State University. Her work focuses on cultural understandings of identity and belonging and how emerging media impacts engagement with science, science identity, and science communication. She lives in Maryland where she enjoys spending time with her family and taking breaks by hiking in the Shenandoah National Park far away from emerging media and cell phone service.

James Cummings is a faculty member within the Division of Emerging Media Studies at Boston University, where he researches and teaches courses on issues related to human-computer interaction and the psychological processing and effects of media. Cummings’ research on emerging media, games, and virtual reality has been published in the Journal of Communication, Media Psychology, Human-Computer Interaction, New Media & Society, Computers in Human Behavior, proceedings of the ACM
CHI Conference, and the White House Office of Science and Technology Policy’s Occasional Papers series. His professional experience includes consultancy related to virtual reality, social gaming, and media-based behavior change interventions, as well as user experience research at Google X.

Jos de Mul is Full Professor of Philosophical Anthropology and Its History at the Erasmus School of Philosophy, Erasmus University Rotterdam. His research is on the interface of philosophical anthropology, philosophy of culture, philosophy of (bio)technology, aesthetics, and history of nineteenth and twentieth century continental philosophy. He has also taught at the University of Michigan (Ann Arbor, 2007–2008), Fudan University (Shanghai, 2008), and Ritsumeikan University (Kyoto, 2016). In 2012 he became a visiting fellow at the IAS in Princeton. From 2005 to 2011 he was vice-president of the Helmuth Plessner Gesellschaft, and from 2007 to 2010 president of the International Association for Aesthetics. His monographs include: Romantic Desire in (Post)Modern Art and Philosophy (1999), The Tragedy of Finitude. Dilthey’s Hermeneutics of Life (2004, 2010), Cyberspace Odyssey. Towards a Virtual Ontology and Anthropology (2010), and Destiny Domesticated. The Rebirth of Tragedy out of the Spirit of Technology (2014). His full CV and a list of downloadable publications can be found at www.demul.nl. Email: demul@esphil.eur.nl.

Juliet Floyd teaches Philosophy at Boston University, researching twentieth century and American philosophy, philosophy of logic, mathematics, language, symbolism, aesthetics, and new media. Her recent books include Wittgenstein’s Philosophy of Mathematics (2021, with Felix Mühlhölzer) and Wittgenstein’s Annotations to Hardy’s Course of Pure Mathematics: A Non-Extensionalist Conception of the Real Numbers (2020), as well as Philosophy of Emerging Media (with James E. Katz, 2016), Philosophical Explorations of the Legacy of Alan Turing (with A. Bokulich, 2017), Perceiving the Future Through New Communication Technologies (with James E. Katz and Katie Schiepers, 2021), and Stanley Cavell’s Must We Mean What We Say? at Fifty (with Greg Chase and Sandra Laugier, 2021). She has authored over 85 articles, most recently on philosophical issues surrounding everyday life in a computational world (see https://www.mellonphilemerge.com/).
Lei Guo (PhD, The University of Texas at Austin) is an associate professor in the Division of Emerging Media Studies at the College of Communication, Boston University. Her research focuses mainly on the development of media effects theories, emerging media and information flow, and computational social science methodologies. Guo’s research has been published widely in leading peer-reviewed journals including the Journal of Communication, Communication Research, and New Media & Society. Her co-edited book The Power of Information Networks: New Directions for Agenda Setting introduces a new theoretical perspective to understand media effects in this emerging media landscape.

Richard Harper is the author of fourteen books and collections, including Texture: Human Expression in the Age of Communications Overload (2011), Trust, Computing and Society (ed., 2014), and Skyping the Family (co-edited, 2019). He is currently completing The Shape of Thought: Reasoning in the Age of AI. He is the director of the Material Social Futures Centre at the University of Lancaster and lives in Cambridge, England.

James E. Katz, PhD, Dr.h.c., is the Feld Professor of Emerging Media and also directs the College of Communication’s Division of Emerging Media Studies at Boston University. Katz’s core interests revolve around societal and interpersonal aspects of communication technology. His pioneering publications on artificial intelligence (AI) and society, social media, mobile communication, and robot-human interaction have been internationally recognized and widely translated. Prior to his Boston University appointment, he was Board of Governors Distinguished Professor of Communication at Rutgers University. In 2021, he received the prestigious Frederick Williams Prize for Contributions to the Study of Communication Technology from the International Communication Association, which recognizes annually “an outstanding scholar whose works and cumulative achievements have significantly advanced the study of communication technology.”

Jens Kipper is an assistant professor in the Department of Philosophy at the University of Rochester. Much of his work is in the philosophy of language and mind, including the philosophy of artificial intelligence. He also has a background in applied ethics, having worked in a research ethics institute in Germany for many years.
Sandra Laugier is Professor of Philosophy at the Université Paris 1 Panthéon-Sorbonne, a senior member of the Institut Universitaire de France, and director of the Institut des sciences juridique et philosophique de la Sorbonne (UMR 8103, CNRS Paris 1). She has been visiting researcher at the Max Planck Institute, visiting professor at Boston University, visiting professor at University Roma La Sapienza, distinguished visiting professor at Johns Hopkins University, visiting professor at Pontifical University in Lima, and visiting lecturer at the Facultés Saint-Louis. She has also acted (2010–2017) as Scientific Deputy Director of the Institut des Sciences Humaines et Sociales (Division of Human and Social Science) of the CNRS, in charge of Interdisciplinarity. She has published extensively on ordinary language philosophy (Wittgenstein, Austin), moral philosophy (moral perfectionism, the ethics of care), American philosophy (Cavell, Thoreau, Emerson), popular culture (film and TV series), gender studies, democracy, and civil disobedience. She is the translator of Stanley Cavell’s work into French.

Kevin Mellet is Assistant Professor of Sociology at Sciences Po and a member of the CSO (Centre de Sociologie des Organisations). His research, at the crossroads of economic sociology and science and technology studies, focuses on the digital economy. He is particularly interested in issues related to online visibility and reputation. This includes the following areas: online advertising and marketing, social media, the different forms of participation and expression of consumers on the internet and, more recently, issues related to personal data protection.

Peppino Ortoleva (b. Naples 1948) has been active for more than forty years as a scholar, critic, and curator, at the crossroads of history, media studies, TV and radio authoring, museums, and exhibits. He has been Full Professor of Media History and Theory at the Università di Torino. He is also Profesor Adjunto at the Universidad de los Andes in Bogota, Colombia. The Paris 2 University (Panthéon-Assas) has conferred upon him an honoris causa PhD in communication in recognition of his innovative research. His most recent books are Miti a bassa intensità (Low Intensity Myths), 2019, a thorough analysis of contemporary mythologies and their presence in modern media, and Sulla viltà (On Cowardice), 2021, a study of a “common evil” that traces the history of values in the Western world. He has published books on the history of the media system, on the youth movements of the sixties, on private television in Italy and its cultural and political role, and
on cinema and history. He is now working on a new study about the role of misunderstanding in communication. His activity as a curator of exhibitions and museums started in the early 1980s. Among the most recent exhibitions he has curated are Rappresentare l’Italia, on the history of the Italian Parliament (2011), I mondi di Primo Levi (2015), and Sulle tracce del crimine, on the history of TV crime series. He is now curating, among other projects, the city museum of Catania, in Sicily.

Katie Schiepers is an academic administrator and former Division Administrator of Emerging Media Studies at Boston University. She has co-edited Perceiving the Future through New Communication Technologies with Katz and Floyd (2021). She holds a Master of Education and has also completed graduate studies in Classics and World Heritage Conservation.
List of Figures
Revisiting the Turing Test: Humans, Machines, and Phraseology
Fig. 1  Blythe House London, 1930, used as a Post Office. (https://upload.wikimedia.org/wikipedia/commons/8/89/Blythe_House_preparing_totals_for_daily_balance_1930s.JPG, accessed 7/30/2022)
Fig. 2  Gender as a Control Test. (Constructed from extension of https://commons.wikimedia.org/wiki/File:Turing_Test_Version_1.svg, accessed 7/25/2022)
Fig. 3  The Turing Test as an Evolving Social Test, constructed from https://commons.wikimedia.org/wiki/Category:Turing_test#/media/File:Turing_Test_version_3.png (accessed 7/25/2022)

Nudging, Positive and Negative, on China’s Internet
Fig. 1  The dual-path online political participation model in China
Fig. 2  Democratic political participation as a function of using social media for alternative news and democratic citizenship norms (Note. All the three variables—democratic political participation, using social media for alternative news, and democratic citizenship norms—are composite variables measured on a five-point scale)

Building Compliance, Manufacturing Nudges: The Complicated Trade-offs of Advertising Professionals Facing the GDPR
Fig. 1  Simplified online advertising value chain
Fig. 2  Banner visibility coding (highlighted in red)
Introduction James Katz, Katie Schiepers, and Juliet Floyd
Although the term “nudging” is in common usage in academic circles and elsewhere, nudging can mean many different things to various audiences. In their 2008 popularization of the behavioral management concept, Thaler and Sunstein define it as “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid” (Thaler and Sunstein 2008, p. 6). A “nudge,” then, is in short a way to regulate or direct behavior that encourages but does not force people to take specific actions. (This will be the first of many mentions of Thaler and Sunstein’s theories, as several chapter authors examine their arguments and their implications in depth.)

Nudging as a concept can be further refined, in the context of our volume’s explorations, as the process of tilting or limiting the choices presented to an individual in a way that increases the likelihood that the
individual will make a choice or take an action that they otherwise might not have. In any such case the actual experience of the individual is different because of the intervention made by those in control of the user’s situation. Certainly there’s much more to the word “nudge” than can be captured in a few simple sentences, and in what follows our able contributors explore these processes, implications, and ramifications in detail. But beyond this there exists a substantial literature on the subject to which the reader may be referred, such as Puaschunder (2020), Tagliabue (2022) and Straßheim (2017). We aim to contribute to this literature and provoke our readers to consider what may otherwise slip past their attention.

More specifically, drawing on multiple disciplinary perspectives—including sociology, behavioral psychology, history, ethics and philosophy—this book places front and center what it means to be human in an age of ever-expanding realms of nudging. Questions that are addressed throughout this volume are engaged with through a variety of themes.

Nudging, we are told, is “only” the suggestion of alternatives to the individual which are deemed to be in someone’s best interest. But who is this someone? Is it an individual faced with an algorithmically delivered set of options, some of which are designed to have more appeal than others? Or is it in the interest of the unit doing the suggesting? Or in the interest of society itself, or groups within it? Perhaps nudging consists in some subset of all of the above, but this is seldom made explicit in the design of nudging, or “choice architecture,” as it has been more technically dubbed (Hertwig and Grüne-Yanoff 2017). Another aspect of a nudge is that it should come at little or no cost. Importantly, the nudges are not ostensibly mandatory or difficult to give in to, but rather serve to “grease the skids” towards decisions that have been predetermined by being made easy. Yet what happens to us as unique individuals as we become ever more funneled into directions set by seldom-seen if not entirely unknown others? Are we still ourselves, or are we becoming transformed into a product of massive social engineering? And who determines the overall architecture of this massive engineering product? Presumably it will be designed, and evolve, far from democratically.

Of course people have always tried to influence one another. Sometimes this has been done through blandishments or discussion, sometimes through commercial sponsorship, highway billboards or religious sermons. And yet at other times, it is done both literally and metaphorically at the point of a bayonet. The latter “point” (!) may be what occurs when the more limited forms of governmental nudging fail to have their desired
effect, as in securing taxpayer compliance, initially through nudging on tax forms to reduce cheating (Fonseca and Grimshaw 2017).

What is different about this book is that it takes a multi-disciplinary view on the situation. The papers included here address the philosophical and operational consequences of these nudging schemes, tracing their implications in terms of ethics and humanity, deeply conceived. The implications of the analyses contained in the book should be of significant interest for readers in sociology, communication, media studies, philosophy, and ethics. As well, the sophisticated generalist or the concerned specialist should find much food for thought in what follows.

Before delving deeper, we must make a distinction between types of nudging, which as mentioned can also be thought of as a form of choice architecture. Nudging that uses computer algorithms is different in degree from nudging that is inert or momentary (leaving candy bars at the checkout counter for children to see). Algorithmic nudging, as its name implies, uses algorithms to collect information and guide the choices of the user audience members, generally in a dynamic way.

Defenders of nudging processes, particularly when the nudges are deployed through increasingly precision-targeted algorithmic means, can be articulate about the benefits that may be forthcoming from well-designed systems, which are after all partly designed by way of the data contributed, often voluntarily, by human users themselves (we like the convenience of the app or browser, and allow data to be collected). “Who does not want to have a happier, more satisfying, more convenient and richer life?” experts ask. Certainly algorithmically guided nudges can achieve these ends, at least to a degree. Critics are quick to point out the paternalistic and surveillance aspects of nudging and the morally unhappy situation that this process can lead to. Yet, even more, they often contest the claims of special insight and unadulterated beneficence that the technocratic designers of the systems claim for themselves. In riposte to these critics, algorithmic nudging advocates ask what is wrong with helping people make choices from which the choosing party benefits in practical or psychic terms. Moreover, they point out that nudges can relieve the fatigue that comes from having to make many sequential choices, especially under stressful conditions. Again, such decisions, made under the stress of unwieldy complexity of choice sets, are likely to be poor ones (Pignatiello et al. 2020).

We should remember that the process of “nudging” people to make decisions or act in a certain way has a long and hoary history that continues into the present. We have records from ancient Greece explaining how
talented orators moved crowds to vote in support of certain policies or politicians and on occasion persuaded retreating soldiers to turn about and face destruction at the hands of enemies. Placards, billboards and broadsheets have continued this tradition. This applies to enduring issues, such as environmental awareness. On a recent trip to China, one of the co-editors found on Beijing restaurant tables paper placards (also called “table-tents”) with cute cartoon-characters nudging customers not to waste food or water (Katz, personal observation, November 29, 2019). Nudges are as current as today’s headlines: Microsoft Office computer splash screens may, unbidden, suggest to viewers that they click on a box in order to take steps to become “a better ally in the fight for racial equity and justice” (Katz, personal observation, August 30, 2022). So a long tradition of nudging people to take action continues today in both older forms of communication media as well as in those of the latest digital technologies. In this volume we are interested in the way in which nudging takes place via communication technologies and, even more so, what these ways imply for people’s lives and our joint future.

Fears of racial, gender, ethnic or other types of discrimination arising from biases in algorithms are an important topic, both in terms of perception and reality. This is not a central topic of this volume, though it is addressed at various points, and the literature on this issue is already large and impressive (see Belenguer 2022, for a high-level overview). There is instead another topic which is not given sufficient attention: cases of “nudges gone bad.” As a 2022 article in Science magazine pointed out, nudges can provoke the very behaviors that they are intended to prevent. In the case studied, nudges to drivers to be more cautious caused an increase in car accidents (Hall and Madsen 2022). Likewise, the precise opposite of a nudging project’s goal of increased tax compliance was achieved in Fonseca and Grimshaw’s (2017) study. Organ donation sign-ups plunged after the Netherlands’ government introduced a national nudging program designed to increase donations (Gill 2018).

Humans are capable, as machines are not, of reflecting on what the choice to implement choice architecture means for their own self-understandings. Our intelligence and ability to discuss the architecture and our feelings about it must therefore be factored into our understanding of its character.

A higher level of criticism is that nudging, even if successfully practiced, is a mere band-aid that depletes public pressure for larger changes to
improve society (Gal and Rucker 2022), substituting channeling of individual responses for communal, public decisions. This argument (in effect, that success is its own worst enemy) too easily invites an invidious comparison with those Russian revolutionaries who sought to make the populace yet more miserable in order to prompt them to rise up against the Tsarist exploiters. Nonetheless, such criticism should be considered by those engaged with fundamental aspects of societal reconstruction and social engineering.

Returning now to the main focus of this volume, we believe that our emphasis on mediation is particularly worthy of exploration given the rapidly evolving technologies of communication, the most prominent of which are those that fall under the umbrella of social media, apps and internet usage. We wish to explore forms of life under ever-expanding media tools that are becoming extensions of ourselves, but equally extensions of remote others, including algorithms, which wield increasing degrees and intensity of influence and power over us. To what extent do these tools become part of our decision-making processes and routines, both explicitly and implicitly, and with what consequences? To what extent do the choice architectures designed by those who set up the choices we encounter through our communication media lead us to act in certain ways? And what do these processes say about our humanity and meaning-making? To address these questions, we draw on philosophical, ethical and empirical studies.

Advocates of behavioral choice modeling are paternalistic in that they want us to make choices and engage in cognitive processes that they believe are in our own best interests (or at least someone’s best interests). We obviously have no objection on the face of it to users of digital media making choices that lead to better, healthier and otherwise improved lives. It is often hard to disagree with some of the objectives that are set forth. But one can appreciate certain end-states without endorsing the means to achieve them in toto, and one may disagree with some of the values that are being designed into the architecture of choices that are then presented to users. Just to name two: consider drinking alcohol and buying a lottery ticket. Well-meaning choice architects certainly would try to discourage both of these, given the utilitarian costs of such activities to the total societal group. But many people, for a variety of reasons, prefer what might be termed affordable luxuries or cheap escapes. Whose “choice architectures” should prevail? Another area of disagreement would be about the
particular means that are used to achieve the results desired by the choice architects. Our chapter authors explore these and related topics in depth.
Organization of Book and Overview of Chapter Contents

Conceptually, the chapters of this volume aggregate around two overlapping axes. The first axis is philosophical. Essays along this axis address what it means to nudge people and pose the question of how doing so affects their status as autonomous human beings. What is the ethical significance of such nudges, both for the subject and the controller of choice architecture structures? The second axis is praxis: what are the underlying values that inform the theories and thence the actions that are involved in the nudging process? What are some of the mental maps people have during their interplay with algorithmic nudging? How do these interplays affect people’s responses and self-understandings? What may some of the longer-term implications be for social actors and the larger society? Both of these axes draw on the role of media as the mechanism through which nudges are put into action. Put differently, these axes address how media-based nudging programs have played out in terms of effectiveness, meaning and effect.
First Axis: Philosophy

This first axis of the book brings into focus philosophical questions relating to the ethics of nudging within a communication and media environment. Dimensions of analysis include the obligation to care for, and the need to speak for, others (Laugier); freedom to choose as self-control and the role of paternalism in control (Kipper); and the relationship between nudging and physical autonomy as moderated both through the imagination and through hardware and software instantiations (de Mul). Taken together, these chapters also offer a critical perspective on longer-term implications for the normative order and epistemological understandings of the massive social engineering project encapsulated by the term nudging.
Jens Kipper

With concision and precision, Jens Kipper discusses large-scale nudging. More specifically, he examines the use of nudges that are highly individualized, highly prevalent, and highly effective in their targeting of behavior. He begins his argument by demonstrating the ways in which large-scale nudging has the potential to compromise our freedom. From there, he focuses on the topic of digital nudging. He explains how digital environments are ideally suited to large-scale nudging. After exploring the rationale for his analysis, Kipper concludes by offering a vision of the steps that could be taken to short-circuit the erosion of freedom by these large-scale nudging systems. He urges that we attempt to find ways within the increasingly technological environment to preserve our precious freedoms.

Jos de Mul

By drawing on the philosophy of science as well as the philosophy of machinery (though he does not call it that), Jos de Mul describes how we in modern society have grown to understand our world in non-material metaphors, and he invokes Nietzsche to draw out the difficulties we have in our historical moment in perceiving the deeper meaning of ordinary reality. From non-material metaphors we build stories, and stories give our lives meaning. Unlike machines in earlier eras, the computer and the virtual world it creates provide a dynamic vitality that suggests human or at least intelligent agency is at work. People, metaphorically and emotionally, relate to machines as sentient and even intelligent entities. As Juliet Floyd’s chapter details, the Turing Test is as much a metaphorical exploration as it is a true empirical test: we anthropomorphize machines naturally. Jos de Mul rehearses the growth and incorporation of databases in human experience and shows how their transformative potential—for example, as applied through gene editing techniques—may even rewrite human destiny. Imminent though they may seem, databases may come to nudge if not control many aspects of human life. Jos de Mul points out that much of the technology that was developed for artificial intelligence and the decision-making that stems from it were based on particular samples of people who were not representative of the full array of future subjects and users of such systems. Here we can see that nudging choices are biased, with results that can be deadly. Even if such biases are more likely to be
due to laziness and questions of short-term economy than to an intended grand conspiracy to hurt certain groups, they cause harm. Drawing on Heidegger, Jos de Mul argues that people tend to become their own instrumentality, manipulated by the technological systems they create. The current computational systems will always be compromised by the fact that they can only predict the future by extrapolating from the past. But when it comes to human nature, many complex behavioral systems become inherently unstable, thus defying the predictability upon which nudging depends. Ultimately, de Mul concludes, it is essential that humans maintain vigilant control over the algorithmic universe they are constructing and never allow themselves to fully delegate responsibility to machines.

Sandra Laugier

Sandra Laugier draws on the works of Wittgenstein, Diamond, and Cavell to note that belonging to the contemporary world, i.e., engaging in its forms of life, means coming to terms with rules that require constant adjustment to particular scenarios. Beyond the explicit dictates of society, she notes that we are also “manipulated” and influenced by communication, advertisement, images, new media. Our freedom is limited. Given these realities and those of the capitalist world and of the social order, why not accept the idea of nudges, i.e., incentives that would gently lead us to behavior that is positive for us and for others? For Laugier, the problem with nudges does not concern restriction of freedom (nudges have the advantage of being non-coercive and allowing choice, as their promoters Sunstein and Thaler hold). We are all happy to be encouraged to become better, by any means, by people we love or admire, or by books and films that matter to us. The problem of nudges, Laugier argues, is not in method or freedom, but in morality. It is not the reality of soft power or control, or the influences and incentives that permeate our society that we must fight: these are inevitable and appreciable, and learning to lead a life is about creating a path in the middle of all this. What must be fought, holds Laugier, is moralism and conformity, and these are at work in political thought modeled on economic thought, embedded in the very concept of nudging, which in its formulation is intended to be positive and gentle. Why, she asks, should we accept what governments and society adopt from the methods of trade and business, even if it is for our wellbeing? Fundamentally, Laugier concludes, we should not simply accept what
some experts and governments decide or claim is good for us, or optimal in how to conduct ourselves. Instead, we each need to find our way—something that popular culture, e.g., television series, helps us to do. Laugier comes to the conclusion that the nudging project, at its heart, reflects a deeply undemocratic vision of society and of the capacities of citizens.

Juliet Floyd

Juliet Floyd offers a dramatic philosophical move in her chapter. She seeks not only to correct the record on what the Turing Test was taken to mean by Alan Turing, but also to apply this understanding to the larger world of digital processes generally and the processes of nudging in particular. Her re-reading of the Turing Test unearths contemporary challenges facing “nudging” and AI. Drawing on Turing’s 1936 analysis of computation, she shows its resonance with Wittgenstein’s Blue and Brown Books. Using the power of this integration, Floyd answers current objections to the Turing Test (including Searle’s highly influential Chinese Room). She holds that Turing’s Test for “intelligent machinery” is best construed as a social experiment between humans in the face of emerging technology. Floyd stresses the significance of what Wolfram has called “computational irreducibility” (discussed in the Wolfram interview chapter herein), as well as what Wittgenstein called the need for “surveyability” of algorithms. She uses these concepts to interrogate contemporary concerns about nudging, algorithms and Artificial Intelligence as these are applied with increasing precision and ubiquity in everyday life. With broad brushstrokes, she inspects an array of hot-button topics including privacy, surveillance, and domination via informatics. She also considers the lack of explicability of decision-making, biases, ethics, and unequal negative impacts on populations. Turing’s prescient idea, that it is human beings who bear responsibility for continual, ever-evolving, meaningful public discussion of the sorting, typing and design of algorithms, is defended as something more than what Kahneman, Sibony and Sunstein have recently denigrated as mere “noise”: features of human particularity and articulate investment that these advocates of nudging claim should be eliminated or minimized for sound decision-making. Human “noise”, Floyd argues, is ineliminable and part of what meaningful public expression involves.
Second Axis: Praxis

The second axis of the book focuses on the way in which nudging comes into play in people’s lives at the quotidian and societally programmatic levels (Cummings, Guo). Here we find analyses at a high level of the ways in which nudging has been applied and how such attempts have worked out in practice (Wolfram). We also explore the everyday meanings of nudging expressed in ways in which people take their mental maps of algorithms into account (Katz & Crocker). These chapters interrogate the place of human imagination at work in concert with communication technologies that guide and influence choices (Ortoleva). This axis also includes a topic that is rarely discussed, but of special interest: how those who create the algorithmic nudges navigate a host of formal rules and ethical perspectives and simultaneously seek to pursue the goals of their organization (Beauvisage & Mellet). Also addressed are the ethical settings of nudging within behavioral contexts in terms of smoother integration of Artificial Intelligence (AI) with people’s routines and expectations (Harper).

Stephen Wolfram Interview

Stephen Wolfram is a famed scientist, author of A New Kind of Science, creator of the Mathematica software system, and founder of WolframAlpha, an online computational knowledge engine used by millions. Juliet Floyd and James E. Katz interviewed Wolfram about nudging and the role of AI for humanity in August 2022. With broad experience in the tech sector, and decades of experience designing humanly-usable systems, Wolfram takes account of present features of AI and the bigger, longer picture, stressing the limits of AI in relation to human history, interest, and action. AI constitutions for democracies, AI-written contracts, driverless cars, and the meaning of human historical time are all discussed, with emphasis on Wolfram’s notion of computational irreducibility and his view that human problems of philosophy and ethics are not resolvable by AI alone, but will require human discussion.

James Cummings

James Cummings provides an overview of numerous psychological considerations relevant to media-based behavior interventions. He begins by examining how media strategies can be used to gain benefits for users even
as he reveals some of their drawbacks and limitations. He gives the example of encouraging electricity conservation via what can be called gamification. He shows how nudging can be used as a serious game to achieve an important social purpose but also as a mechanism for achieving behavior change. This and other behavioral interventions pose serious questions about surveillance and its effect on people’s behavior, not only within the target of an experiment/game, but also beyond. These questions are particularly relevant as centralized technologies are becoming prevalent in nearly all aspects of ordinary life, including pressing special situations where health or safety are of particular concern. Expanding upon the findings of that study, his chapter examines sundry psychological processes—including goal-setting, social modeling, message framing, social comparisons, and the competition for individuals’ finite attentional resources—that are relevant for media designers seeking to motivate and guide users toward particular habits or patterns of behavior. In turn, he presents a typology of psychological methods for effecting behavior change—namely, reinforcement, nudging, and internalization. He then reviews how different media technologies, both new and old, befit these different methods. Finally, he considers how media formats that differentially leverage these distinct approaches to motivational design and behavior change may be contrasted in terms of their (1) allowance for personal autonomy, (2) likelihood of instilling long-term compliance, (3) relative prioritization of designer goals versus user wellbeing, and (4) implications for fostering a healthy and functioning society.

Lei Guo

Lei Guo discusses the long-standing need of the government of China—particularly in light of the country’s turbulent history—to achieve stability among the population. That, plus the overwhelming predominance of political control exercised by the Chinese Communist Party, has led to a great emphasis on guiding what can and cannot be said and what can be accessed via the Internet and other social media. (Communist Party doctrine holds that the party is the vanguard that leads the rest of society, particularly the government, toward various social and economic goals.) Through both volunteer and paid efforts, as well as official governmental oversight, a great deal of work is devoted to monitoring and, depending on the content, promoting or suppressing digital communication. Using a case study approach, Lei Guo examines a group of Chinese
Internet users who voluntarily spread “positive energy”. That is, they seek to defend the status quo from the bottom up, in marked distinction to the specialists who are recruited and paid by the government to monitor, criticize and promote content according to its alignment with Party positions. She finds that this form of voluntary effort is a way that the government (as guided by the Party) has found to nudge people into voluntarily engaging in conduct that serves societal interests as the government perceives them. It is worth noting the empirical approach that Lei Guo takes to the topic, drawing as she does on WeChat and national surveys as sources of data to inform her analysis.

James Katz & Elizabeth Crocker

James Katz and Elizabeth Crocker use journalistic-style interviews to understand how twenty-first century users, mostly in the US, perceive and react to algorithmic nudging that they experience in ordinary life. Along with Beauvisage and Mellet’s chapter, this essay juxtaposes what might be called an everyday sense of life alongside algorithmic nudging. As one might expect, there is a range of attitudes and responses. These range from bafflement to commonplace paranoia to seeking to actively resist the nudging forces. As one might also expect, there is a range of degrees of effort that people want to devote to the area of algorithms. Some have no wish to get engaged or prefer to bypass the attempts at influence exercised by the algorithms. Others readily surrender themselves to the nudges, believing that they are at worst neutral and at best benign. The third approach is to struggle against the influences of the nudges. This can take forms ranging from trying to thwart or mislead the algorithm to trying somehow to neutralize or negate the nudge. At this juncture, these are not well-developed strategies, but rather ad hoc responses to what appears to be encroachment on individual autonomy. Katz and Crocker speculate that two forces may come together to make more explicit and systematic the responses to algorithmic nudging: the legislative and the cultural. In terms of legislative responses, groups like the OECD as well as state and national-level bodies are seeking to regulate and control algorithmic nudging. These of course will have a direct effect on what kind of nudging is taking place. But perhaps equally important are the socio-cultural ideas that are being developed about what our algorithmic nudges are and what they are doing. These ideas can become powerful forces on their own, as we have witnessed in the
examples of the consumer sustainability movement and in the efforts to have the contributions of African heritage individuals more widely recognized within society. It is entirely plausible that a similar popular movement will take place across the algorithmic nudging arena.

Thomas Beauvisage & Kevin Mellet

Although many of the chapters focus on the way nudging is affecting users and their environment, sociologists Thomas Beauvisage and Kevin Mellet look at the situation from the perspective of what might be considered nudge-producers. They take a microscope to the way in which online advertising professionals seek to respond to the European Union-wide General Data Protection Regulation (GDPR), which is aimed at creating appropriate consent processes for data collection on users of various sites and other digital mechanisms. The GDPR regulations are strict about compliance, and yet figuring out how to comply with them presents a puzzle for producers: how can they collect desired data while at the same time adhering to regulations which themselves are not necessarily well-defined or suitably prescriptive? The producers of the data collecting mechanisms, upon which algorithmic nudges are predicated, are presented with a set of problems in designing appropriate consent mechanisms. As this is new territory, intellectually and legally, Internet content producers find themselves facing a complex ethical, social, legal and economic environment. Based on first-hand interviews, Beauvisage and Mellet find that under certain arrangements, nudges from the viewpoint of the producers can be seen as “impossible designs,” as they require integrating conflicting objectives and moral codes. Their research stands in stark contrast to the default academic position that such producers are self-serving actors rather than well-meaning agents seeking to optimize their algorithmic arrangements.

Richard Harper

Richard Harper argues that the old mental models that people bring to computers are out of date with the realities of AI-driven computing. This results in people failing to interact with computers in an optimal way. Likewise, AI-driven computers can fail badly in interpreting what the human’s intention is, especially when it comes to judgments about social identity (though not necessarily the numerical identity of a
person). Moreover, computers do not seem to appreciate the playfulness of people, even the serious play that people engage in. Harper finds fault with the one-size-fits-all models of current nudging systems, such as the suggestion rankings offered by popular search engines. Their inflexibilities, even as they may delight millions, can consistently disappoint the handful of users who cut against the grain. Going forward, Harper maintains that to successfully work together, to the benefit of people, AI systems require different assumptions. But so do people, Harper says. To put it differently, Harper urges that AI systems be interactively designed with subjects at a more profound level than heretofore. At the same time, efforts need to be made to re-educate people about the nature of AI systems; in part these efforts could be led by the AI-computer systems themselves.

Peppino Ortoleva

Peppino Ortoleva provides an historical perspective on contemporary issues by exploring the relationship between laws and norms, on the one hand, and directing and nudging on the other. By describing the implicit and indirect messaging of laws and directives, which are almost always laden with normative meaning, he shows how words and the facts represented by them often come to occupy an uneasy relationship with the wishes and understandings of those who are subject to them. Through the “halo of meanings” they generate, norms may be the cause of ambiguities and misunderstandings that are as difficult to acknowledge as they are capable of long-lasting effect. The indirect communication of laws is not the subject of judicial interpretation, but may heavily condition social perceptions and political debates around them. Nonetheless, he argues, such multi-level directives and nudging can be enormously leveraged in scale. In this way, an army can be disciplined and directed by rules, and only a few violators need be punished in order for the entire organization to learn the imposed reality of those rules without needing to have them be explicitly applied in a one-supervisor-to-one-subordinate dyadic relationship. Relations among the particular individuals within the organization are performative. In addition to the military sector, he shows how the same principles can be applied to national debates concerning the advisability of vaccination mandates and related imprecations arising from the COVID-19 pandemic. He does not limit the discussion only to the twenty-first-century pandemic, but also draws on history,
raising issues facing the anti-vaccination movement in Britain and elsewhere since the nineteenth century. The pro-vax and no-vax campaigns have pivoted around their real or supposed implicit messages: that public authorities have the right and duty to directly act on the bodies of citizens and their children, for their own sake and to defend public health as a collective good. The roles of political choice and objective truth become engaged in contested ways. Ortoleva’s analysis thus complements and engages with the themes of the book by underscoring the philosophical and historical roots of what might be called applied nudging and the ethical implications thereof.
Summary

Despite current plentiful and laudable analyses which offer valuable insights, and to which this volume aspires to contribute, there remains the need for a deeper philosophical inspection of algorithmic nudging in relation to media and communication and the larger mechanisms of social organization. Particularly valuable would be comparative studies conducted across a variety of cross-national and interdisciplinary settings, something this book aims in part to deliver. Drawing on multiple disciplinary perspectives—including sociology, behavioral psychology, history, ethics and philosophy—this book places front and center what it means to be human in an age of ever-expanding realms of nudging. It is worth noting that in this volume, the authors and co-editors take the term “multidisciplinary” seriously: chapters draw not only on reason and argumentation but also on derivations from data collected from those who experience nudges as well as some of those who create them, and authors conduct their analyses using surveys, content analysis, game theory, and interviews, as well as theory and historical and philosophical analysis.
References

Belenguer, Lorenzo (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics, Feb 10:1–17. https://doi.org/10.1007/s43681-022-00138-8.
Fonseca, Miguel A., and Shaun B. Grimshaw (2017). Do behavioral nudges in prepopulated tax forms affect compliance? Experimental evidence with real taxpayers. Journal of Public Policy & Marketing, 36(2): 213–26. https://doi.org/10.1509/jppm.15.1.
Gal, David, and Derek Rucker (2022). Experimental validation bias limits the scope and ambition of applied behavioural science. Nature Reviews Psychology, 1, 5–6. https://doi.org/10.1038/s44159-021-00002-2.
Gill, Dee (2018). How to spot a nudge gone rogue. UCLA Anderson Review. https://anderson-review.ucla.edu/rogue-nudges.
Hall, Jonathan D., and Joshua M. Madsen (2022). Can behavioral interventions be too salient? Evidence from traffic safety messages. Science, 376(6591).
Hertwig, Ralph, and Till Grüne-Yanoff (2017). Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science, 12(6): 973–986. https://doi.org/10.1177/1745691617702496.
Pignatiello, G. A., R. J. Martin, and R. L. Hickman Jr. (2020). Decision fatigue: A conceptual analysis. Journal of Health Psychology, 25(1): 123–135. https://doi.org/10.1177/1359105318763510.
Puaschunder, Julia (2020). Artificial Intelligence and Nudging. Pp. 101–50 in: Behavioral Economics and Finance Leadership: Nudging and Winking to Make Better Choices. Cham: Springer. https://doi.org/10.1007/978-3-030-54330-3_6.
Straßheim, Holger (2017). Handbook of Behavioural Change and Public Policy. London: Edward Elgar.
Tagliabue, Marco (2022). Tutorial: A behavioral analysis of rationality, nudging, and boosting: Implications for policymaking. Perspectives on Behavioral Science, 26: 1–30. https://doi.org/10.1007/s40614-021-00324-9.
Thaler, Richard H., and Cass R. Sunstein (2008). Nudge: Improving decisions about health, wealth and happiness. New Haven, CT: Yale University Press.
PART I
First Axis: Philosophy
Nudging and Freedom: Why Scale Matters Jens Kipper
Introduction

Nudging seems to be a wonderful tool of public policy. A nudge gently guides us towards choices that are good for us, without reducing the number of options that are available to us. As technology progresses, nudges grow in scale: they are applied to more people, and they affect more of our decisions. Such nudges are also increasingly well-tailored to individuals and, at least potentially, more effective. A large part of this development is based on the use of algorithms. Algorithms can be deployed arbitrarily often, they enable the collection of large amounts of data, and they are essential for using these data to derive predictions about individual behavior. It is natural to think that if ordinary nudges are unproblematic, or even good for us, the emergence of large-scale nudging should not be a reason for concern. One might even suggest that, if anything, being guided towards better choices more frequently, and more effectively, is a good thing. At the same time, large-scale nudges raise concerns in many people about their effect on our freedom. Intuitively, a life governed by algorithms doesn’t seem to be a free life.
J. Kipper (*) University of Rochester, Rochester, NY, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Katz et al. (eds.), Nudging Choices Through Media, https://doi.org/10.1007/978-3-031-26568-6_2
In what follows, I will try to elucidate what is behind this concern. From a theoretical perspective, much has been written about the effect of nudges on individual freedom, but little attention has been paid to matters of scale—i.e., to how freedom is affected if nudges become more common, and if they target more people more specifically and more effectively. In what follows, I use the term ‘large-scale nudges’ for nudges that are highly individualized, prevalent and highly effective. I will argue that scale matters: compared to nudges that are employed on a relatively small scale, large-scale nudges raise much more serious and in part qualitatively different concerns about the freedom of those nudged. The notion of freedom is obviously highly contested, and thus, I should say a few words about how I understand this notion here. To avoid getting entangled in the philosophical debate about the nature of freedom, my starting point will be the commonplace assumption that whatever freedom precisely amounts to, it essentially involves control.1 Accordingly, my life can only be free if I have control over its course, and if no-one else does. Beyond that, I don’t wish to commit myself to any controversial claims about what freedom involves. My discussion is structured as follows: In section “Just a Nudge?”, I address a natural line of thought that suggests that any potential worries one might have about the effects of nudges on freedom must be misguided. According to this line of thought, nudges are—by definition— gentle interventions that don’t reduce the options available to those nudged; hence, nudges preserve freedom. Against this, I argue that even such gentle interventions can significantly restrict people’s freedom, at least if they are employed on a large scale. In section “Control as the Dependence of One’s Actions on Oneself, and On No-One Else”, I elucidate the notion of control that is involved in freedom. I explain how having control over one’s life or one’s actions can be understood to mean that one’s life or one’s actions depend on one’s own mental states, i.e., on one’s beliefs, desires, hopes, fears, intentions, etc.—and, crucially, not on anyone else’s. In section “How to Devise Effective Nudges”, I describe some of the prerequisites for developing large-scale nudges and give some 1 For instance, the first sentence of the entry in the Stanford Encyclopedia of Philosophy on free will reads: “The term “free will” has emerged over the past two millennia as the canonical designator for a significant kind of control over one’s actions” (O’Connor and Franklin 2021). And in the first paragraph of the entry in the Internet Encyclopedia of Philosophy on the same topic, it says: “Let us then understand free will as the capacity unique to persons that allows them to control their actions” (Timpe 2021).
examples of such nudges. Then, in section “Losing Control in a World of Large-Scale Nudges”, I apply the previous insights to large-scale nudging. In particular, I discuss the question of how large-scale nudging can take control away from us and, thus, how it can compromise our freedom.
Just a Nudge? Richard Thaler and Cass Sunstein (Sunstein and Thaler 2003; Thaler and Sunstein 2008) have popularized nudging as a tool for policymakers. They construe the notion of nudge as follows: A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. (Thaler and Sunstein 2008, 6)
Purely based on this definition, one might wonder how nudges could even have a noticeable effect on our freedom. If nudges don’t forbid any options and are easy and cheap to avoid, how could they possibly be problematic? Along such lines, Thaler and Sunstein argue that nudges preserve our freedom, which is why they call their approach to public policy ‘libertarian paternalism’. However, at least when large-scale nudging is involved, it is far from obvious that there is no threat to our freedom. Most saliently, large-scale nudges raise concerns due to a) their effectiveness and b) their prevalence. Let me spell out these concerns in turn. Naively, one might assume that, in order to exert a significant amount of control over a person or over any other complex system, one needs to exert a significant amount of force. If this assumption were true, then, it seems, a gentle intervention such as a nudge couldn’t possibly be used to exercise control over people. Consequently, assuming that being free means having control over one’s life or one’s actions, nudges don’t affect people’s freedom, since they leave them in control. However, the assumption that exercising control requires exerting a lot of force is false (cf. Kipper 2020, 3–5). In particular, it is highly plausible that if one has precise knowledge of a person (or another complex system), one can control their behavior even with gentle interventions. To make this vivid, consider the behavior of chaotic systems. A chaotic system is one in which very small changes in its initial conditions have huge effects on how the system
evolves. This is commonly called the 'butterfly effect', to express the idea that, in principle, the flap of a butterfly's wings can cause the formation of a tornado in another part of the world. Most complex systems that occur in nature are chaotic. Apart from the evolution of weather systems, the occurrence of, for instance, turbulence in air- and water flow, of earthquakes, cardiac arrhythmias and of seizures is also subject to chaotic processes. The butterfly is of course oblivious to this consequence of its actions and thus it cannot sensibly be said to exercise control over the weather. But the fact that the evolution of chaotic systems is sensitive to very small interventions implies that whoever were to have sufficiently precise knowledge of a chaotic system could exercise a tremendous amount of control over it without exerting much force. Hypothetically speaking, even a butterfly could have significant control over the weather, if it had complete knowledge of global weather patterns. We can draw a general lesson from this discussion: In principle, if one knows and understands a system well enough, one can control this system with minimal physical effort. There is little reason to think that this doesn't apply to human beings as well. I should note that at present, we are still a long way from being able to exercise this kind of control over humans. As I discuss below, there is reason to think that current nudges are not very effective. My point here is just that in principle, given sufficiently advanced technology and sufficient information, human behavior can be influenced just like other complex systems. Accordingly, if someone had sufficiently detailed information about a person and was able to use this information to predict how this person responds to specific inputs, they could exercise significant control over them by means of gentle interventions. Consequently, large-scale nudges could diminish the control we have over our own lives and over our actions—and, thus, such nudges could restrict our freedom. The second feature of large-scale nudges that raises concerns regarding their effect on our freedom is their prevalence. Most obviously, even if each nudge only has a minor effect on our freedom, it is natural to think that the cumulative effect of nudges that affect every aspect of our life can be significant. Furthermore, it has often been noted that we don't really want each of our actions to be free. Acting freely means making choices, and choosing can be a burden. Sunstein (2016, 62), in discussing this issue, gives the following quote from Barack Obama:
You’ll see I wear only gray or blue suits. I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.
Primarily, it seems, what is valuable to us is to have control over our life—i.e., to live a free life. This includes having control over what kinds of decisions we make for ourselves, and over what kinds of decisions we leave to chance or to a nudge. Large-scale nudges could threaten this kind of control over our life. This is because they could affect too many of our actions and leave it outside of our control which kinds of choices we make for ourselves. The previous considerations show that even gentle interventions can be used to exercise control, especially if they happen on a large scale. Consequently, large-scale nudging has the potential to significantly restrict our freedom.
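To make the point about chaotic systems above concrete, here is a minimal sketch of my own, not drawn from the chapter: it uses the logistic map as a stand-in for a chaotic system, and the map, the parameter value and the size of the perturbation are illustrative assumptions only. Two trajectories that begin one part in a million apart soon bear no resemblance to each other, which is the sense in which precise knowledge of a system's state would confer outsized control:

def logistic_trajectory(x0, r=3.9, steps=40):
    """Iterate x_{n+1} = r * x_n * (1 - x_n); r = 3.9 lies in the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)   # one "weather system"
b = logistic_trajectory(0.400001)   # the same system, nudged by one part in a million

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")

After a few dozen steps the two runs differ by an amount comparable to the values themselves, even though they started out practically indistinguishable; a would-be controller who knew the initial state precisely could steer the outcome with tiny interventions, while one who did not could not even predict it.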
Control as the Dependence of One’s Actions on Oneself, and On No-One Else My previous discussion has relied on the notion of control. As I noted, freedom assumes that we are in control of our actions or of our lives. But what exactly does this control amount to? In what follows, I wish to elucidate the notion of control, but—once again—without committing myself to contentious claims about the nature of freedom. I will thus try to identify a condition for the kind of control involved in freedom that at least most parties in the debate on the nature of freedom can agree to. Let me start with the observation that in order to be in control of our actions—and, thus, to act freely—these actions have to depend on ourselves. A life determined by algorithms, or by other people—i.e., a life controlled by something or someone other than us—is a life in which our actions don’t depend on ourselves. In fact, there is an even more basic rationale for the idea that our actions should depend on us. Think of any of the myriad of actions you have performed today. Let us assume that you got up, brushed your teeth, took a shower, went to the kitchen, had breakfast, etc. For each of these actions, one can ask why this action occurred—one might, for instance, ask you directly why you did that. In each case, at least a part of the answer will involve your mental states: you were afraid to be late for your meeting, you wanted to keep your teeth clean and free of cavities and believed that brushing them helps you achieve
this, etc. Explanations of actions thus crucially involve the agent’s mental states. Such explanations are causal explanations. An agent’s actions thus causally depend on the agent’s mental states. This is the basic rationale for the idea that our actions should depend on us. If some of an agent’s bodily movements don’t causally depend on any of their mental states, these bodily movements don’t even qualify as actions.2 In these cases, the agent’s bodily movements are fully explained by something other than their mental states. But causal dependency comes in degrees. Many of our actions are partly dependent on our mental states and partly on other—external—factors. Even in cases in which an agent’s bodily movements depend to a large extent on such external factors, they can still count as actions. However, such actions are largely outside of the agent’s control, which implies that they aren’t free—or at least, not completely free. Consequently, an action that isn’t fully explained by the agent’s mental states isn’t completely free. Furthermore, the more an explanation of an agent’s actions needs to appeal to factors other than their mental states, the more the agent’s freedom is compromised. One might think that this line of reasoning rests on controversial assumptions about the nature of freedom, for the following reason: Assume that determinism is true, which states that every event is fully determined—i.e., can be fully explained—by the previous state of the universe together with the pertaining laws of nature. Determinism thus implies that the initial conditions of the universe (plus the laws of nature— for brevity, I will omit this qualification in what follows) fully explain any event, including any action. If this were to imply that any action is outside of our control and thus unfree—as my previous discussion seems to suggest—this would mean that incompatibilism is true, which states that freedom and determinism are incompatible. However, many philosophers reject incompatibilism and insist that there can be freedom even in a deterministic universe. It might thus appear as though my claim that actions that aren’t explained by an agent’s mental states are outside of the agent’s control and thus unfree, commits me to a contentious account of the nature of freedom—namely, incompatibilism. In what follows, I will explain why this appearance is mistaken. My discussion will also help bring out more precisely how freedom relates to explanations of actions.
2 I consider this to be an extremely plausible claim. But let me note that some proponents of libertarianism about free will, such as Chisholm (1964), would deny it.
It will be useful to further illuminate the relation between determinism and explanations of actions. As we just saw, if determinism is true, then each of our actions can be fully explained by the initial conditions of the universe. But in a deterministic universe, each action can also be fully explained by the conditions that obtained five minutes (or ten seconds, or ten thousand years) before the action took place. There can thus be several distinct explanations—even distinct complete explanations—of one and the same action, provided that these explanations appeal to events and conditions obtaining at different times. Consequently, even in a deterministic universe, an action can be explained by an agent's mental states. This shows that my assumption that free actions have to be explainable, to a significant extent, by an agent's mental states doesn't assume incompatibilism. It also highlights that an agent's freedom isn't necessarily compromised if there are alternative explanations of their actions that don't appeal to their mental states. Let me mention one common kind of case of this type. Our mental states are usually shaped, to a significant extent, by external factors. This means that if our mental states explain our actions, then so do these external factors, at least partly. Depending on the nature of these external influences, such shaping of our mental states can be problematic. But in itself, the fact that our mental states are shaped by external factors needn't affect our freedom. The previous discussion suggests that an agent's actions can be free even if these actions can be explained without appealing to the agent's mental states. Nevertheless, it is plausible that the existence of some kinds of alternative explanations for an agent's actions does compromise their freedom. Specifically, I want to suggest that freedom is compromised if an agent's actions can be explained by means of someone else's mental states. The following examples should serve to support this claim. Assume, first, that Conrad threatens Anatoly and his family to get Anatoly to spy on his employer. Eventually, Anatoly gives in to these threats. We can further assume that Anatoly is generally a loyal employee and law-abiding citizen. In normal circumstances, his behavior would thus be completely out of character. In the case at hand, the agent's actions causally depend on his mental states—for instance, on his fear that something bad is going to happen to him or his family and on his belief that the only way to avoid this is to spy on his employer. Anatoly's actions aren't free—at least, not completely free. On the view I suggest, this is because his actions depend heavily on someone else's mental states, namely on the goals and beliefs of Conrad. More generally, my view explains why coercion restricts the
freedom of its targets. One might object that there are cases of intuitively free actions where someone acts (sometimes even out of character) because someone else wants them to—for instance, to do a friend a favor. But in such cases, the agent's desire to help their friend is still the most salient explanation of their action. In a case of coercion, however, the agent's own preferences, values, etc. seem far less relevant to such an explanation. Next, consider Brian, who is a member of a religious cult led by Donna. Brian has abandoned his previous life and blindly follows the orders of Donna, who controls every aspect of Brian's life. Here, too, we are strongly inclined to judge that Brian's life and his actions aren't free. Again, I believe that the reason for this is that Brian's actions and the general course of his current life can be explained, to a large extent, by the mental states of someone else—namely, by the values, goals, beliefs, etc., of Donna. Being in control of one's life and of one's actions thus requires that the course of our life and our actions cannot be explained by someone else's mental states. Let me summarize the main results of my discussion. We saw that freedom requires control. In turn, in order to have control over our actions, they need to depend on us. That is to say, such actions have to be explainable, to a significant extent, by our own mental states. This doesn't preclude the existence of alternative explanations for our behavior: if an action is explained both by our mental states and by other, external factors, we can still be in control of this action. However, I argued that we aren't in control of those of our actions that can be explained by someone else's mental states. As we saw, this claim explains our judgments about cases of coercion and other cases where someone's freedom is compromised. Consequently, if our actions can be explained to a significant degree by someone else's goals, intentions, beliefs, etc., our freedom is compromised.
How to Devise Effective Nudges The point of a nudge is to influence people’s actions. Their effect on people’s behavior is even part of Thaler and Sunstein’s definition of nudges. As they put it: “A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behavior in a predictable way […]” (2008, 6). Accordingly, nudges that are especially effective—i.e., those that have a particularly strong influence on people’s behavior—are to be considered as especially successful. Since Thaler and Sunstein believe that
nudges are beneficial to those nudged, this suggests that the more effective a nudge is, the more beneficial it is. At the same time, highly effective nudges might raise concerns. Given my previous discussion, one might worry that such nudges exercise—or are used to exercise—considerable amounts of control over people's actions and over their lives. This would imply that highly effective nudges significantly compromise people's freedom. Against this background, it might seem comforting that extant types of nudges are typically not very effective. Thaler and Sunstein's writings on nudges give the impression that they are quite optimistic about the extent to which people's behavior can be influenced by nudges. But research on the effectiveness of nudging hasn't confirmed this optimism. It appears that, in the majority of cases, the effect sizes of nudges that have been investigated are quite moderate (see Hummel and Maedche 2019 for a recent meta-study). However, even if existing nudges aren't very effective, much more effective nudges may well be developed in the future. In what follows, I sketch some general reasons to expect that highly effective nudges will be developed. I then discuss ethical problems raised by such nudges, and more specifically by large-scale nudges. As we will see, these problems include, but aren't limited to, concerns about the effects of large-scale nudges on our freedom. Let me start by giving some possible reasons why most kinds of nudges that have been used aren't very effective. Thaler and Sunstein base their recommendations on results from behavioral economics. The relevant research typically tries to identify factors that influence human decision-making in general—i.e., on a population level. Likewise, most of the nudges that have been employed to date (as well as most nudges discussed by Thaler and Sunstein) target large groups of people indiscriminately. But different people's behavioral dispositions vary greatly. It is thus to be expected that a nudge's effect differs markedly between individuals. For the same reason, it is difficult to find very strong population-level effects. Above, we considered complex, chaotic systems. There, we saw that small differences in the initial conditions of such a system can have huge effects on the system's evolution over time. This means that the behavior of such a system is very difficult to predict unless one has detailed knowledge about this system. But if one cannot predict the behavior of a system, one cannot control its behavior, either. If we assume that humans are chaotic systems, or at least behave like chaotic systems in many respects, this suggests that highly effective nudges are very difficult to devise unless one has
both detailed knowledge about individual people and the ability to specifically target these individuals. Let me mention another possible reason why the effectiveness of existing nudges is limited. Any nudge is deployed in a specific context—this context is often called the 'choice environment'. Many features of the choice environment can influence how a person reacts to a nudge, but typically, it is difficult to control for all of these features. The character of the choice environment can thus decrease a nudge's effectiveness, since it exposes the nudge's targets to other, potentially counteracting influences. It is of course difficult to pinpoint why most existing nudges aren't as effective as libertarian paternalists might have hoped for. I should therefore add that to some degree, the preceding discussion had to be speculative. Nevertheless, I believe that the following conclusion can be drawn from this discussion: if one is able to control a nudge's choice environment, to obtain detailed knowledge about individuals and to target these individuals specifically, this should vastly increase the chances of devising far more effective nudges. In these respects, digital nudges seem particularly suitable. We are already constantly exposed to digital nudges. Just consider how often you see advertisements, "clickbait", etc. Certainly, these types of nudges would not be approved by libertarian paternalists such as Thaler and Sunstein. But there are other kinds of digital nudges that seem to be very much in the spirit of libertarian paternalism—such as those used by fitness apps to increase our physical exercise. Researchers have only recently started to pay closer attention to digital nudging (cf. Weinmann, Schneider and vom Brocke 2016 for a brief overview). It seems clear that digital nudges have great potential when it comes to influencing people's behavior. Digital choice environments are much easier to shape and control than physical environments. The designer of a digital nudge can determine precisely which influences its targets are exposed to when they make certain decisions. Furthermore, digital environments enable the collection of enormous amounts of data about the preferences of individual users. And they also make it possible to use this information by targeting specific individuals. Digital nudges thus have all the above-identified ingredients required for making nudges highly effective. It may never be the case that all the digital nudges to which we are exposed are highly effective, if only because these nudges can have conflicting goals. Nevertheless, there is little reason to think that highly effective nudges won't be developed.
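The contrast between population-level and individually targeted nudging can be illustrated with a small simulation. This is a hedged sketch of my own, not an implementation from the chapter: the response model, the two hypothetical nudge variants and all numbers are invented purely for illustration. When individuals respond in opposite directions, the average effect of a one-size-fits-all nudge is close to zero, while a nudger with person-level knowledge can pick the variant that works on each individual:

import random

random.seed(0)

# Hypothetical population: each person responds idiosyncratically to two nudge
# variants, A and B; B tends to work on those for whom A fails, and vice versa.
people = []
for _ in range(10_000):
    effect_a = random.gauss(0.0, 1.0)
    effect_b = -effect_a + random.gauss(0.0, 0.3)
    people.append((effect_a, effect_b))

n = len(people)
avg_a = sum(p[0] for p in people) / n           # everyone gets variant A
avg_b = sum(p[1] for p in people) / n           # everyone gets variant B
avg_targeted = sum(max(p) for p in people) / n  # each person gets the variant predicted to work on them

print(f"uniform A: {avg_a:+.3f}, uniform B: {avg_b:+.3f}, individually targeted: {avg_targeted:+.3f}")

On this toy model the uniform deployments hover around zero while the individually targeted one comes out strongly positive; this is, in miniature, why detailed knowledge of individuals and the ability to target them specifically is what makes nudges effective and, by the argument of this chapter, also what makes them a potential threat to control.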
Losing Control in a World of Large-Scale Nudges In what follows, I discuss the effects of large-scale digital nudges on our freedom. Such nudges can be divided into three categories, depending on whose preferences and wellbeing they are meant to promote—those of the nudgers, those of society, or those of the nudge's targets. As I argue, even the latter two kinds of nudges can lead to a loss of control on the side of those nudged, and thus, to a loss of freedom. Many of the large-scale digital nudges that we are exposed to are used to promote the goals of the nudgers—examples of this type of nudge are targeted advertisements. If such nudges were to become very effective and highly prevalent, much of our behavior would be determined by other people's goals. One problematic feature of such a scenario that is only indirectly related to freedom is that it would leave too much power in the hands of a few people. Moreover, it would be difficult to hold those exerting this kind of power accountable, since the nudgers, the nudges, and their aims might be opaque to us. Nudges that promote the goals of a few nudgers also have the potential to undermine our freedom. But, while I believe that the development of large-scale nudges of this type poses a real threat, they are decidedly not in the spirit of libertarian paternalism. In what follows, I will therefore focus on nudges designed to promote our goals. Thaler and Sunstein suggest several nudges that are meant to promote the public good. Nudges of this kind are also already used in digital environments—e.g., nudges designed to reduce energy consumption (cf. Gregor and Lee-Archer 2016) or to increase the use of environmentally friendly transportation (cf. Karlsen and Andersen 2019). The goals of such nudges may or may not align with our own goals, or with our own wellbeing. If these nudges are highly effective, this means that our actions can—to a significant extent—be explained by goals, values and beliefs that may not be ours. Given our previous discussion about the requirements of freedom, this implies that nudges of this type can significantly compromise our freedom. One might think that nudges that are designed to promote the goals and the wellbeing of those nudged are unproblematic, and cannot negatively affect our freedom. After all, it seems that the actions towards which they are supposed to guide us can be fully explained by our own preferences and our other mental states. But recall that our freedom can also be compromised if there is an alternative explanation of our action in
terms of someone else's mental states. If we are dealing with a highly effective nudge, our action can be explained to a large extent by the goals, values and beliefs of those who devised the nudge. To illustrate this, consider a (hypothetical) nudge that is maximally effective: its deployment guarantees that we perform the intended action. This action is fully determined by the mental states of the nudger. If, for example, the nudger had different beliefs about what promotes our goals or our wellbeing, they would have devised a nudge that would have guided us towards a different action. Our resulting action can thus be fully explained by the mental states of the nudger—in this case, their desire to promote our wellbeing in a paternalistic way and their beliefs about what this wellbeing consists in. I believe that it is also intuitively plausible that, in a case like this, we aren't in control of our actions—even if these actions promote our goals and our wellbeing. Accordingly, highly effective nudges can significantly compromise our freedom, even if they are intended to guide us towards actions that are good for us. This is because, the more effective nudges become, the more we lose control over the actions we are guided towards. Let me conclude this section by noting another reason why digital nudges might compromise our freedom. Since so many of our decisions today are made in digital environments and since digital nudges can be applied arbitrarily often, they have the potential to affect almost all of our actions. I mentioned above that we don't necessarily want all of our actions to be free. What is more important to us is to be able to live a free life, which includes deciding which of our actions we want to be in control of. But large-scale digital nudges have two characteristics that are suitable for undermining this control. Firstly, they are potentially highly effective, which means that if we are faced with such nudges, it is difficult to resist them and make our own decision. Secondly, they are highly prevalent, which means that they encroach on more and more parts of our lives. Taken together, this means that large-scale digital nudges can undermine our ability to decide which decisions we are in control of, and hence, they can compromise our ability to live a free life.
Conclusion I have argued that large-scale nudges, which will most likely be employed in digital environments, have the potential to compromise our freedom. This is because the actions towards which such nudges guide us can be explained, to a significant degree, by the mental states of those who nudge
us. It is thus natural to ask what we can do to prevent the development of such large-scale nudges. In my view, the best way to proceed is to protect our privacy, i.e., to take measures to prevent the collection and use of huge amounts of private data. For, as we saw, the development and deployment of highly effective nudges depend on detailed knowledge about individuals. Acknowledgements I would like to thank Bart Engelen and Zeynep Soysal for very helpful comments and discussion.
References Chisholm, Robert. 1964. "Human Freedom and the Self." In Robert Kane (ed.), Free Will. Blackwell. Gregor, Shirley, and Brian Lee-Archer. 2016. "The Digital Nudge in Social Security Administration." International Social Security Review 69(3–4), 63–83. Hummel, Dennis, and Alexander Maedche. 2019. "How Effective Is Nudging? A Quantitative Review on the Effect Sizes and Limits of Empirical Nudging Studies." Journal of Behavioral and Experimental Economics 80, 47–58. Karlsen, Randi, and Anders Andersen. 2019. "Recommendations with a Nudge." Technologies 7(45), 1–16. Kipper, Jens. 2020. "Irresistible Nudges, Inevitable Nudges, and the Freedom to Choose." Moral Philosophy and Politics. https://doi.org/10.1515/mopp-2020-0013. O'Connor, Timothy and Christopher Franklin, "Free Will", The Stanford Encyclopedia of Philosophy (Spring 2021 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/spr2021/entries/freewill/. Sunstein, Cass R. 2016. The Ethics of Influence: Government in the Age of Behavioral Science. New York: Cambridge University Press. Sunstein, Cass R., and Richard H. Thaler. 2003. "Libertarian Paternalism Is Not an Oxymoron." University of Chicago Law Review 70(4), 1159–1202. Thaler, Richard H., and Cass R. Sunstein. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press. Timpe, Kevin, "Free Will", The Internet Encyclopedia of Philosophy, ISSN 2161-0002, https://iep.utm.edu/, 10/23/2021. Weinmann, Markus, Christoph Schneider, and Jan vom Brocke. 2016. "Digital Nudging." Business & Information Systems Engineering 58, 433–436.
Metaphors We Nudge By: Reflections on the Impact of Predictive Algorithms on our Self-understanding Jos de Mul
Metaphors of the Human Humans are unfathomable beings. This applies not only to the actions and motives of others, but also to ourselves. Or, as Friedrich Nietzsche formulates it in the preface of Genealogy of Morality (1887): "We knowers, we are unknown to ourselves. […] We remain strange to ourselves out of necessity" (Nietzsche 2006, 3). That sounds counterintuitive. Of all things,
This chapter consists of an adaptation of a lecture given in The Studio of Science Museum NEMO Amsterdam, on the occasion of the exhibition Bits of You (September 2021–May 2022). An abbreviated Dutch version of the text has been published in the weekly magazine De Groene Amsterdammer, March 23, 2022. My colleagues Julien Kloeg and Awee Prins were kind enough to read and criticize the draft version of the present chapter.
J. de Mul (*) Erasmus University Rotterdam, Rotterdam, The Netherlands e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Katz et al. (eds.), Nudging Choices Through Media, https://doi.org/10.1007/978-3-031-26568-6_3
aren't we the most familiar with ourselves? That may be so, but to think that this leads to self-knowledge is, according to Nietzsche, the mother of all errors. In the second edition of his The Gay Science, also published in 1887, Nietzsche explains why this is the case: "The familiar is what we are used to, and what we are used to is the most difficult to 'know'—that is, to view as a problem, to see as strange, as distant, as 'outside us' …" (Nietzsche 2002, 215). Is that problematic character of self-knowledge why, in our attempts to understand ourselves, we have traditionally compared ourselves with and distinguished ourselves from beings that lie outside of us and which fundamentally differ from us, such as immortal gods and animals? Since the rise of modern technology and the associated mechanization of the worldview, the machine has become a beloved metaphor: "With the emergence of the mechanical philosophies of the seventeenth century, and the ambition to give an account of the whole of nature in terms of inert matter interactions alone, it was only natural to think of life as nothing more than a specific type of machine, the difference between organisms and mere artificial automata reduced to a quantitative one, residing solely in the degree of complication. One should thus place the effort of developing a mechanical paradigm in the context of the historical emergence of modern science, with the successful appearance, especially after Galileo, of a new physics, opposed to classical Aristotelian physics and in an ongoing struggle with the animistic worldview." (Marques and Brito 2014, 78). A century after Descartes described the workings of the human body in purely mechanistic terms, although he made—perhaps for fear of being persecuted for heresy—an exception for our immortal, immaterial soul, Julien Offray de La Mettrie in his radical Machine Man (1748) also explained the human mind as a sheer product of material processes (La Mettrie 1996). According to La Mettrie, we are, like other animals, merely machines, at most more complex. Or, as Daniel Dennett, a modern follower of La Mettrie, puts it—quoting a favorite pop philosopher, Dilbert—"moist robots" (Schuessler 2013). The machine metaphor has turned out to be a particularly fruitful one. Metaphors are more than "ornamental varnish" as may be seen in the case
of modern medicine. When they become conventional and widely understood, they fit our accounts of truth in the same way as nonmetaphorical sentences do (Lakoff and Johnson 1980, 172). Metaphors are important cognitive tools that not only help us capture unknown or elusive things in familiar concepts but also orient our actions. Understanding the heart as a mechanical pump not only introduced a new understanding of the circulatory system but also nudged1 the users of the conceptual metaphor man is a machine to act accordingly (Peterson 2009).2 Conceptually, the metaphor opened the way to repair or replace broken parts, such as a defective heart valve, just as we do with machines. It took a few centuries to achieve, but since 2021 doctors can even replace a defective heart with a completely artificial heart, in this case one developed by the French company Carmat (Bailey 2021). Thus, the conceptual metaphor became what we might call a material metaphor (Hayles 2002, 22). An idea that has become reality. Man is not a machine, but is literally made into a machine on the leash of metaphor.3
1 I am using the concept 'nudge' here in the definition given by Richard Thaler and Cass Sunstein in their popular book Nudge: Improving Decisions About Health, Wealth, and Happiness: "A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates." (Thaler and Sunstein 2008, 6). 2 In this chapter, following the notation used by Lakoff and Johnson, conceptual metaphors will be written in small capitals in order to distinguish them from the linguistic metaphors that have their ground in them. 3 Although conceptual metaphors may be regarded as nudges in the sense that they are more or less "cheap to avoid", material metaphors like artificial hearts become, as soon as they have been implanted, "hard-wired mandates" in the sense that their impact on the receiver cannot be avoided. As we will see in the remainder of this chapter, this does also apply to other material metaphors like predictive algorithms, especially when they are implemented in law-enforcing decision systems. However, insofar as such material metaphors may subsequently affect, but not necessarily determine, our self-understanding, they may act as conceptual nudges as well.
Well, to a degree. No matter how fruitful the machine metaphor has turned out to be, we must not confuse it with reality.4 At best, metaphors reveal a certain aspect of what and who we are and put that aspect in the foreground, obscuring other aspects. Although mechanistic-material metaphors dominate the current life sciences (as expressions like "sensomotoric apparatus", "molecular machines", "hard-wired pathways" illustrate), they still fail to get a grip on organic functions like agency and self-organization (Soto and Sonnenschein 2020), let alone on our inner world of subjective, (self-)conscious and intentional experiences, such as bodily sensations, thoughts, perceptions, motives, feelings, and the guidance of our lives by our moral, aesthetic and religious values (Zahavy 2005; De Mul 2019). The living experience of the color, smell and taste of strawberries, the feeling of being in love, or the beauty of Bach's cello suites cannot be described in mechanical terms. Even if you could capture these first-person, transparent and (self-)conscious experiences in a quantitative, mathematical formula, it would not equal having that qualitative experience of intentional objects itself.

4 According to Nietzsche, the truths of concepts actually "are illusions of which we have forgotten that they are illusions—they are metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins." Whereas metaphors spring from living imagination, concepts display "the rigid regularity of a Roman columbarium" (Nietzsche 1980, 881–2, translation JdM, cf. the chapter "Frozen metaphors" in De Mul 1999, 35–73). In a similar spirit, Paul Ricoeur distinguishes between 'living metaphors', metaphors of which we know that they are metaphors—an imaginative, meaning-creating 'seeing x as y'—from 'dead metaphors', metaphors of which we are no longer aware that they are metaphors (Ricoeur 2010, 142, 305). We find similar thoughts in post-positivistic philosophy of science, in which it is stated that the analogies on which metaphors are based make it possible to map new, not yet well understood phenomena (Hesse 1966; Keller 2002). For example, the wave theory of light found its inspiration in the wave character of physical media such as water. As a result, there are also shifts in the meaning of the terms used. For example, the application to light did not leave the meaning of the term 'wave' untouched. Metaphors mobilize concepts and ontologies. In doing so, they not only reveal similarities, but also open our eyes to the differences between the things that are brought together in the metaphor. For example, there are properties of light waves that we do not find in the waves of the sea. Some philosophers of science go a step further and argue that actually all scientific models should be understood as metaphors. Even models expressed in seemingly literal terms or in mathematical formulas are not based on a one-to-one correspondence between elements of the model and of reality. They are more or less fictitious abstractions, the value of which is measured not so much by a literal correspondence, but rather—pragmatically—by the extent to which they enable us to explain, predict and control events (Van Fraassen 1980).
As Nietzsche expresses it in The Gay Science: "An essentially mechanistic world would be an essentially meaningless world! Suppose one judged the value of a piece of music according to how much of it could be counted, calculated, and expressed in formulas—how absurd such a 'scientific' evaluation of music would be! What would one have comprehended, understood, recognized? Nothing, really nothing of what is 'music' in it!" (Nietzsche 2002, 239). After all, "I do not see color-sensations but colored things, I do not hear tone-sensations, but the singer's song." (Husserl 1970, Vol. 2, 99). But while everyone is familiar with such qualitative experiences (also known as qualia), the volatility of our inner world makes them even more difficult to capture than our bodily characteristics, which are part of the physical, measurable outside world. That is why, when it comes to our inner world, we have to resort to non-mechanistic metaphors. Because the Christian "immortal soul", in which Descartes still sought refuge, has lost credibility, stories have become a popular identification model. After all, telling and listening to stories are important characteristics of human life (Gottschall 2012). When we try to understand ourselves and others, we often resort to stories. Moreover, our lives share essential characteristics with stories. Like a story, our life takes place in time, has a beginning and an end, is intertwined with memories of past events and anticipations of future events, and imbued with motives, goals, reasons, feelings and values (Ricoeur 1991). Like stories, our lives are characterized by the opposition of freedom and destiny, by chance and opposition. Our life is an intentional project, ceaselessly interrupted by unexpected events.5 And unlike the material processes in the outside world, which are determined by general laws, each life story is unique. Sure, there are familiar "life genres"—quests, adventures, tragedies, romantic comedies, farces—but what makes real and fictional life stories so intriguing is precisely their uniqueness and singularity. Because of this entanglement of stories and human life, the conceptual metaphor life is a story has become a common metaphor (Lakoff and Johnson 1980, 172–175). And as with the machine metaphor, the story metaphor is also a conceptual and material metaphor in one. The story, argues philosopher Paul Ricoeur, not only offers a striking picture of our lives, but we actually construct our "narrative identity" through stories (Ricoeur 1992, Holstein and Gubrium 2000). From the many and often ambiguous life experiences we construct an explicit life story and then

5 "A story is a choice that is interrupted by something accidental, something fatefully accidental; this is why stories cannot be planned, but must be told" (Marquard 1991, 120).
identify with it.6 Other people play an important role in narrative identity. We identify ourselves with the life stories of real and fictional role models, while others play various roles in our life story (as for instance parents, children, lovers, neighbors, colleagues, officials, opponents, strangers, enemies) as well. Moreover, other people also form the audience to whom we tell our life stories. And last but not least, we play various roles in the life stories of others. The machine metaphor and story metaphor can coexist peacefully, for example when we use them in different contexts. When we break a leg and consult a surgeon, we will probably prefer the machine metaphor, whereas the story metaphor is more convenient when we talk about a love affair with our friends. But there are also situations where different conceptual and material metaphors can be in opposition or even clash. For example, this happens when a psychiatrist, consulted by a person with depression, has to choose between a conversation about the reasons for the depression and one about the right medical treatment for its supposed genetic-material cause. Or when a jury or judge has to choose between holding accused persons fully responsible for their actions and deciding against this because such persons were forced to execute these actions or are of unsound mind (non compos mentis). In some cases conflicting metaphorical conceptualizations may be mixed: for example in the medical context by a combination of therapeutic talks and pills, or in the juridical context by declaring suspects partially unaccountable for their behavior.
Information Processing Systems Humans are inexhaustible beings. The history of technology ceaselessly offers new metaphors with which we interpret our existence and disclose and create new dimensions of being human. Where the millennia-old ‘alpha technology’ of the spoken, performed and written story gave us a narrative identity and the more recent ‘beta technology’ of the machine a mechanistic self-image, with the development of the computer and the information technologies based on it, a new source of self-identification 6 Because of this identification our ‘narrative identity’ is not, as Dennett wrongly states in ‘The self as a center of narrative gravity’, the same kind of theoretical fiction as the ‘center of gravity’ in physics (Dennett 1992). Although both ‘narrative identity’ and ‘center of gravity’ are theoretical fictions, unlike the ‘center of gravity’ that we attribute to a physical object, ‘narrative identity’ is a ‘fiction’ that we are and actively live. We might call it a fictitious entity, but as a material metaphor, it is a fiction that causes real effects.
has come to the fore.7 With the computer metaphor, the human being— and in particular the human brain—is represented as an ‘information-processing machines’ (humans are computers8) or the processed products of computational systems (humans are computations9). The fascinating thing about the computer is that it bridges the gap between the alpha and beta technologies mentioned. Although computers are machines, or more precisely: the subclass of automatons, machines which can independently process data with the help of a hard-wired or software program, the processed data also include stories (for example in narrative computer games) and other linguistic and language-based phenomena, such as law (legal expert systems) and politics (voting aids). And in social interaction, the computer also functions as a ‘gamma technology’, regulating human actions. Social networks such as Facebook and TikTok, dating sites such as Tinder and Parship and gsm, ble, gps and nfc chips in mobile phones and smart cards nudge our actions or even determine with whom and in what way we can communicate or meet other persons, which spaces or means of transport we are allowed to enter and what financial transactions we can carry out, as but a few examples. Because the programmable computer “could in fact be made to work as a model of any other machine,” Turing called it “a universal machine” (Turing 2004, 383). Information technology is therefore also referred to as a “system technology,” not only because it consists of a complex multitude of heterogeneous components (e.g., hardware, software, protocols, 7 The terms alpha, beta and gamma technologies used in this section refer to technologies that relate respectively to knowledge and cultural transfer (such as writing, printing, film, radio, TV), to the interaction with nature (hand axe, steam engine, nuclear plant, microscope, telescope, etc.) and social interaction (means of transport and communication such as cart, ship, train, plane, letter, telephony, e-mail). Using the first three letters of the Greek alphabet to distinguish these three types of technology is inspired by the custom to refer to the three classes of sciences (humanities, natural sciences and social sciences) as alpha-, betaand gamma-sciences. 8 It is not without irony that in this conceptual metaphor computers are used as source domain and human beings as target domain, as originally the human being was used as the source domain and the machine as the target domain. When, in 1936, Turing wrote ‘On computable numbers’, in which he famously introduced the idea of a programmable ‘computer’ (nowadays known as ‘the Turing computer’), a computer was not a machine at all, but a human being, working as a mathematical assistant (Turing 2004, 40, 58–90). 9 This conceptual metaphor is part of a family of conceptual metaphors, ranging from reality is a massively parallel computing machine (Steinhart 1998) and life is a computer simulation (Bostrom 2003) to mind is processed by the brain (Barrett 2021).
legislation, designers, users), but also because it is intertwined with almost all other systems and processes in society. That is why information technology increasingly functions as the ‘operating system’ of our social and personal lives. In this sense, the computer metaphor is, just like the machine and story metaphors, both a conceptual and a material metaphor, but because of its universal applicability, affecting virtually all aspects of human life, its impact is especially pervasive. The thesis I want to defend in the remainder of this chapter is that this metaphor not only nudges us to think of ourselves as a database, but also literally transforms us into databases, often in compelling and unavoidable ways.
Databases Digital and digitalized data—such as numbers, words, images, and sounds—are the raw material for computers. Basically, computers receive, store, process and output data. For that reason, data management constitutes the basis of information technology and databases play a crucial role in virtually all computer programs. However, the term ‘database’ can refer to different things. In the first place, it is used to indicate a collection of data, but it can also refer to the physical carrier of that data (book, card box, computer memory) or to the way data are organized (the database model). After all, digital database management systems are not only used to store data, but also to maintain and query them (Lemahieu et al. 2018). With regard to the database, four basic operations can be distinguished, which are sometimes called the ABCD of persistent storage: Add, Browse, Change and Delete.10 Databases have a long history, which is characterized by increasing flexibility. The old-fashioned telephone book is an example of a rigid paper database. Although it was quick and efficient to look up (browse) a subscriber’s name, adding, changing, or deleting data required phone books to be reprinted entirely, much to the chagrin of the postmen who had to distribute them. Card index boxes with contact details had the advantage that data more easily can be added, changed and deleted. However, the problem remained 10 These four operations equal four basic commands in the Structured Query Language (SQL), which is used to design and operate relational databases. Synonyms of ABCD are ACID (Add, Create, Inquire, and Delete) and CRUD (Create, Read, Update and Delete). See for a short explanation of these SQL commands (Sulemani 2021).
that this type of database, like the phone book, could only be searched along one specific attribute, for example the name of the subscribers. If you want to systematically browse through another attribute, you have to rearrange the card index box in its entirety or use multiple card boxes next to each other, such as in the old-fashioned library, in which the same data are arranged twice, in two different card index boxes: once by author and once by subject. And copying and distributing card boxes remained a time-consuming activity, too. With the computers connected in networks, the flexibilization of data management has taken off. Already with a simple spreadsheet program, a contact list can be organized by any attribute (name, address, zip code, telephone number, e-mail address, age, partner, profession, etc.) by a single mouse click. Copying and exchanging data or even complete databases is also quite easy. The first digital database management systems were introduced in the 1950s and in the course of their history different database models have been developed, such as hierarchical and network databases. The relational database, developed in the 1960s and 1970s, is in a way the highlight of this flexibilization process. In this type of database, which is based on set theory and predicate logic, the object of data management is ‘atomized’, divided as much as possible into single, non-divisible elements (Codd 1970). With the help of queries, the user can combine virtually every possible combination of those elements, in order to prioritize, classify, associate and filter them. This increase in flexibility is connected to the fact that in relational databases, queries are not pre-defined. Customers, querying the database behind the websites of the web shop, database streaming service or dating site, can easily select the article, film or partner that meets all their criteria. And if a mobile phone company has stored all the customer’s data in a relational database, it could, for example, easily select all customers whose telephone subscription expires in two months in order to make them an appropriate renewal offer, based—for example—on the previous subscription type, previously purchased telephone and/or payment history. Database management systems are potentially economic goldmines. Although there are also non-profit database projects like Wikipedia, the boom of the internet has strongly been stimulated by commercial enterprises. The financial successes of multinational Big Tech companies, such as the internet and e-commerce giants Google and Amazon, social networks like Facebook, Instagram, TikTok and Twitter, streaming services
like Spotify and Netflix and dating sites like Tinder and Parship are based on the transformation of both the customer base and the products on offer into large relational databases. When you order a book from Amazon, your personal data is not only related to your search and purchase history and your reviews, but also to those of all other customers. Based on these correlations Amazon makes you recommendations of the type “Customers who bought x also bought y.” Meanwhile, more than 35% of all items sold by Amazon are the result of such recommendations, and no less than 75% of what we stream on Netflix (Clark 2018).11
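To make the database discussion above tangible, here is a minimal sketch, not drawn from the chapter, of the four 'ABCD' operations and of the kind of ad hoc query described in the mobile-phone example. It uses Python's built-in sqlite3 module; the table layout, the column names and the two-month renewal query are illustrative assumptions only:

import sqlite3

con = sqlite3.connect(":memory:")  # a throwaway, in-memory relational database
con.execute("CREATE TABLE customers (name TEXT, phone TEXT, subscription_ends TEXT, plan TEXT)")

# Add
con.execute("INSERT INTO customers VALUES ('Ada', '555-0101', '2023-08-01', 'basic')")
con.execute("INSERT INTO customers VALUES ('Bob', '555-0102', '2024-01-15', 'premium')")

# Change
con.execute("UPDATE customers SET plan = 'premium' WHERE name = 'Ada'")

# Delete
con.execute("DELETE FROM customers WHERE name = 'Bob'")

# Browse: queries are not pre-defined, so any attribute or combination of
# attributes can serve as a filter, e.g. subscriptions expiring within two months.
rows = con.execute(
    "SELECT name, phone, plan FROM customers "
    "WHERE subscription_ends BETWEEN '2023-06-01' AND '2023-08-01'"
).fetchall()
print(rows)  # [('Ada', '555-0101', 'premium')]

The point of the sketch is the final query: nothing in the table was set up in advance for 'expiring subscriptions', yet that attribute can be combined freely with any other, which is precisely the flexibility of the relational model that the recommendation systems discussed above are built on.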
Databasification of Human Life However, the impact of the 'databasification' of human beings is not limited to the economic sphere. Thanks to the 'datafication of everything', more and more data are linked to our personal data, from our financial transactions, debts, energy use, telephone traffic, website visits and movements through geographical space to our medical and genetic data, ethnic characteristics and sexual and political preferences. In addition, numerous, more or less heterogeneous and structured databases are increasingly being merged into gigantic 'data warehouses' and even bigger 'data lakes' (Big Data), which are subjected to various forms of data linking, data mining and data analysis. With the help of profiling techniques, patterns and correlations in the characteristics and behavior of groups and individuals are uncovered for the purpose of diagnosing, predicting and controlling social interactions (Lemahieu et al. 2018, 549–730). Various forms of artificial intelligence (AI) are also used. The two main types are classical rule-guided algorithms and self-learning neural networks (Fry 2018). In the first type, the instructions of these algorithms are drawn up by a human and are direct and unambiguous. As with a recipe, the instructions are step-by-step. An example of this type is the decision tree used to predict flight risk or public safety risk when making pretrial release decisions. In the case of neural networks (nowadays branded as self-learning or deep learning), the program independently, and in a way that even for experts is unpredictable and unexplainable, discovers patterns in large data collections. Doctors, for example, use neural networks to distinguish

11 In 2012, Amazon acquired a patent for an "anticipatory shipment" algorithm, which sends items in your direction even before you've ordered them or even knew you'd like to do so in the near future (Lomas 2014).
Doctors, for example, use neural networks to distinguish benign from malignant tumors, and bioinformaticians use databases with clinical and genetic data from patients to find out which genetic abnormalities correspond to certain disease patterns, on the basis of which molecular geneticists can develop a therapy (for example, by using CRISPR/Cas9, a programmable genetic scissor, to replace pathogenic genes with healthy ones) or predict what chance a still healthy person has of developing a specific disease in the future, so that the patient is nudged to adjust his lifestyle if needed. Neural networks often perform better in diagnosing and predicting diseases than medical experts.12 Moreover, with the help of a genetic database, it is possible to recombine genes of humans and other organisms (De Mul 2021). For example, human genes are 'built into' the genome of sheep to produce medicines for hemophilia and cystic fibrosis. And in January 2022, a patient in Baltimore with severe heart disease was implanted with a pig's heart after it underwent genetic modification so that it will, hopefully, not be rejected by the patient's immune system (Reardon 2022). Such human-animal combinations are organic metaphors of database technology. The development of the electronic computer in the 1940s was strongly stimulated by the Second World War, especially by demands in the domains of ballistics and encryption and decoding. Many contemporary diagnostic, predictive, and control-oriented forms of Big Data analysis continue these kinds of computing. Governments, for example, use these types of data analytics to detect potential fraudsters, criminals and terrorists, or, often in close collaboration with scientists of different stripes, to predict socio-economic, financial, political, military, epidemiological and climatic trends and developments. The aforementioned examples indicate that database technologies and data analytics may have a disruptive societal impact.
12 In 2019, a large systematic review and meta-analysis examined 14 studies in which the performance of deep learning networks was compared with that of health-care professionals and which provided enough data to construct contingency tables, enabling the calculation of the sensitivity (do the networks and professionals see what they need to see?) and specificity (don't the networks and professionals see things that aren't there?). The researchers found that present deep-learning networks already perform slightly better than the professionals: "Comparison of the performance between health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87⋅0% (95% CI 83⋅0–90⋅2) for deep learning models and 86⋅4% (79⋅9–91⋅0) for health-care professionals, and a pooled specificity of 92⋅5% (95% CI 85⋅1–96⋅4) for deep learning models and 90⋅5% (80⋅6–95⋅7) for health-care professionals." (Liu et al. 2019).
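The sensitivity and specificity mentioned in the note above are computed from a contingency table of true and false positives and negatives. The following definitions are the standard ones and are given here only to make the trade-off explicit; they are not specific to the data of Liu et al.

\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
\]

Here TP and FN are the cases correctly flagged and missed, and TN and FP the cases correctly cleared and wrongly flagged. A system that flags every case reaches a sensitivity of 1 at the cost of a specificity of 0, and vice versa, which is why the two must always be balanced against each other.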
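By way of contrast with self-learning networks, a rule-guided procedure of the kind mentioned above can be written out exhaustively, step by step. The following Python fragment is purely illustrative: the factors, thresholds and labels are invented and do not reproduce any actual pretrial risk instrument.

def pretrial_release_advice(prior_failures_to_appear, pending_charges, violent_offense):
    # Every step is an explicit, human-authored rule: the 'recipe' character
    # of classical rule-guided algorithms. Nothing here is learned from data.
    if violent_offense:
        return "detain pending hearing"
    if prior_failures_to_appear >= 2 or pending_charges >= 3:
        return "release with supervision"
    return "release on recognizance"

print(pretrial_release_advice(prior_failures_to_appear=1,
                              pending_charges=0,
                              violent_offense=False))

Such rules can be read and contested line by line; the weights inside a trained neural network allow no comparable reading, which is the asymmetry taken up again later in this chapter.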
In the domain of economy and finance it has disclosed the era of "surveillance capitalism" (Zuboff 2018; Sadowski 2020), in science and technology it has led to new digital epistemologies and paradigm shifts (Kitchin 2014), and in governance it has inspired new ideas and practices of biopolitics (Johns 2021).
Weapons of Math Destruction
Although it is undeniable that the datafication of human life has countless useful applications, in recent decades the dark sides have also become visible. In 2008, non-transparent financial algorithms were at the root of a global banking crisis (Reyes 2019), and two years later, algorithms that help stockbrokers automatically sell when prices fall led to a Flash Crash (Poirier 2012). The business model of Big Tech companies such as Facebook and Google, which aims at the generation of as much marketable data and traffic as possible, has not only led to gigantic profits for these companies, but also to fake news, anti-social behavior and social tensions. They not only predict our behavior but also influence and modify it, often with disastrous consequences for democracy and freedom (Zuboff 2018). The damage was not limited to the financial and economic world. Whistleblowers such as Edward Snowden revealed that civilian and military intelligence and security services in the US illegally and widely tapped data, and in 2018 the equally illegal manipulation of Facebook users' data by Cambridge Analytica, benefitting the Trump campaign, led to great outrage (Hu 2020). Nor was the damage limited to the US. In China databases and data analysis are being used en masse to develop new forms of digital disciplining and biopolitics, as the social credit system and the Uyghurs' ethnocide in Xinjiang province show (Roberts 2018). And Russian trolls, often working in close cooperation with the government, are actively undermining Western democracies by spreading fake news via social media. With regard to the pitfalls associated with database technology, different causes can be distinguished. In the first place, they can be the result of faulty data. A well-known adage in data management is 'garbage in, garbage out' (Mayer-Schönberger and Cukier 2013). For example, medical expert systems often only use the data of white male patients, so there is a good chance that women or non-white people will be misdiagnosed or prescribed the wrong therapy (Feldman et al. 2019; Ledford 2019). Although the Latin word 'data' literally means 'given', in reality data are never simply given, but are created and selected.
Even seemingly objective personal data such as date of birth, gender and nationality are not natural facts, but are assigned to people on the basis of certain cultural conventions and historical developments. For example, a date of birth depends on the use of a Gregorian or Islamic calendar. And the current gender discussions show that 'gender' is not a simple fact, but a choice that is not without prejudice (bias). Whether one can choose from two genders or, as with Facebook since 2014 (Goldman 2014), from 58 different genders (which for Facebook means profit, from the point of view of both atomization and marketing) depends on the choices offered to the user. And whether categorizations such as country of origin, ethnicity, religion, political preference or criminal record are part of governmental databases is the result of political decisions. Secondly, much depends on the quality and the correct interpretation of the outcome of the algorithms used. Data mining is all about discriminating, that is: making distinctions. Does the image concern a benign or a malignant tumor, is this person a fraudster or not, does this music fall within the taste pattern of the consumer or outside it? A middle ground must be found between 'sensitivity' (does the algorithm see everything that it needs to see?) and 'specificity' (does it leave out what does not belong to the category?). That is a precarious balance where mistakes can easily arise (Fry 2018). Moreover, not only do people have biases, but algorithms also develop them. We see this, for example, with AIs such as PredPol, a predictive policing program aimed at predicting and preventing criminal behavior by means of where-who-when registration of certain forms of crime in the city (Karppi 2018). If, on the basis of such a prediction, more surveillance is carried out in a certain neighborhood and more criminals are caught as a result, this has a self-reinforcing effect (it becomes a self-fulfilling prophecy), because more crime data are collected in those neighborhoods, which in turn leads to a greater deployment of police. The profiling of possible perpetrators has the same effect, because if more people from a certain ethnic group are arrested, that group will also be more strongly represented in the crime data. This happened, for example, with the artificially intelligent System Risk Indication (SyRI), which was used in the Netherlands by the Tax and Customs Administration to predict allowance fraud and which used 'dual nationality' as one of the selection criteria between 2012 and 2015 (Van Bekkum and Zuiderveen Borgesius 2021). This also led to ethnic profiling. In these cases, discrimination not only leads to distinction, but also to unequal treatment of citizens, which is contrary to the first article of the Dutch constitution.
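The self-reinforcing dynamic just described can be made visible with a deliberately crude toy simulation in Python. All numbers below are invented, and the model is far simpler than systems such as PredPol or SyRI.

import random

random.seed(0)

true_crime_rate = {"A": 0.05, "B": 0.05}  # both districts identical by construction
patrols = {"A": 60, "B": 40}              # district A happens to start with more patrols
recorded_crime = {"A": 0, "B": 0}

for year in range(10):
    # Recorded crime grows with patrol presence: crime is mostly found where one looks.
    for district in patrols:
        observed = sum(random.random() < true_crime_rate[district]
                       for _ in range(patrols[district]))
        recorded_crime[district] += observed
    # 'Predictive' reallocation: next year's 100 patrols follow this year's records.
    total = recorded_crime["A"] + recorded_crime["B"] or 1
    patrols = {d: round(100 * recorded_crime[d] / total) for d in patrols}

print(recorded_crime, patrols)
# The district that attracts more patrols tends to accumulate more recorded
# crime and therefore still more patrols, although the underlying rates are equal.

Because the system only records crime where it looks, the allocation drifts towards whichever district happened to be patrolled more at the start, which is the self-fulfilling prophecy described above.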
Algorithms, argues mathematician Cathy O’Neil in Weapons of Math Destruction, not only inadvertently cause harm, suffering and injustice, but they are often deliberately abused by Big Tech companies, bankers, stockbrokers and governments to enrich themselves at the expense of others or to undermine the rule of law and democracy (O’Neil 2016).
Dehumanization by Malgorithm
The philosopher Martin Heidegger argued in the 1950s, the formative years of the electronic computer, that human beings think they rule the earth with the help of modern technology, but are doomed to become the "most important raw material" (wichtigsten Rohstoff) of technological control themselves (Heidegger 1973, 104). The danger Heidegger is pointing at goes beyond the aforementioned damage caused by 'malgorithms'. On a more fundamental level, what it is to be human, humanity itself, is at stake. The datafication of humans reduces human beings to a quantifiable and calculable 'thing'. This dehumanizing reduction is a common feature of the computer metaphor and the machine metaphor, but the computer metaphor goes a crucial step further by making not only the human body an object of calculation, but our qualitative experiences (beauty, justice, love) as well. The ideology of 'Dataism' (De Mul 2009), the belief that everything can be captured in quantifiable data, that these data are objective and that algorithms and artificial intelligences are infallible, leads to the dismissal of feelings and qualitative judgments, because within this ideology they are considered to be 'subjective', 'irrelevant', 'misleading' or even 'illusory'. Predictive algorithms especially undermine our experience of freedom and responsibility. Morality and law are based on the idea that we have a certain freedom of action. Under normal human conditions, we are held accountable for our behavior because we could have acted differently than we de facto did. And we are judged on what we have done in the past. These ideas are closely linked to the story metaphor. Like all metaphors, this metaphor is a product of imagination and as such fictitious. But as a material metaphor, it functions as what Kant in his Critique of Judgement calls a "heuristic fiction", an imaginative fiction with real effects, one that actually makes us free and responsible (Kant 2007, A771). Predictive algorithms, on the other hand, claim to be able to determine what our future behavior and future circumstances will be.
They peg us down in the present as the consumer, patient, criminal or dissident that we will be in the future. In doing so, they undermine our narrative causality, our ability to steer our actions by reasons. But it is thanks to this very ability that we can be held accountable for our actions and their future consequences. A judge will therefore attach great importance in his judgment to the reasons that a suspect had for his actions. Big Data analysis, on the other hand, completely ignores narrative causality. Or as Mayer-Schönberger and Cukier put it, not without pride (sic): "Knowing what, not why, is good enough" (Mayer-Schönberger and Cukier 2013, 52). That statement is correct in the sense that Big Data analysis indeed is not focused on causal relationships, but only deals with more or less accidental statistical correlations. The fact that there is a remarkable correlation between eating ice cream and wildfires does not mean that eating ice cream causes forest fires, or that you could fight wildfires by banning ice cream: both simply peak in hot, dry weather. The situation is even worse if the correlation, erroneously understood as a causal relationship, is the result of discriminatory choices, such as in the 'childcare benefits scandal', which forced the Dutch government to resign in 2021. One of the causes of the discrimination was that having a dual nationality was used as a risk factor for fraud, which led the tax authorities to believe that having dual nationality is a cause of fraud and that you could combat fraud by punishing persons with dual nationality in advance, reversing the burden of proof (N.N. 2021). It is reminiscent of the visionary science fiction movie Minority Report, in which predicted crime leads to 'anticipatory imprisonment' of the suspects (Spielberg 2002). Spielberg's film revolves around the question of whether free will exists if the future can be predicted. The danger of predictive algorithms is not so much that they show that the freedom to determine your own future does not exist, but that they destroy this freedom in a double sense. On the level of the conceptual metaphor (computers can predict the future), they nudge us, consciously or unconsciously, to believe that the predictions are objective truths and that, for that reason, there is no alternative to the predicted outcome. But as this metaphor also acts as a material metaphor, it has real consequences: the victims were not only excluded from childcare benefits, but also fined and pushed into a downward poverty spiral. They were rendered guilty by the prediction. Although the punished were not, like the accused in Minority Report, locked up, they became, just like them, 'prisoners of prediction', robbed of their future.
Behind predictive algorithms is a malignant paradox. Predictive algorithms pretend to predict the future, but in fact they rely entirely on extrapolations of the past. That may be useful when it comes to events in inanimate nature, obeying fixed natural laws, but it ignores the openness to the future that characterizes narrative causality. Human beings are unfathomable and inexhaustible. As a result, the actions of individuals are never completely predictable. An inveterate atheist can convert on her deathbed, a criminal can repent, and a faithful partner can turn out to be a cheater. Historical events—wars, epidemics, scientific inventions—are also fundamentally unpredictable. Or, as Wilhelm Dilthey, one of the founders of the theory of narrative identity puts it: “We will never be done with what we call chance [Zufall]; what has become significant for our life, whether wonderful or fearsome, seems always to enter through the door of chance” (Dilthey 2002, 96). When humans are pulled through the digital shredder of the database, they lose their indivisible individuality and literally become dividuals, collections of digital data fragments in a data lake. That may be useful for Big Tech companies and governments in their quest for algorithmic profit, disciplining and biopolitics, but it also destroys those characteristics that give the human being its humanity: the experience of agency and responsibility.
Nudging Metaphors
It would be naïve to think that datafication, algorithms and artificial intelligence will disappear. And because they can be useful tools in our efforts to cope with the increasingly complex problems we face in the twenty-first century, such as the climate crisis, that option would not be desirable either. But it is of the utmost importance to prevent the disadvantages and dangers associated with these means (the widening of the gap between rich and poor, the undermining of the rule of law and democracy, and the destruction of human freedom and responsibility) from overtaking their usefulness and themselves becoming the greatest threat to human life. Fortunately, there is growing attention to the harmful consequences of using automated databases, and attempts are being made to design databases that respect economic justice, the rule of law and human dignity.
Fundamental themes are at stake, such as a fair right of ownership and use of the data produced by users, transparency and explainability of the algorithms and AIs used, democratic control of national and international database management in all social domains, and, last but not least, our very humanity. The challenges are huge. The opponents (Big Tech and authoritarian states such as China and Russia) are powerful and can only be contained by the cooperation of all forces that represent the aforementioned values, both in civil society and at (inter)national political levels. It is encouraging that awareness of the dangers of information technology and social media is growing and that initiatives are emerging both nationally and internationally to achieve better regulation. In April 2021, the European Commission presented an Artificial Intelligence Act. "The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law" (European Commission 2021a, b). According to this Act, AI systems must comply with these requirements in order to be admitted to the European market, and in November 2021 EU member states agreed on the Digital Services Act (DSA) and the Digital Markets Act (DMA), which are intended to protect users from harmful content and to prevent the Big Tech companies from abusing their power. In order to develop laws and regulation that can help us to prevent algorithms and artificial intelligences from causing harm, it is important to remain alert to the ontological and deontological implications of the conceptual and material metaphors that underlie database management systems, and to remain able to detect potentially harmful effects, to nudge them in the right direction, or to replace them with alternatives that help to make human life flourish instead of shrivel. This requires a 'material metaphor criticism', which is situated somewhere in between literary criticism, philosophy of technology, and political theory. Such a 'politico-poetic' project cannot escape being nudged by material metaphors (such as technologies), but at the same time it should in turn be able to nudge these metaphors. It should not only make us aware of the aforementioned dangers, but it should also be imaginative, because without our metaphor-creating imagination, we will not be able to nudge existing metaphors and create new ones when necessary. Humans should always remain in a narrative feedback loop in order to be able to nudge the metaphors that nudge us.
In order to nourish the vital imaginative powers that are needed for this task, we must open ourselves to the creative and inexhaustible recombinatoriality that characterizes human life.13 This article began with the observation that humans are unfathomable. One of the implications of this is that human actions, in the final analysis, are unexplainable. It is true that within the narrative view of human life we assume that people have reasons for what they say and do, but there is an end to justification. Sooner or later we come across the fluid foundations of justification.14 When we consider that the computer is a metaphorical transfer from human calculation to a machine, we should reverse the aforementioned computer metaphor 'humans are computers' into 'computers are human'.15
13 These imaginative powers have their roots in the recombinational character of human thought and language. Human language distinguishes itself by semantic compositionality. The meaning is determined not only by the meaning (semantics) of the constituent parts and the context in which they are used (pragmatics), but also by the way in which they are combined (syntax), from 'flying horses' in Greek mythology to 'thinking machines' in the epoch of modern technology. Human imagination is not alone in this; this recombinatory database ontology is a phenomenon that characterizes the entire physical, organic and psychic nature (matter, life and consciousness). Just as physics and chemistry investigate the recombination of elementary particles and elements, and the life sciences study the recombination of genetic elements, so do the humanities study the cultural recombination of human thoughts, artifacts, and actions.
14 Compare Wittgenstein's reflections on the final uncertainty of our beliefs: "94. But I did not get my picture of the world by satisfying myself of its correctness; nor do I have it because I am satisfied of its correctness. No: it is the inherited background against which I distinguish between true and false. 95. The propositions describing this world-picture might be part of a kind of mythology. And their role is like that of rules of a game; and the game can be learned purely practically, without learning any explicit rules. 96. It might be imagined that some propositions, of the form of empirical propositions, were hardened and functioned as channels for such empirical propositions as were not hardened but fluid; and that this relation altered with time, in that fluid propositions hardened, and hard ones became fluid. 97. The mythology may change back into a state of flux, the river-bed of thoughts may shift. But I distinguish between the movement of the waters on the river-bed and the shift of the bed itself; though there is not a sharp division of the one from the other." (Wittgenstein 1969, 15e)
15 See note 8 on the origin of the name Turing gave to his universal machine. In the present context, this reversal means that we must understand and treat artificial intelligence as an extension of human intelligence and not as an external form of intelligence. In terms of the theory of technological mediation, our relationship to artificial intelligences should be shaped as an embodiment relation rather than an alterity relation (Ihde 1990, respectively 72 ff. and 97 ff.).
In the case of neural computer networks, this is reflected, among other things, in the fact that, in the final analysis, they are not transparent and not explainable, just like the neural networks in the human brain (all neural networks are unexplainable). While we have to accept the unexplainability of people as a fact of life, we understandably have a lot of trouble accepting it in digital computers. That is also one of the main underlying reasons for the European demand that algorithms and artificial intelligences must be transparent and explainable. Where this is possible with traditional, rule-based AI, it is not the case with neural computer networks. True, an important part of current AI research aims at increasing the transparency and explainability of neural computer networks by developing rule-based software that can make neural networks locally explainable.16 However, total explainability is as impossible for artificial neural networks as it is for the neural networks in our brains. The complexity of both goes beyond human intelligence. The number of neurons in the neural network of a single human being is about 86 billion, almost as many as the number of stars in our galaxy! The recombinational complexity of artificial neural networks, however impressive it already may be, is still almost negligible compared to the complexity of the human brain. As artificial networks become even more complex, their unexplainability will only grow. The unexplainability of artificial neural networks is an additional reason for keeping humans 'in the loop' (artificial neural networks need humans). In the case of database management systems, humans must be active on both sides of the technological mediation: the representatives of the company or the governmental body designing and using the database management system as a commercial or bio-governmental instrument, as well as the consumers and citizens whose data are being used in the process of data profiling, mining, analytics, etc. in order to nudge and enforce their actions. The designers and users should be in the loop because someone must be responsible and accountable for the use of the database management system.
Although human actions may be as unexplainable as the processes of the neural network, the human operator, unlike the system, can be held responsible and accountable ('I don't know why I did it' doesn't make a human person less responsible and accountable, and neither should he resort to 'computer says no' kinds of arguments). In a way, this is part of the tragic condition of human life: in the final analysis, we are responsible and accountable for the unforeseen and unforeseeable consequences of our actions. In the case of the person who is the 'subject' (and therefore in danger of becoming the "most important raw material") of the database management system, there should always be a degree of freedom with regard to the decisions of the system. In this case two different types of 'material metaphors' have to be distinguished: nudging versus enforcing types of technology. An example of a nudging type is the so-called 'persuasive mirror', which displays what you will look like in five years' time if you get no exercise and live on booze and junk food (Knight 2005a).17 It nudges the person in the direction of a healthier lifestyle, but the mirrored person remains free to follow the nudge or not, and as such also remains responsible for the outcome. In the case of the Dutch childcare benefits scandal, the material metaphor was not nudging but acting as an external, (law-)enforcing entity. There was no room to escape the decisions of the system, and complaints about wrongful decisions were not heard by the tax authorities. Even when such systems would work perfectly, they are dehumanizing, because they take away our fundamental freedom and responsibility (Mulligan 2008). Although less scandalous at first sight, the hidden algorithms behind search engines and social networks are no less dehumanizing. With these types of behavior-enforcing database management systems, there should always be a possibility and a procedure to resist the decision, and to force the representative of the system to explain the decision or to withdraw it. In the case of erroneous systems, this may cause a lot of regulation and bureaucracy. However, this is a small price for saving the humanity of human beings.
16 In visual networks, a certain degree of explainability can be achieved by investigating to what extent which parts of the visual image determine the outcome. For example, neural networks used to select certain bird species from a multitude of images of different birds appear to assign a relatively heavy weight to the head (Ras et al. 2018).
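A minimal sketch of the kind of local probing just mentioned: occlude parts of an input and measure how much the model's output drops. The 'model' below is only a stand-in function, not an actual bird classifier, and occlusion analysis is just one of the explanation methods surveyed by Ras et al. (2018).

import numpy as np

def model_score(image):
    # Stand-in for a trained network's confidence in some class; here it simply
    # favors bright pixels in the top rows (say, 'the head' region).
    return float(image[:4, :].mean())

def occlusion_map(image, patch=4):
    # Score drop when each patch is blanked out: a larger drop marks a region
    # that was more decisive for the output.
    base = model_score(image)
    heat = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0.0
            heat[r:r + patch, c:c + patch] = base - model_score(occluded)
    return heat

img = np.random.default_rng(0).random((16, 16))
print(occlusion_map(img).round(2))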
17 “A computer builds up a profile of your lifestyle, using webcams dotted around your house. The images of your activities are sent to a software able to identify, for instance, when you have spent most of the day sitting on the couch, and will spot visits to the fridge. Once the profile is built up, another software will extrapolate how this behavior could affect your weight in the long term: eat too much and the computer will add pounds to your reflection in the mirror. Another package will work on your face. If you are a heavy drinker, your reflection will show early wrinkles, shadows under the eyes and blotchy skin” (Knight 2005b).
References
Bailey, Stephanie. 2021. "This new artificial heart responds to the patient." [CNN Business], accessed February 3. https://edition.cnn.com/2021/03/25/business/carmat-artificial-heart-spc-intl/index.html.
Barrett, Lisa Feldman. 2021. "This is how your brain makes your mind." https://www.technologyreview.com/2021/08/25/1031432/what-is-mindbrain-body-connection/.
Bostrom, Nick. 2003. "Are we living in a computer simulation?" The Philosophical Quarterly 53 (211): 243–255.
Clark, Alisson. 2018. "How helpful are product recommendations, really?", accessed February 9, 2022. https://news.ufl.edu/articles/2018/09/how-helpful-are-product-recommendations-really.html.
Codd, E.F. 1970. "A Relational Model of Data for Large Shared Data Banks." Communications of the ACM 13 (6): 377–387.
De Mul, Jos. 1999. Romantic Desire in (Post)Modern Art and Philosophy, The SUNY series in postmodern culture. Albany, NY: State University of New York Press.
De Mul, Jos. 2009. "Dataïsme. Het kunstwerk in het tijdperk van zijn digitale recombineerbaarheid." In Anders zichtbaar: de visuele constructie van het humanisme, edited by Johan Swinnen, 264–176. Brussel: VUB Press.
De Mul, Jos. 2019. "The emergence of practical self-understanding. Human agency and downward causation in Plessner's philosophical anthropology." Human Studies 42 (1): 65–82.
De Mul, Jos. 2021. "From mythology to technology and back. Human-animal combinations in the era of digital recombinability." In Ecology 2.0. The Contribution of Philosophical Anthropology to Mapping the Ecological Crisis, edited by Katharina Block and Julien Kloeg, 79–97. Berlin: De Gruyter.
Dennett, D.C. 1992. "The Self as a Center of Narrative Gravity." In Self and Consciousness, edited by F. Kessel, P. Cole and D. Johnson, 275–288. Hillsdale, NJ: Erlbaum.
Dilthey, Wilhelm. 2002. The Formation of the Historical World in the Human Sciences. Selected Works. Vol. 3. Edited by Rudolf A. Makkreel and Frithjof Rodi. Princeton, NJ: Princeton University Press.
European Commission. 2021a. Artificial Intelligence Act.
European Commission. 2021b. "EU Artificial Intelligence Act: The European Approach to AI." https://futurium.ec.europa.eu/en/european-ai-alliance/document/eu-artificial-intelligence-act-european-approach-ai.
Feldman, Sergey, Waleed Ammar, Kyle Lo, et al. 2019. "Quantifying Sex Bias in Clinical Studies at Scale With Automated Data Extraction." JAMA Network Open 2 (7). https://doi.org/10.1001/jamanetworkopen.2019.6700.
Fry, Hannah. 2018. Hello world: how to be human in the age of the machine. London: Transworld Digital.
Goldman, Russell. 2014. "Here's a List of 58 Gender Options for Facebook Users." ABC News, accessed February 21. https://abcnews.go.com/blogs/headlines/2014/02/heres-a-list-of-58-gender-options-for-facebook-users.
Gottschall, Jonathan. 2012. The Storytelling Animal. How stories make us human. Boston: Houghton Mifflin Harcourt.
Hayles, N. Katherine. 2002. Writing Machines. Cambridge: The MIT Press.
Heidegger, M. 1973. The End of Philosophy. New York: Harper & Row.
Hesse, Mary B. 1966. Models and analogies in science. [Notre Dame, Ind.]: University of Notre Dame Press.
Holstein, James A., and Jaber F. Gubrium. 2000. The Self We Live By: Narrative Identity in a Postmodern World. Oxford: Oxford University Press.
Hu, Margaret. 2020. "Cambridge Analytica's black box." Big Data & Society (July–December): 1–6. https://doi.org/10.1177/2053951720938091.
Husserl, Edmund. 1970. Logical Investigations (2 Volumes). New York: Routledge & Kegan Paul.
Ihde, Don. 1990. Technology and the Lifeworld. Bloomington/Minneapolis: Indiana University Press.
Johns, Fleur. 2021. "Governance by Data." Annual Review of Law and Social Science 17: 53–71.
Kant, I. 2007. The Critique of Judgement. Oxford/New York: Oxford University Press.
Karppi, Tero. 2018. "'The Computer Said So': On the Ethics, Effectiveness, and Cultural Techniques of Predictive Policing." Social Media + Society (April–June): 1–9. https://doi.org/10.1177/2056305118768296.
Keller, Evelyn Fox. 2002. Making sense of life. Explaining biological development with models, metaphors, and machines. Cambridge, MA: Harvard University Press.
Kitchin, Rob. 2014. "Big Data, new epistemologies and paradigm shifts." Big Data & Society: 1–12.
Knight, W. 2005a. "Mirror that reflects your future self." New Scientist, February 5, 23.
Knight, W. 2005b. "Mirror that reflects your future self." accessed February 24, 2022. https://we-make-money-not-art.com/mirror_that_ref/.
La Mettrie, Julien Offray de. 1996. Machine Man and Other Writings. Translated by Ann Thomson. Cambridge: Cambridge University Press.
Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By. Chicago/London: The University of Chicago Press.
Ledford, Heidi. 2019. "Millions of black people affected by racial bias in health-care algorithms." Nature 574 (October 31): 608–609.
Lemahieu, Wilfried, Seppe vanden Broucke, and Bart Baesens. 2018. Principles of database management: the practical guide to storing, managing and analyzing big and small data. Cambridge: Cambridge University Press.
Liu, Xiaoxuan, et al. 2019. "A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis." Lancet Digital Health 1: e271–97. https://doi.org/10.1016/S2589-7500(19)30123-2.
Lomas, Natasha. 2014. "Amazon Patents "Anticipatory" Shipping—To Start Sending Stuff Before You've Bought It." accessed February 16, 2014. http://techcrunch.com/2014/01/18/amazon-pre-ships/.
Marquard, O. 1991. In Defense of the Accidental: Philosophical Studies, Odéon. New York: Oxford University Press.
Marques, Victor, and Carlos Brito. 2014. "The rise and fall of the machine metaphor. Organizational similarities and differences between machines and living beings." Verifiche XLIII (1–3): 77–111.
Mayer-Schönberger, Viktor, and Kenneth Cukier. 2013. Big data. A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.
Mulligan, Christina M. 2008. "Perfect Enforcement Of Law. When To Limit And When To Use Technology." Richmond Journal of Law & Technology 14 (4): 1–49.
N.N. 2021. "Dutch childcare benefits scandal." accessed February 8. https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal.
Nietzsche, F. 1980. Sämtliche Werke. Kritische Studienausgabe. Band 7: Nachgelassene Fragmente 1869–1874.
Nietzsche, F.W. 2002. Beyond Good and Evil: Prelude to a Philosophy of the Future. Translated by Rolf-Peter Horstmann and Judith Norman, Cambridge Texts in the History of Philosophy. Cambridge, UK/New York: Cambridge University Press.
Nietzsche, F.W. 2006. On the Genealogy of Morality. Translated by Keith Ansell-Pearson and Carol Diethe, Cambridge texts in the history of political thought. New York: Cambridge University Press.
O'Neil, Cathy. 2016. Weapons of math destruction: how big data increases inequality and threatens democracy. First edition. New York: Crown.
Peterson, Mary Jane. 2009. "Human 2.0. Conceptual Metaphors of Human Beings in Technologists' Discourse. A Study of the MIT Media Lab's Human 2.0 Symposium." PhD dissertation, Human and Organizational Systems, Fielding Graduate University.
Poirier, Ian. 2012. "High-Frequency Trading and the Flash Crash: Structural Weaknesses in the Securities Markets and Proposed Regulatory Responses." Hastings Business Law Journal 8. https://repository.uchastings.edu/hastings_business_law_journal/vol8/iss2/5. Accessed February 22, 2022.
Ras, Gabriëlle, Marcel van Gerven, and Pim Haselager. 2018. "Explanation methods in deep learning: Users, values, concerns and challenges." In Explainable and Interpretable Models in Computer Vision and Machine Learning, edited by
Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Xavier Baró, Yağmur Güçlütürk, Umut Güçlü and Marcel van Gerven. Cham: Springer.
Reardon, Sara. 2022. "First pig-to-human heart transplant: what can scientists learn?" Nature 601 (February 21): 305–306.
Reyes, G. Mitchell. 2019. "Algorithms and Rhetorical Inquiry: The Case of the 2008 Financial Collapse." Rhetoric and Public Affairs: 569–614. https://doi.org/10.14321/rhetpublaffa.22.4.0569.
Ricoeur, P. 1991. "Narrative identity." In On Paul Ricoeur. Narrative and Interpretation, edited by D. Wood, 188–199. London: Routledge.
Ricoeur, Paul. 1992. Oneself as another. Chicago: University of Chicago Press.
Ricoeur, Paul. 2010. The rule of metaphor. The creation of meaning in language, Routledge classics. London: Routledge.
Roberts, Sean R. 2018. "The biopolitics of China's "war on terror" and the exclusion of the Uyghurs." Critical Asian Studies 50 (2): 232–258. https://doi.org/10.1080/14672715.2018.1454111.
Sadowski, Jathan. 2020. Too Smart. How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World. Cambridge, MA: MIT Press.
Schuessler, Jennifer. 2013. "Philosophy That Stirs the Waters. Interview with Daniel Dennett." New York Times, April 29.
Soto, Ana M., and Carlos Sonnenschein. 2020. "Information, programme, signal: dead metaphors that negate the agency of organisms." Interdisciplinary Science Reviews 45 (3): 331–343. https://doi.org/10.1080/03080188.2020.1794389.
Spielberg, Steven. 2002. Minority Report. 20th Century Fox.
Steinhart, E. 1998. "Digital Metaphysics." In The digital phoenix: how computers are changing philosophy, edited by Terrell Ward Bynum and James H. Moor, 117–134. Oxford; Malden, MA: Blackwell Publishers.
Sulemani, Maryam. 2021. "CRUD operations explained: Create, read, update, delete." accessed February 10, 2021. https://www.educative.io/blog/crud-operations.
Thaler, Richard H., and Cass R. Sunstein. 2008. Nudge: improving decisions about health, wealth, and happiness. New Haven, CT; London: Yale University Press.
Turing, Alan Mathison. 2004. The essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life plus The secrets of Enigma. Oxford; New York: Clarendon Press.
Van Bekkum, Marvin, and Frederik Zuiderveen Borgesius. 2021. "Digital welfare fraud detection and the Dutch SyRI judgment." European Journal of Social Security 23 (4): 323–340.
Van Fraassen, Bas C. 1980. The scientific image, Clarendon library of logic and philosophy. Oxford/New York: Clarendon Press & Oxford University Press.
Wittgenstein, Ludwig. 1969. On Certainty. Edited by G. E. M. Anscombe and G. H. von Wright, translated by Denis Paul and G. E. M. Anscombe. Oxford: Blackwell.
Zahavi, Dan. 2005. Subjectivity and Selfhood. Cambridge, MA: MIT Press.
Zuboff, Shoshana. 2018. The age of surveillance capitalism. The fight for the future at the new frontier of power. London: Profile Books.
Can Nudges Be Democratic? Paternalism vs Perfectionism
Sandra Laugier
Nudges, Morality, and Conformity
If the issue of nudges divides researchers and politicians, it is undoubtedly because we are already immersed in a standardizing society and in organizations which subject us to stimuli, incentives, and manipulations that we do not choose. Nudges, when they are transparent, can be seen as a way of uncovering such stimuli, incentives, and manipulations, thereby increasing freedom of choice in a benevolent way, or, on the contrary, as an instrument for increasing the normalization of behavior. Thus, the question becomes: do nudges allow us to become more aware of our choices? In fact, nudging becomes ever more complex and ubiquitous as the data collected from ever-present mobile technology is used in increasingly fine-grained ways to present us with "choices". These processes may be discussed and brought to awareness.
The author is very grateful to Daniela Ginsburg for her translation, Katie Schiepers for her edits, and Juliet Floyd for invaluable help and support.
S. Laugier (*) University of Paris 1 Panthéon-Sorbonne, Paris, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Katz et al. (eds.), Nudging Choices Through Media, https://doi.org/10.1007/978-3-031-26568-6_4
However, even supposing this question could be answered, that would not solve everything. In particular, the issue of the framework for these choices, which refers to the democratic organization of our societies, remains. The enthusiasm for nudges (shared even by liberal politicians) is explained by the fact that they would make it possible to carry out actions that would be more effective, at a lower cost (possibly?), without resorting to coercive methods. Conceived as instruments for public policy and management, nudges saw their significance and utility widely debated in politics a few years ago. In 2010, David Cameron, then Prime Minister of the United Kingdom, asked for the creation of a "nudge unit", and reports published by various institutions continue to address the issue: see the British Cabinet Office's "Behavioural Insights Team" reports (www.bi.team), a report by the Centre d'analyse stratégique français on "Nudges Verts" (2011), and so on. The scope for the application of nudges is extremely wide: they are used in ecology, marketing, etc. Nudging has also informed policies in the US, UK, Singapore, New Zealand, Australia, Germany, and so forth. On the one hand, Thaler and Sunstein (2008) consider that citizens are always influenced in their decision-making, particularly by the context in which they operate, and very often, they are influenced in a bad way. Therefore, if the value principles guiding the political use of nudges are libertarian paternalism and Rawls' (1971) principle of publicity, for Sunstein and Thaler nudges do not contradict the freedom of citizens. On the other hand, critics of nudges, such as Mozaffar Qizilbash (2012) and Jeremy Waldron, stress the manipulative and undemocratic nature of such practices. But both positions are incredibly naive in the eyes of pragmatist or Wittgensteinian philosophers. The problem is not being manipulated (as Sunstein and Thaler say we are all the time), nor does it have to do with losing or maintaining freedom. We are no freer when we are offered an actual choice in a situation (as in the often cited cafeteria example of leaving healthier items close to the check-out counter) than when we are coerced; this freedom is purely abstract. The transparency of choices does play a major role in the possibility of judging the practice in question as manipulative or not. If the effectiveness of nudges is based on a social norm, in other words on our tendency towards social conformity, the question is whether such conformity should be valued. If the key to having the "right" behavior is, in particular, concern for the judgment of others, then practices that encourage individuals to take a critical look at social norms are sidelined altogether from consideration.
Indeed, the more sensitive individuals are to social norms, the more likely they are to conform to them, and the more effective the nudges will be. In this sense nudges are ultimately about conformity and not the so-called common good. Cavell, who devoted his first works to Wittgenstein and Austin and then took it upon himself to make Emerson's voice heard again in philosophy, insists on the tension between self-reliance and conformity. Conformist readings of Wittgenstein lead to a focus on the rules that would constitute grammar: a grammar of the norms of language's functioning and of its "normal" uses, to be acquired like a form of knowledge. Cavell, on the other hand, proposes a reading of Wittgenstein in which learning is an initiation into a form of life. In learning a language, you do not merely learn the pronunciation of sounds and their grammatical orders, but you learn to participate in the "forms of life" which make those sounds the words they are, make them do what they do (Cavell 1979, 177). Where Wittgenstein (2009) speaks of rules or language he does not forward a thesis or explanation, but rather describes what we do: we learn how to use words in certain contexts, from our elders, and all our lives we must use them in new contexts and without any background set of rules for what to say in specific circumstances, without any guarantee, without universals. We must project them and create new meanings, or improvise them against the background of our forms of life. We must, in short, continually build ourselves up (Cavell 1969, 52). This is also what Emerson says in a famous remark often cited by Cavell: "Their every truth is not quite true. Their two is not the real two, their four not the real four; so that every word they say chagrins us and we know not where to begin to set them right" (1841, paragraph 10). We enter here the territory of moral perfectionism. The search for a better self and for the region associated with self-perfection is at the basis of perfectionist ethics. Cavell develops this concept in relation to Emerson, while tracing its origins back to Plato. For Emerson, as for Cavell, philosophy is founded on "aversion to conformity". The philosopher must be a non-conformist: this is Emerson's definition of "self-reliance," in his essay of that name. Democracy, for Emerson, is inseparable from Self-Reliance, that is to say, from confidence: not as hollow self-conceit or a feeling of superiority (a debased version of perfectionism), but as a refusal of conformity, a refusal to let oneself be spoken for by others.
This self-reliance is also the capacity each person has to judge what is good, and to refuse a power that does not respect its own principles (its own constitution). Self-reliance is thus a political position, reclaiming the voice of the subject from conformism, from uses that are uncritically accepted, and from dead institutions, or those no longer representative or "confiscated." It is this theme that Cavell takes up again with Emerson, and that he proposes in order to constitute an alternative to the liberal political (and economic) thinking made emblematic by the work of his colleague at Harvard, John Rawls. For Cavell, and for Emerson, I must consent to my government and consider that it speaks in my name, in order to give it my voice. But how is such an agreement possible? When did I give it my consent? There was in reality no social "contract", and our relations to one another and to ourselves in a modern democratic society are not wholly rule-governed, as contracts are. Self-reliance claims, in fact, the continued right to take back one's voice from society. My concern is what I think, not what others think. And therefore the principle of self-reliance is also one of democracy. Cavell proposes, along with Emerson (1841), a form of radical individualism that is not a selfish claim of private concern; on the contrary, it is public. The issue of self-reliance becomes the issue of who decides the common good, but also of how it is decided.
Who Nudges? The Ethical Problem
Who decides the direction in which choices and behaviors should be oriented? Can technocrats, whether experts in public policy or marketing, really be entitled to answer everywhere for citizens? The need for a collective discussion around what is considered desirable by a society arises here, with all the difficulty that can be presented by "directing" people towards such a debate. If the public sphere is fragile and heterogeneous, it nevertheless remains the relevant sphere for thinking about the frameworks of choice and about a necessary manipulation towards our own good and the common good. Actually, in the paternalist view, such a decision is never left to the people. We have all seen friends and family make terrible decisions, and been tempted by visions of the pain they would be spared if we could only make them follow our advice or the advice of "competent" people. The same feeling motivates well-intentioned technocrats to take charge of the public: ordinary people are plainly making unfortunate blunders they will regret, and so they need to be advised by wiser people.
Thaler and Sunstein (2008) present the latest version of this temptation in their influential work. They argue that wise decision-makers should tweak the options and information available so that the easiest choice is the right one. For example, people can be guided to donate their organs in greater numbers if organ donation is made an opt-out rather than an opt-in choice. Or people can be encouraged to plan for retirement by making pension contributions automatic for everyone who does not explicitly opt out of the system. “Nudging” is appealing because it provides many of the benefits of top-down regulation while avoiding many of the drawbacks. Bureaucrats and leaders of organizations can guide choices without dictating them. Thaler and Sunstein call the approach “libertarian paternalism”: it lets people “decide” what they want to do, while “guiding” them in the “right” direction. The main problem, though, is that Thaler’s and Sunstein’s ideas presume that good technocrats can use statistical and experimental results to guide people to make choices that serve their own real interests. This is a natural belief for scientists and some intellectuals, especially those who see the ways scientific knowledge is ignored and politically abused: they think life would be better if scientists had more authority. However, this idea of guiding has been widely contested in all the democratic movements of this century—most recently with the issue of mask mandates and vaccines in the midst of the COVID pandemic. Influencing people behind their back is often considered to be the most problematic aspect of nudging, because it is not apparent to the nudgee what is happening. However, for me this is not the problem. We are constantly influenced in both good and bad ways. The television show 24, for example, has had a bad influence by somehow making torture banal; it has had a good influence by portraying a black president and smart, powerful women. It is not obvious how to decide what are good or bad influences, and considering that some people can decide this for others is actually the antidemocratic point of nudging. In any case, Sunstein offers a way out by means of the publicity and transparency condition, according to which nudges ‘should be visible, scrutinized and monitored’ (2014, 147–148). While publicity and transparency conditions are clearly important, they are not enough to answer the manipulation objection. Being open about nudges does not make them less manipulative. The problem is political and the idea of nudging is a problem for democracy. The liberal paternalist’s idea is that nudges try to “make the person do something that she has not herself (actively) chosen” (Tengland 2012,
144). The wrongness of nudging lies not so much in what it gets people to do (a discussion about goals) but in how it works (the techniques that pervert the decision-making process) and in the political and moral hierarchy it supposes. We manipulate a lot and are being manipulated, but the question is by whom and with which justifications. Is being manipulated into what someone else thinks is good for me the right thing for me? This is what we do not accept if we are full-fledged citizens. I am certain that in this sense the promoters of nudging wouldn't like being nudged themselves (although they might well deny this). As Jeremy Waldron (2014) puts it, nudges are
an affront to human dignity: I mean dignity in the sense of self-respect, an individual's awareness of her own worth as a chooser [...] My capacities for thought and for figuring things out are not really being taken seriously.
Waldron rightly does not want to live in a “nudge-world” full of manipulating marketers and policy-makers; he wants the government to respect us and let us err autonomously rather than round us up and herd us like sheep into the pen of health or happiness. However transparent it might be, and even if it helps me reach my true goals (for example, a healthier life), nudge manipulation is wrong in the sense that, and because, it perverts my decision-making capacities. In any case, it should not be presupposed that I implicitly accept being nudged even for the common good, especially a common good that has been determined without my participation.
Democracy and Education
Many have noted a tone of skepticism about, and frustration with, democratic decision-making in Sunstein's writings. Rather than being "citizens", a description that emphasizes humans' political status and their active participation in choosing and controlling those they elect, humans are primarily regarded in his writings as "consumers" (hence the cafeteria example, even if it is carefully presented as a public institution), emphasizing their role as market actors (and somewhat passive ones at that). Thus, one of the principal arguments against nudges is that we as citizens impose constraints on ourselves by electing officials who will regulate in our collective interests, even through public discussion.
Democracy and its nature are incompatible with nudges: no one wishes to be constrained by people who know better, even to do good things. Sunstein sees traditional regulation as problematic because it is analyzed as strongly paternalistic, particularly in limiting the freedom of choice of consumers for their own good; he wants to have consumers freely choose their own good. But what citizen wants that? And why should it be we who each work for our own good, and not the government? Nudges let forms of governance off the hook by giving individual citizens responsibility for the common good. The idea of public service seems incompatible with nudging, since responsibility lies either with the government (which has to explain and submit its actions to the citizens) or with the people (who then will neither want nor need to be nudged towards decisions made by others). The most worrying aspect of paternalism has to do with its use of empirical evidence. Sunstein is apparently committed to evidence-based policy-making. At several points in the book, he stresses the need to test criticisms of nudging against empirical results of nudging in practice. But what strikes me is how unempirical Sunstein's book is, not just because he doesn't appear to do empirical work himself but because he underreports available evidence against nudging. He is attempting to present a theoretical defense of nudging, rebutting claims that nudging is paternalistic. A detailed investigation was carried out by the Science and Technology Select Committee of the upper House of the United Kingdom's Parliament (the House of Lords) (2010–2012). McCrudden and King (2016, 91) comment:
This Report makes sobering reading for those contemplating introducing nudging as a central element in government regulation. Evidence from this Report in the United Kingdom indicates that, as practiced, nudging undermines human dignity in at least two ways: first, by diverting government from its responsibility to use other, more effective, instruments that would secure the just redistribution of resources essential to us being able to exercise our human agency; and, second, by reducing opportunities for public deliberation and democratic discourse in favor of non-transparent, technocratic manipulation.
Thus some types of nudging strategies in practice restrict the opportunities for citizens to act as moral agents, and restrict government responsibilities.
The relevant critique of nudging is based not on freedom, but on morals; if we want to analyze nudging and the influence and effect of choice architecture, environment, and policy on people, we obviously need a thicker understanding of ethics, and we will want to adopt a more complex conception of the person and of agency, as well as a deeper sense of responsibility and accountability for the role of government in furthering the common good. Those seeking to pursue genuinely progressive politics should surrender the nudging paradigm in favor of regulation that is more transparent, more democratic, and that allows citizens to act as moral agents. It is not surprising that nudging has always been studied and promoted by center-right-wing governments. The fact that this is done with the help of academics is revelatory of the antidemocratic drive that has come to affect contemporary political thought.
Defenses of Paternalism
For some commentators, education and information are not enough to conduct our lives; we are pretending that we are competent in ways we are not and should be more modest:
And because coercive paternalism not only recognizes our cognitive shortcomings, but moves us to help us where those abilities are shaky, it actually values our choices about our ultimate goals more than does the sort of paternalism that simply gives us a hint in the right direction but then keeps out of the way as we make choices that entirely undercut our aims and values. (Conly 2012, 242–243)
But why should these authors' values be stronger than any citizen's? What is actually very strange is the obsession with the fact that people make bad and non-rational choices, "choices that entirely undercut our aims and values." But who has made bad decisions? Are individuals, ordinary people, responsible for what happens to the climate, perhaps more culpable than people in power? Insisting on informing and persuading gets things wrong; nudges aim to help people to do what they are already convinced of. For Conly it is hard to see how this would be degrading, insulting, or disrespectful. She adds, characteristically: "To insist that governments should treat us as rational beings is somewhat absurd in light of the evidence that reveals this to be an unrealistic ideal that gives rise to ineffective policy measures."
judge to be their real preferences? We can usually expect capitalism to manipulate us, but not governments, for this would mean that liberal governments are at the service of capitalism (an old suspicion, but one totally justified by this nudge obsession). According to Waldron (2014), nudging policies involve two radically separated parties. First, there are ordinary, biased, myopic, and weak- willed people (the nudgees). Second, there are people “endowed with a happy combination of power and expertise” (the nudgers), who know how ordinary people think and can use clever choice architecture to influence their decisions. Waldron is thus concerned with government officials and experts (‘them’) steering ordinary people (‘us’) towards specific goals. Many defenders of nudging do not believe that employing nudges implies that one has to stop informing and persuading people, but they think that focusing exclusively on the latter is likely to prove ineffective, because it is based on an unrealistic view of human behavior and psychology. But again, this involves ignorance of the actual processes of decision. Knowledge, information, and action cannot be separated in a process that would move from knowledge and its consolidation to rational decision and action. This simplistic rationalist scheme, already fragile for the individual, is downright ineffective for the collective. The complexity and singularity of situations results in irreducible uncertainty as to the results of human action and, consequently, in particular difficulties encountered in formalizing these actions. Hence the need for practical knowledge, Aristotelian phronesis, which articulates knowledge and action: analyze a case in its complexity before taking a decision, take into account all opinions and interests, allow collective deliberations, remain attentive to signals that could be indicators of hidden or invisible difficulties, and so on. This capacity for phronesis and prudence consists in integrating all the additional premises implied by the particularity of human actions and situations, what Castoriadis calls “the realm of the human.” Thaler and Sunstein focus on obvious cases. Of course, we can safely assume that the majority of people want to be ‘healthy, wealthy and happy.’ Who really and explicitly wants to die in a car crash or from obesity-related causes or from pollution? But these caricatured examples have little to do with everyday moral problems and moral decisions.
Consent to Nudging
The nudge theorist gives a more perverse answer to Waldron: in democratic societies like ours, "we" (the citizens) are part of "them" (the government). "What 'they' do, 'they' do in 'our' name and because 'we' enable 'them' to do it." Nudging is not about being manipulated by experts who know better, but about us "collectively invoking government's help when we know we are likely to make bad decisions" (Conly, 30). But to what have we consented in electing a government? Not to everything it does. Democracy is not limited to the moment of elections; it is also at work between these moments: it is a permanent claim of citizens to their own power. Why are they—the manipulators—in a better position than us—the manipulees—to know what we really want? Why should we trust 'them' with this kind of judgment and power? This is the matter of consent. Even if nudges are manipulative, consent counts as a reason for justifying them: for promoters of nudging, manipulation can be justified only when the manipulee would endorse the process, or the means to the end attained, along with the end.

What exactly is the problem with government nudging us towards our health, if we are informed about and agree with its goal (we want to become healthy) and its means (we want to be nudged to become healthy)? Such a government is not so much disrespecting its citizens as taking up its responsibility to help citizens act upon their own values. (Carter and Hall 2012, 11)
We want to stress the role of vital democratic processes—a role that is dismissed by nudge theorists. There is a neoliberal tendency—in this deploration of mistakes and errors of ordinary citizens—to transfer state responsibilities to individuals. This leads, as we know, to the dilution of responsibilities; the arrangement is very different for these decisions (what you should do for the common good) depending on whether you are a president, minister, business owner, an Amazonian Indigenous person expelled from your land, or an unemployed person who does not have enough money to pay for gas to get to a job. The question is therefore: who can make these decisions, or rather, who does not make these decisions? Who nudges whom?
Hence we must attend to the historical importance—as in the case of tobacco or the AIDS epidemic, and now with climate change—of public discourse and engagement. We must also attend to the fact that there are many people already doing something, developing innovations with society ("short circuits," local initiatives to promote local production; organic agriculture; grassroots movements fighting for environmental justice, etc.). Many people also act on a daily basis in inner cities, for example, to maintain or restore social cohesion. They don't need to nudge, or to be nudged. To be a bit provocative, we could say that the people who need nudging are those who have the power to nudge and want to guide others' behavior. Without going so far, perhaps we could tell Sunstein and company that instead of looking for nudges and for ways to change other people's behavior, they should change the way they themselves ask questions, and try to change their own behavior. The first step in any promotion of nudging is the need to take into account the interests of all and not only of experts and governments—this means democratic participation in deciding on what constitutes the common good. The public can no longer be conceived as an ignorant mass whose irrational fears or erroneous beliefs must be contained, but rather must be regarded as a competent community of citizens if the idea of democracy is to make sense. The aim today is to take account of and assess the public's ability to organize and acquire a collective understanding of political and general issues—the public being defined as all those affected by decisions and who should have a voice in them. Any method for influencing choices must integrate and recognize the competence of citizens: democracy is defined as government through the equal participation of all, without distinction as to citizens' possession of knowledge. John Dewey's analyses of what he calls the "constitution of the public" are important here. Dewey recognizes that all members of a society have equal responsibility and competence in the collective work of dealing with the public issues that arise in the near future for them and that they are under an obligation to resolve. Inquiry is a procedure whereby a "community of inquirers" manages to solve a "problem situation" with which it is suddenly confronted. It is therefore a collective work, carried out in three stages: recognizing the problematic situation; defining the problem it poses; and discovering the most satisfactory solution from the point of view of its foreseeable consequences. It is not a matter of individual choice. By contrast, the concept of nudging denies the collective character of political choice, leaving each person on her own.
Dewey’s inquiry apprehends the members of a society as they are at the time when they must engage in collective research and respect its logic considering that the necessarily public nature of this investigation imposes a framework within which the arguments exchanged adjust to each other in such a way that they remain acceptable to all. In The Public and Its Problems (1927) Dewey applies this conception to the realm of politics. In this book, Dewey starts from a Durkheimian idea: “There is no sense in asking how individuals come to be associated. They exist and function in association.” However, in the last chapter of his (1927), he writes, “The fact of association does not by itself produce a society. This requires (...) the perception of the consequences of a joint activity and the distinctive role of each element that produces it.” Such a perception creates a common interest, i.e., a concern on the part of everyone for joint action and for the contribution of each of the members who engage in it. So there is something that is truly social and not just associative. Dewey calls this method democracy. He admits a certain division of labor: if the inquiry is in the hands of experts, they must deliver all the data they produce (and do so completely and honestly) to citizens who engage in collective debate on this basis of objectivity (whose validity they can also criticize). In this distribution, all that is required of citizens is to be able to understand what these specialists are telling them. As Dewey fully recognizes (1927): the ability to judge the extent of the knowledge provided by others on common concerns. As long as secrecy, prejudice, bias, false reports and propaganda are not replaced by investigation and publicity, we will have no way of knowing how much the existing intelligence of the masses could be capable of judging social policy.
For Dewey, the intelligence of the actors is less important than the “collective intelligence” deployed by a community of investigators using the democratic method. The appeal of inquiry theory today, the recognition that ordinary people are not politically impotent, ignorant, or incompetent, can be explained by a decline in belief in the determinacy of politics, under the influence of the disillusionment it has never failed to provoke; by the growing demand for “participation.” The appeal to participation and citizen involvement is undoubtedly the result of a widely accepted idea, found both in public life and even in some
academic research, that people have competence in deciding what is good for them when it comes to the questions that concern/affect them. Empirical analysis of these participatory mechanisms seems to show that they still fail to give real power to citizens to act and decide. The demand for real democracy goes far beyond what these illusory and manipulative techniques of citizen empowerment can produce (consider the case of “public debates” on energy or waste storage according to the theory of nudges). It lays down a radical and ordinary requirement: every citizen of a society possesses political knowledge that is sufficient to unconditionally give them the responsibility to make decisions that affect the future and destiny of a community.
Viewers' Competence and Moral Progress: Education Through TV Shows
Studying TV shows means paying attention to 'popular culture' as a moral resource. Reconsidering the 'popular' leads to rethinking the connections between culture and democracy, in order to organize both of them pragmatically around actual, shared practices and forms of life. Popular culture (movies and TV shows, videogames, as well as music, Internet videos, and so on) plays a crucial role in re-formulating ethics and in the political and social constitution of democracy. It gives us an alternative to "nudging" paradigms. Dewey (1927, 1934) defines the public as emerging from a problematic situation: individuals experience a problem that they initially see as arising from private life, and a solution is arrived at through the interactions between those who decide to give public expression to this problem. The digital revolution has allowed for new forms, agents, and models of artistic action. My ERC project DEMOSERIES (https://www.demoseries.eu/) considers security TV series as the site of an "education for grownups" through the transmission and discussion of material that is widely available and shareable. The project will study the role of security TV series in the transmission of meanings and values. Though forms of soft power may seek to use fictional representations of terrorism to attempt to influence the enemy's decision-making processes or as forms of internal propaganda, movies and TV series can play a subtler, significant, and so far under-studied role in shaping scholarly analysis, education, and collective understandings of terrorist violence.
In 1935, W. Benjamin reflected on the consequences for human lives and societies of new techniques for mechanically reproducing visual and musical works of art. Today, the digital revolution has allowed for new agents and models of creation that contest both elitist conceptions of “great art” and “populist” conceptions of popular art. TV series are now seen as spaces where artistic, ethical and hermeneutic authority can be re- appropriated, and where viewers can be empowered by constituting, sharing, and discussing their own unique experiences—not choosing to be nudged but determining their own tastes and cultural personality. The lack of formal or technical training required for viewing moving images makes it distinct from other art forms, and more democratic. This is also a reframing of ethics. In this context, we may redefine popular culture’s specific “nudging”, i.e. agency: no longer as “entertainment” (even if that is part of its social mission), but also as a collective labor of moral education, as the production of values and ultimately of reality. This culture (comprised of blockbuster movies, TV series, music, videos shared on the Internet, etc.) plays a crucial role in re-evaluating ethics, and in constituting real democracy on the basis of images, scenes, and characters—on the basis of values that are expressed and shareable. The question of morality is shifted toward the development of a common sensibility which is both pre-supposed and educated/transformed by the sharing of values. Series create care and awaken affectivity through digital moving figures or situations. Their very form gives them their moral value and expressivity: the regularity with which viewers frequent them, the integration of characters into viewers’ ordinary and familial lives, viewers’ initiation into new and initially opaque forms of life and lexicons, viewers’ attachment to characters, and finally, the methodology and modes of narration of series. This leads to revising the status of morality, locating it not in rules, transcendental norms, or principles of decision-making, but rather in attention to ordinary behaviors, to everyday micro-choices, to individuals’ styles of expressing themselves and making claims. These are transformations of morality that many philosophers, weary of overly abstract meta-ethics and overly normative deontological ethics, have called for. One of the tasks of series philosophy would be to demonstrate, through a reading of the moral expressivity constituted by a series, the individual and collective moral choices, negotiations, conflicts, and agreements at the basis of moral representation: the choices and trajectories of fictional characters, the twists and turns of the plot.
New modes of participation and interaction are opening the way for new forms of subjective authority. Today, the question of democracy indeed becomes the question of the individual's capacity for unique aesthetic and moral actions, decisions, and choices, and for making active and creative use of fiction. Film and TV series are now not only the subject of study or analysis by film critics or researchers, but also subject to in-depth analysis by large crowds, including audiences and producers. This profoundly transforms the question of nudges and somehow makes it trivial, although no less problematic. Education thus conceived, including education of grownups, may appear as an alternative paradigm to nudging.
References
Benjamin, W. 1935/2007. "Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit", 1st draft, 1935; English translation from the 3rd version, "The Work of Art in the Age of Mechanical Reproduction", in H. Arendt, ed., Illuminations, pp. 217–252. New York: Schocken Books, 2007.
Carter, A. and Hall, W. 2012. "Avoiding Selective Ethical Objections to Nudges", The American Journal of Bioethics 12 (2), 12–14.
Cavell, S. 1969. "The Availability of Wittgenstein's Later Philosophy", in Cavell, Must We Mean What We Say? A Book of Essays, pp. 44–72. Cambridge: Cambridge University Press.
Cavell, S. 1979. The Claim of Reason: Wittgenstein, Skepticism, Morality and Tragedy. Oxford: Oxford University Press.
Cavell, S. 1993. Conditions Handsome and Unhandsome. Chicago: University of Chicago Press.
Centre d'analyse stratégique. 2011. Note d'analyse 216—"Nudges Verts": de Nouvelles incitations pour des comportements écologiques (mars 2011), at http://archives.strategie.gouv.fr/cas/content/note-d%E2%80%99analyse-216-nudges-vertsde-nouvellesincitations-pour-des-comportements-ecologiques-.html (accessed March 21, 2022).
Conly, S. 2012. Against Autonomy. Cambridge: Cambridge University Press.
Dewey, J. 1927. The Public and Its Problems: An Essay in Political Inquiry. Athens, OH: Swallow Press.
Diamond, C. 1991. The Realistic Spirit: Wittgenstein, Philosophy and the Mind. Cambridge, MA: MIT Press.
Emerson, R.W. 1841/2014. "Self-Reliance". In Self-Reliance: Essays and Essays, Second Series, The Portable Emerson, ed. J.S. Cramer. New York: Penguin Books.
Floyd, J. and Katz, J.E., eds. 2016. Philosophy of Emerging Media: Understanding, Appreciation, Application. New York: Oxford University Press.
House of Lords Science and Technology Select Committee. 2011. Behaviour Change. 2nd report of session 2010–2012. London: House of Lords. At https://publications.parliament.uk/pa/ld201012/ldselect/ldsctech/179/179.pdf (accessed March 21, 2022).
Laugier, S. 2019. Nos vies en séries. Paris: Flammarion, Climats.
McCrudden, Ch. and King, J. 2016. "The Dark Side of Nudging", in Alexandra Kemmerer, Christoph Möllers, Maximilian Steinbeis, and Gerhard Wagner (eds.), Choice Architecture in Democracies: Exploring the Legitimacy of Nudging.
Qizilbash, M. 2012. "Informed desire and the ambitions of libertarian paternalism". Social Choice and Welfare 38, 647–658. https://doi.org/10.1007/s00355-011-0620-8.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Sunstein, C.R. 2014. Why Nudge? The Politics of Libertarian Paternalism. New Haven, CT: Yale University Press.
Tengland, P.A. 2012. "Behavior change or empowerment: on the ethics of health-promotion strategies". Public Health Ethics.
Thaler, R.H. and Sunstein, C.R. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Waldron, J. 2014. "It's All For Your Own Good". New York Review of Books (October 9, 2014).
Wittgenstein, L. 2009. Philosophische Untersuchungen = Philosophical Investigations. Trans. and edited by Anscombe, G.E.M., Hacker, P.M.S., and Schulte, J. Malden, MA: Wiley-Blackwell.
Revisiting the Turing Test: Humans, Machines, and Phraseology
Juliet Floyd
Introduction: The Turing Test and Human Sociality
Seventy-two years ago Alan M. Turing proposed an "imitation game" that came to be known as the "Turing Test".1 In what follows, the Test's significance will be retrospectively and philosophically revisited. We shall see not only how far-reaching Turing's proposal was, but also how well his philosophical framing of it helps to illuminate both the arc of our history with computation over the last seven decades and the central significance of human-to-human phraseology and social interaction in the presence of computational technology, an idea with great significance for a world in which AI is increasingly integrated into everyday life. The notion of "phraseology" is Turing's, drawn from a philosophical tradition,
1 Turing (1950).
and this will be clarified in what follows.2 The perspective on the Turing Test our revisiting affords is important, because recently AI developers and computer scientists have occasionally dismissed Turing's Test as something that is merely a simple "game", no test at all, and/or too vague to be useful.3 We first rehearse the historical backdrop to the Turing Test in Wittgenstein's philosophy (§2). This helps us explain that Turing was not attempting to prove that machines can think, and draws into view the issue of phraseology. By revisiting the Turing Test with this background in mind (§3), we may regard it as a repeatable experiment probing the evolution of human phraseology and self-conceptualization in the presence of machines. This allows for a re-assessment of Turing's Test in light of Searle's Chinese Room and other arguments (§4). We conclude with brief remarks on the state of AI in our world (§5). Our overarching theme is to stress the element of the social in Turing's work, since it has been underestimated, ignored, or treated naively in much of the literature. In fact, his Test asks us to take up a kind of anthropological and/or sociological and/or philosophical perspective on ourselves, something practically essential in helping us to face the open-endedness and social dynamism of what we count as a "human" task, or "thinking" in our computationally-driven world (Floyd 2021). In a world increasingly involving AI in human activities, among the many hidden and often unacknowledged influences on us we must include as centrally important our phraseology, our ways of conceptualizing ourselves. The perspective originating in Turing's Test was designed to help us gauge the power of our changing modes of self-presentation and intelligibility, as well as the inevitable irreducibilities and limits of machines and of humans. A revisiting of its philosophical significance is due. Human thinking is itself always already "artificial", in that it has evolved over eras, constructed to stretch itself and alter in the face of newness. It continues to change under the pressure of new technology, and new ways of unfolding human relations with that technology. This is what makes our differentiating ourselves and our actions from those of computers a ticklish but significant matter. There is much talk of "AI and ethics" these days, as the assignment of responsibility for harms and goods induced by driverless cars, powerful deep learning mechanisms, and the micro-targeting of our everyday choices through "nudging" and "choice architecture" becomes more and more ubiquitous. We don't want to allow engineers to design first, test what breaks, and pay no mind to the uses of humans as fodder for experiments whose outcomes often harm the least well off. Not, at least, without paying attention to the responsibilities they, as each of us individually, carry.

2 The notion of "phraseology" appears in the title of Turing 1944/45, with explicit debts expressed to Wittgenstein, who used the notion of a "phraseology" in the Blue Book (1969, 69). Compare Floyd (2017, 108n).
3 McDermott (2014) and Vardi (2014), both responses to the entrance of Turing into popular culture with the biopic The Imitation Game.
Turing's 1936 Model of Computation: Historical Backdrop, Contemporary Issues
It is a fascinating question how it was that Turing came up with his analysis of the notion of taking a step in a formal system of logic, an analysis sufficiently sharp, mathematically and philosophically, that he used it to resolve the famous Entscheidungsproblem of Hilbert (Turing 1936).4 This problem asked for a "systematic method" for determining, in a finite number of steps, whether or not one sentence in a formal system of logic follows from another. Turing showed that there can be no such systematic method: logic is not "gap free". Validity, that is, logical truth—hence our inferential thinking with truth and information generally—cannot be determined by one general algorithm. The implications of this result are profound, for they relate to our handling of algorithms in life, as well as the limits of mathematics and science. Turing proved his result by analyzing the very idea of a "systematic method" for a formal logician in terms of an everyday idea: the picture of a human being reckoning according to a fixed rule, or method of calculation, "unthinkingly" or "mechanically", in a step-by-step fashion dictated by a fixed and explicit procedure. In this picture, the human uses a finite set of discrete symbols to take in symbolic configurations "at a glance" and obeys sharply formulated commands at each step, operating with pen, eraser and paper. As Turing later stated, "The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail" (1950, §4). Wittgenstein remarked, with characteristic

4 Floyd (2017).
acuity, that “Turing’s ‘machines’: these are humans that calculate”.5 We humans, Wittgenstein had already emphasized in his Brown Book (1969) are frequently used as “calculators” or “machines” for a variety of tasks, such as reading from a text out loud without interpretation (p. 119), or being asked to fetch bolts in a particular order from a shelf (p. 85f.). “Computers”, i.e., humans, act mechanically in calculating simple sums or following mathematical algorithms. The evolution of our ordinary phraseology tracks this. Originating in English in the seventeenth century, until the 1940s the term “computer” meant a human being, usually a woman,6 working as a clerk in, e.g., an insurance company alongside teams of other “computers” (Fig. 1).7 By the late 1940s, when Turing’s ideas began to be engineered, machines came to be called “computers”. With our characteristic human need to see ourselves as differentiated from machines,8 the term by the 1960s came to be applied nearly exclusively to objects made of wires, metal, tapes and microchips—although metaphorically, to call a person an “automaton” was and remains a perfectly grammatical form of description.9 It is salient that nowadays in English we do not generally call cell phones “computers”, despite their serving ably as computational devices. Because they are carried with us in the palm of our hand, and play such a ubiquitous role in helping us navigate the world—including our social relationships—it is as if we regard them as parts of our bodies (hence not media at all) and hence transparent, as if extensions of ourselves (like artificial hands).10 “Laptops” and “tablets” transition the space between Wittgenstein (1980, §1096); MS 135 (1947), 118 (see Wittgenstein 2015–). According to Wikipedia, the first usage of “computer” in English stems from 1613 (https://en.wikipedia.org/wiki/Computer, accessed 7/23/2022). Turing describes (1947, 495) how “girls read off values and punch them on cards”, correctly conjecturing that these activities will be mechanized in the future. 7 The image of Blythe House, London https://upload.wikimedia.org/wikipedia/commons/8/89/Blythe_House_preparing_totals_for_daily_balance_1930s.JPG, accessed 7/30/2022. 8 See Mays (2021) for recent empirical research on the effects of the “uncanny valley” on our responses to robots. 9 Wittgenstein (2009, PPF §§19, 420). 10 Katz and Aakhus’s pioneering studies of the spread of mobile technology, which they called “pandemic” already in their (2002, 137), also wrote of an image of “angelic communication” between us, a kind of “perpetual contact” with the social that accompanied the felt and acted entrance of the mobile phone into everyday life in the early 2000s. On the idea of an age of Apparatgeist see Katz (2003, 2014) and compare Floyd (2018, 2019, 2021). 5 6
Fig. 1. Blythe House London, 1930, used as a Post Office. (https://upload.wikimedia.org/wikipedia/commons/8/89/Blythe_House_preparing_totals_for_daily_balance_1930s.JPG, accessed 7/30/2022)
mobile phones and “computers”, which are othered. We may use our cell phone to calculate, but when we do, in the present world of everyday life, it is we who are computing, by using the phone—and the AI recording our strikes on the keyboard will “see” things this way as it constructs a virtual avatar of our behavior. However, to the extent that we are more and more finely “nudged” by the design of our phone apps, as well as the algorithms that record and shape what we are exposed to and the choices we make, we are inclined at times to regard the AI system as already “in charge” of our choices about what to compute, and when. In the original context of Turing’s “machines”, before modern computers were constructed, the Entscheidungsproblem required Turing to focus on exploring how it is that human beings proceed to think by operating signs in a step-by-step, “purely logical” or “algorithmic” fashion. “Logic” is something that we practice. We know that Turing became
fascinated by the foundations of logic in the spring of 1932, about a year after he came to Cambridge as an undergraduate.11 Before then he may well have either sat in on Wittgenstein’s seminar “Philosophy for Mathematicians” or overheard from other students what Wittgenstein was discussing.12 It is striking that in The Blue and Brown Books, dictated to his “Philosophy for Mathematicians” class in 1933–1935, Wittgenstein explicitly connects the idea that thinking takes place when human beings operate with signs, writing things down in a step-by-step fashion according to a fixed rule, with the question whether a machine can think.13 Wittgenstein noted that it may appear nonsensical to hold that a machine might think, or that thinking might take place in the hand: it is as if, he remarks, we were asking for the color of a number. The difficulty here, Wittgenstein argued, is “grammatical” or conceptual, rather than empirical. The question whether a machine can think is not like an empirical question one might have asked two centuries ago, e.g., Can a machine be constructed that can liquefy a gas?.14 The point is rather that if one did at some point in the future meet a machine that one wanted to say “thinks”, one would not really know quite what one was saying, one would be creatively projecting the concept of “thinking” in a new way. The reason, Wittgenstein suggested, is that the grammatical type- structure of the relevant concepts (human, machine) is distinct and complex, and getting a clear view of the interactions in our present phraseology between the concepts (and their further interactions with the concept of thinking) requires careful discussion. For Wittgenstein—as later for Turing—this is a philosophical question about how we embed words in life. The answer to the question whether machines can think, Wittgenstein was arguing, cannot just be stipulated, Yes or No. Nor is it given wholly in advance. Rather, it reflects a host of different articulations and formations of our lives.—In contemporary Chinese the character for “computer” is “electronic brain”—a phrase that became popular in Britain in the early 1950s, when Turing engaged in a pair of radio broadcasts.15 Turing tried
11 Hodges (1999, 6; 2012, 85) and Floyd (2017, 114). 12 Floyd (2017). 13 Blue Book (Wittgenstein 1969, 16, 47). 14 Wittgenstein (1969, 47). 15 Turing (1951) and Turing et al. (1952).
in his public appearances to unpack the complexities of this phrase, and approach the issue with realism. One might try to counter the strangeness of the idea that a machine can "think"—as many philosophers traditionally have—by arguing that since we are machines anyway, there is no trouble with the idea of "thinking machines" (see de La Mettrie 1747/1996). Alluding to the traditional Cartesian resistance to this mechanistic line of argument, Wittgenstein pointed out in The Blue Book that a philosopher might try to justify the apparent nonsense of asking whether machines can "think" by holding that there are two distinct worlds, one built from mind and one from matter. On this dualistic view, the mind is an ephemeral theatre of flowing conscious representations, whereas matter is part of a world governed by (mechanistic) laws of physics, and "thinking" does not take place in the latter. The problem of consciousness remains a "hard" problem, according to recent metaphysicians, along, one must add, with the nature of human agency.16 But these are also practical problems. An employee at Google was recently put on paid leave after claiming that Google's language model is sentient, coming too close to giving in to the mechanistic side of the issue.17 Yet even Descartes himself argued—as Turing knew when he designed his test—that the best test for whether a body (or mechanical automaton) in front of you is animated by a separate mind is its ability to respond meaningfully to you with language.18 Wittgenstein wisely urged his students not simply to assume that we have a characterization of "thinking" or "human" in our own everyday language that would be sharp enough to determine which of the two traditional arguments (mechanism or dualism) is right. In particular, we cannot hold that a machine cannot think (or that thinking cannot go on "in the head" or "in the hand") without further ado, for we express ourselves meaningfully on certain occasions in just this way. Throughout The Blue and Brown Books, thinking is illustrated as something various, something that may appear (and be spoken about) in different ways. Wittgenstein clarifies this point through a comparison and contrast between what he called "language-games": simplified snapshots of imagined overlapping portions of human language use designed to clarify the fluid boundaries

16 Chalmers (2010, 2022). 17 Grant and Metz (2022) and Hanna and Whittaker (2020). On the power of the language module, see Johnson (2022). 18 On Turing and Descartes see Abramson (2011).
evinced in how we embed ordinary concepts in life, special thought “experiments”.19 The question whether a machine can think is here construed as one having to do with how we characterize the behavior of machines and human beings in everyday life. To see how we characterize is itself something that requires, Wittgenstein argued, investigation. Returning to his larger point, a human being may certainly be used as or act as a “machine”, so we should really ask, “In what ways?” In a series of language-games exploring this Wittgenstein foreshadowed Turing’s way of recasting the Entscheidungsproblem. He pictured human “mechanical” behavior as a game in which we draw up short tables of commands, training humans to follow the rules in step-by-step ways using a basic set of symbols.20 Today our apps serve as contemporary examples, albeit ones leaving us certain leeways of check-list-choices. As in Wittgenstein’s language-games, the choices must be carefully circumscribed: they must not be too numerous, must be able to be taken in at a glance, and are best if they are easily presentable on a single screen of a mobile phone. Back in the 1930s, before any stored program computers had been built, it was just a picture or language-game. What is remarkable is that Turing saw, in his great paper (1936), how to use the picture to attack the Entscheidungsproblem. Turing imagined modeling each possible “systematic” or algorithmic mode of human behavior in a canonical way. He chose the image of a human calculating out the expansion of digits of a real number in order to boil the issue down to the “least cumbrous technique” (1936, Introduction) and speak to the mathematicians. His model provided the human with a finite set of discrete symbols that could be “take[n] in at a glance” (1936, §9), an unlimited tape, divided into squares in which symbols could be erased or written down, a finite set of commands stating the way to use each of the symbols at each step in the process, and a finite set of states that the “machine” (or human) would be commanded to take up after each step of the calculation. The procedure of each “machine routine” could thus be pictured in terms of its command structures. Turing wrote these down using sequences of symbols formulated in an alphabet that would be used to direct the step-by-step uses of finite set of symbols by the human-used-as-a-machine. Each of these canonical ways of expressing 19 Wittgenstein’s Blue and Brown Books speak of the language-games as “experiments” (1969, 7ff., 41,140, 153ff)—here applied to the idea of a “voluntary act”. 20 1969, Brown Book §41.
routines in a finite manner would soon come to be called a “Turing Machine”.21 Turing eliminated any hypotheses about the “state of mind” of the computer by relying on the idea of an actionable human command: any meaningful direction can be passed to a co-worker with a note of instructions.22 Calculation in this sense is “impersonal”, and the notion of “computable” does not turn on any thesis about psychology, except noting the fact (as Hilbert had) that we are unable to differentiate at a glance among symbols that are too complex and large—the point being here that we need mathematical routines (addition, etc.) after a point. Turing’s analysis did not turn on any thesis in philosophy of mind, or any particular account of “understanding” involved in the actions of a human computer. Rather, he relied on an everyday idea about human beings following rules “mechanically”. Turing then constructed a Universal Machine, designed to do the work of any or all of his machines. He simply coded up the whole alphabetized list of all machines and showed that we could write them down in a lexically- ordered single sequence. Because the Universal Machine can mimic each individual machine, it incorporates all the possibilities of each machine changing the routine of any another by way of a “computation”—including its own. This self-referential aspect yielded the momentous concept of the stored program computer: one that can change its own programs in the face of new situations indefinitely. Struck by this, after discussing the idea with Turing Wittgenstein began writing down remarks about machines (i.e., humans) that symbolize their own actions.23 And in his later paper setting forth the Turing Test, Turing used the Universal Machine to disarm the objection that a machine cannot be “the subject of its own thought” (1950, §5, see §3 below). The fundamental consequence of the Universal Machine is that there are no longer sharp categorical distinctions to be drawn between software, hardware, and data or input to the machine: the boundaries of these concepts are, in any instantiation of a Turing Machine, contextual, and evolve depending upon
21 Church (1937, 43). 22 Turing (1936, §9). Sanford Shieh pointed out to me that if the co-worker cannot in principle carry out the command, then she is no co-worker. 23 Wittgenstein (2015–, MS 199, 28, from 1937); see Wittgenstein (2009, §193) and Floyd (2016, 25).
the environment of the computation described.24 The same may be said of the idea of the “action” of a machine: what machine it is may depend upon which routines it has followed to get to be itself. What Turing’s argument exposes at the foundations of logic is that the idea of a command or algorithm that can be acted upon or followed is fundamental to our very (human) notions of “logic” and “computation”.25 To resolve the Entscheidungsproblem he constructed within the Universal Machine a self-referential machine which entangles itself in its own rules, thereby demonstrating that there is no generally applicable systematic routine or algorithm (command structure) that can calculate ahead of time, by a “systematic method”, the behavior of all machines. It follows immediately that there is no machine that can determine all relations of logical consequence in any (relevantly designed) system of formal logic. The Universal Machine can do the work of any machine, but it cannot determine in advance what an arbitrary machine will do. It must wait and perform the work. “This machine will never stop (or will always stop) its calculating process” and “This is a formal consequence of your sentence” are not algorithmically solvable questions in general, although they may be in certain special cases. Turing’s analysis of “systematic method” or “algorithmic computation” is mathematically robust in that what is “computable” remains so quite independently of any particular formal or programming language chosen to formulate the instructions for the computer. The Universal Machine reflects the fact that we can, at least theoretically, continue indefinitely to cobble command structures (higher-level programming languages, apps) together in single and groups of devices—subject of course to the limitations of physics, our natural and economic and psychological resources, programming and manufacturing capacities, and, ultimately, our artifactual desires and needs. These human desires and needs really mattered to Turing, philosophically speaking. A “Turing Machine” has many faces, so that what a Turing machine in general is, is not so easy to picture. From one point of view, a Turing Machine is simply a mathematical object: it is a sequence of
24 Davis (2017). This may seem not to apply to hardware, which we ordinarily picture as fixed. But if we take a human being utilizing a smart phone to be part of the hardware, then this in-principle-lack-of-boundary point becomes clearer. 25 Floyd (2012).
commands, equivalent to a set of equations.26 From the point of view of the alphabet, it is just a quintuple of letters. From another point of view, though, when these commands are allowed to dynamically unfold through time, we see an action or process. Notably, it is we who bring a processual, dynamic, temporal perspective to bear. And when we actually construct stored program computers, and let them loose in the wild—particularly with mobile technology—it becomes very difficult to picture how the processes we are cobbling together will go. As Turing established, it is in general impossible to see or picture this. Furthermore, it is as a matter of fact impossible for us to predict and anticipate the outcome of even a relatively simple collection of Turing Machines by devising a short-cut or better humanly-available set of instructions. As Wolfram has stressed (2002), this means we are faced with a new kind of science. In prior eras, the ideal of physics was to lay down a set of equations, study the possible inputs, and predict what will happen. Now that computability is a possible feature of any set of equations, that ideal must be shifted, and we are faced with a systematic, more experimental exploration of the behavior and complexity of computational systems. This phenomenon Wolfram calls computational irreducibility, and he has recently stressed, in testimony to the U.S. Senate on the future of AI, that the phenomenon is crucial for us to understand as we proceed into a world where AI is “in charge”.27 Wolfram knows well that the problems that emerged in the 1930s in the foundations of logic are problems that arise for us in myriad forms today. His system WolframAlpha has provided millions of users with an interface where they can view beautiful, complex models of the behavior of certain Turing Machines, posing questions about answers to mathematical questions in natural language—an activity that helps the whole system of WolframAlpha evolve and construct new typings of language to interface between our perspective and the lower-level computational characterizations.28 AI becomes more and more important to us as more and more dense ways of embedding command structures at the machine level are 26 On the Herbrand-Gödel-Kleene systems and the history of alternative approaches to “effective calculability in a logic”, see Kennedy (2017). 27 See Wolfram (2019). In replying to arguments that machines cannot think because human behavior is unpredictable, as opposed to machine behavior, Turing appealed to computational irreducibility as a feature of machines (1950, §5, §8). 28 See https://www.wolframalpha.com/input/?i=mathematica, accessed 7/23/2022.
accomplished. A billion Tweets cannot possibly be “taken in at a glance”. Nowadays there arise greater and more various needs for human beings to be able to talk with one another intelligibly about the behavior of the machines and humans as they interact in multiple ways with the higher- level (including natural) languages used to program them. As interactions with machines permeate and structure human interactions in everyday life, the ways in which we speak about and picture machines are part of the reality that is constructed.29 Ultimately in the broadest sense “logic” and meaning require that humans be able to picture and communicate intelligibly what is going on, to embed ideas about AI into ways of speaking and acting as we embed our words in our (often differing) forms of life. In the late 1930s this problem already existed even before computers were built. Responding to Turing’s analysis of the idea of taking a “step” in a formal system, Wittgenstein explored philosophically the need for a transition from the context of pure logic to everyday language. He noted that formal reasoning explodes in length very quickly: the price of reducing logic to step-by-step, individually surveyable procedures is what he called the “unsurveyability” of proofs as a whole (1978 III). At the time he wrote his most trenchant remarks about this difficulty, Turing was attending his 1939 lectures on the foundations of mathematics at Cambridge (Wittgenstein 1989). In his notebooks and lectures Wittgenstein explored our need to be able to “take in”, to repeat and intelligibly communicate and picture a line of argument using everyday, accessible procedures, to develop “techniques” for handling unsurveyable “proofs”. Inspired by these lectures, Turing began novel research into the structure of “types”, or increasingly accessible artificial languages (like Wolfram Alpha) that would respect and take in ordinary human grammatical sortings of pieces of language used in everyday life.30 Turing’s aim was surveyability, in Wittgenstein’s sense. In the end, Turing pointed out, mathematical logic is an “alarming mouthful” for most mathematicians and should be hidden from everyday uses of computers as far as possible. Compare Katz and Aakhus (2002). Turing (1944/45), with commentary by Floyd (2013) and Wolfram (2013). We now know that Turing kept a journal and a notebook “Notes on Notations”, after he arrived at Bletchley Park. This was exploratory work on how to improve notation based on his own exploration of difficulties in the history of logical notations, containing remarks on Peano, Leibniz, Weyl, Hilbert, Courant, Titchmarsh, and Pontryagin and others. See Hodges and Hanna (2015), at https://www.bonhams.com/magazine/18629/, accessed 3/10/2023. 29 30
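To make the step-by-step picture at issue in these pages concrete, here is a minimal illustrative sketch (in Python, a language chosen purely for readability; nothing of this form appears in Turing's or Wittgenstein's texts) of a finite command table being obeyed "mechanically", one scanned square and one instruction at a time. The particular three-state machine is an invented toy example, not one discussed above; the point is only that its entire behavior lies in a small finite table, and that the step-by-step record it produces can in general only be obtained by running it out, which is exactly where worries about surveyability begin.

```python
# A minimal Turing-machine simulator: a finite table of commands, a tape of
# discrete symbols, and a scanned square, executed one step at a time.
def run(table, tape=None, state="a", steps=100):
    """Obey the command table until no command applies (halt) or steps run out."""
    tape = dict(enumerate(tape or []))    # squares of the tape, indexed by integers
    head = 0
    history = []
    for n in range(steps):
        symbol = tape.get(head, "_")      # "_" marks a blank square
        if (state, symbol) not in table:  # no command applies: the machine halts
            break
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
        history.append((n, state, head))  # the growing step-by-step record
    return tape, history

# Command table: (state, scanned symbol) -> (symbol to write, move, next state).
# A small invented three-state example: it writes six 1s and then halts.
example = {
    ("a", "_"): ("1", "R", "b"),
    ("a", "1"): ("1", "L", "c"),
    ("b", "_"): ("1", "L", "a"),
    ("b", "1"): ("1", "R", "b"),
    ("c", "_"): ("1", "L", "b"),
    ("c", "1"): ("1", "R", "halt"),       # "halt" has no commands, so it stops
}

tape, history = run(example)
print(len(history), "steps;", sum(v == "1" for v in tape.values()), "ones on the tape")
```

Even this tiny table yields a trace that is tedious to "take in at a glance"; scaled up to the machines we actually build, the trace is unsurveyable in Wittgenstein's sense, and, as Turing's undecidability result shows, there is in general no shortcut around producing it.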
There would be no point, he wrote, in attempting to design a single, overarching formal programming language for science, because “no democratic mathematical community would stand for such an idea, nor would it be desirable”.31 However, as the process of offloading human algorithmicizeable tasks to machines progressed, Turing was well aware that our concepts would shift in ways sometimes unrecognizeable. For as soon as one routine was deemed “appropriate”, its computational embedding would come to potentially shift its contours: no single overarching principle would stand alone. Turing noted that the “Masters”—i.e., the mathematicians … are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well- chosen gibberish, whenever any dangerous suggestions were made. I think that a reaction of this kind is a very real danger.32
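Turing's "system of instruction tables" can likewise be illustrated with an invented sketch, reusing the run() helper from the earlier example: once a human technique is stereotyped, say adding two tallies kept in unary notation, it can be written out as a table of commands and handed over, a routine a clerk could follow with "no authority to deviate". The state names, the tape layout, and the example itself are assumptions made here for illustration, not anything drawn from Turing's text.

```python
# An invented "instruction table" for a stereotyped technique: adding the
# unary tallies "111" and "11" by filling the gap between them with a 1 and
# erasing one surplus mark at the far end. Requires run() from the sketch above.
unary_addition = {
    ("scan",  "1"): ("1", "R", "scan"),   # walk right across the first tally
    ("scan",  "_"): ("1", "R", "join"),   # fill the separating blank square
    ("join",  "1"): ("1", "R", "join"),   # walk to the end of the second tally
    ("join",  "_"): ("_", "L", "erase"),  # one square past the end: step back
    ("erase", "1"): ("_", "R", "halt"),   # erase the surplus mark and stop
}

tape, history = run(unary_addition, tape="111_11", state="scan")
print(sum(v == "1" for v in tape.values()), "marks left on the tape")  # prints 5
```

Once a routine of this kind has been written down, nothing about it requires a human operator; that is the sense of Turing's remark that any technique which becomes "at all stereotyped" invites its own mechanization.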
In the face of the automation of work, the human ability to create nonsense and “well-chosen gibberish” is a real fact of our time, just as Turing envisioned. It reflects the fact that our conceptual evolutions are open- ended. We may well “rage against the machine”, but even so our uses of concepts swim in what Wittgenstein called the “seas of language”, where philosophy and meaning are created through new forms of nonsense.33 As a matter of fact, the AI technology depends upon our very human collaborations. If everyone were to suddenly unplug from Facebook, it would fail to survive as a company (recall the dip in Facebook stock after the Cambridge Analytica scandal). In a sense, the data collected by Wolfram- Alpha and Facebook are humanly-created deep resources. For responsible AI we would like these treated, not simply as valuable gold-mines for companies, but as fiduciary pools for the greater good. Access to these pools, however carefully engineered to protect the company interests, must be engineered if we are to have any way of assessing the harms and goods the pools bring about. The problem of surveyability is connected with any Turing (1944/45, 245). Turing (1947, 495–496). 33 Wittgenstein (2009 §194). 31 32
sense of responsibility and agency. Faced with a billion Tweets, one simply does not have any choice to make. Faced with a network graph, one can say more. In our quest for community, we continually make demands for sense. The question is us: what we will settle on, count as meaningful, be able to proceed with naturally. That question is philosophical. Among mathematicians it is very much an ongoing dispute today.34 The number of symbols involved in executing computational tasks vastly outstrips what a single human being could ever follow in a step-by-step way. “Proofs” are written down that no human can take in at a glance, or articulate. Moreover, as soon as principles are enunciated to determine constraints on the behavior or this or that group of machines (or societal institutions), we must expect from the inevitable fact of undecidability that these principles will themselves shift their sense. This shifting is characteristic of our concepts of “machine”, “algorithm”, “proof”, “game” and “concept” itself—just as Wittgenstein noted in his conversations with Turing. We face then the very practical problem of devising what both Wittgenstein and Turing called techniques for surveying of the logic of our concepts in a computational world. This is an issue posed sharply before the public’s mind as AI begins to affect our everyday conversations and modes of relating to one another. There cannot be just one technique, there will have to be many, and they will have to be constantly evolving. Full transparency being impossible in itself, given our processing abilities, is not even desirable after a point, for systems will then be gamed. Full secrecy is undemocratic and worse. Given the ubiquity of mobile technology and its ways of shaping our experiences and desires, our choices of movies and clothes, our views of journalism, our social relations, our self- development, our credit scores, and so on, it well may be that AI is already “in charge”. The difficulties are pressing. There certainly are numerous negative results unfolding. Haraway has placed “technoscience” into a critical frame where issues of feminism, capitalism, colonialism, the Anthropocene and environmental degradation are 34 See Zeilberger (2022), a sharply worded critical response to Avigad (2022b), replied to in Avigad (2022a). The issue is whether it is mathematical proofs and theorems—whether graspable by humans or not—or human understanding of concepts that characterizes (or should characterize) what mathematics is. This was the form of the question Wittgenstein explored with Turing in 1939 before the advent of the computational revolution. See Floyd (2022) for discussion of “surveyability” in the context of the Hilbert program.
centrally implicated in a system of “informatics domination”.35 Her points have increasing bite when we ponder the remarkable jump in the numbers of parameters, size of data sets and power encoded in so-called “language modules” in the field of natural language processing, systems trained to predict sequences of words, characters or sentences. These are increasingly used across wider and wider swathes of human daily activity. The increase in sophistication and size of these data sets has led to a large jump in the “human-like” behavior of translation modules and text generation just since 2019. Yet this has led to the increased power and dangers of “stochastic parrots” disrupting and amplifying harmful aspects of human activities, including those that are human-to-human. As researchers Timnit Gebru and Margaret Mitchell pointed out in their widely-read paper on this issue,36 not only are there negative environmental impacts associated with these massive models: training a single AI model can emit as much carbon as five cars in their lifetimes, and the impacts of this are worse for less privileged communities.37 “Unfathomable” (i.e., unsurveyable) data sets representing only a subset of human languages are used. Not only are biases encoded here and difficult to extract and scrutinize, 90% of human languages—spoken by over a billion people—lack language support, with the more privileged among us benefitting from life-easing technologies such as Alexa and iRobots while others are left behind.38 Added to these concerns, the imputation of “meaning” to the outputs of language processors amplifies the biases and types used by the humans whose phraseology has trained them—a danger of conformity- reinforcement, bias-intensification, and paralysis in human collective action that continues to stretch over wider and wider areas, even as engineers attempt to use limited data sets and target their software to avoid large-scale distortion and unfairness. The study of algorithmic accountability as a branch of science soars in activity; it is clear that domain expertise, human ethical discussion, constant oversight of data sets, methods, and evolving uses of these technologies against a backdrop of journalistic and professional standards are part of what humans must develop in order to assign “responsibility” to technologies and their developers.39 While Haraway (1991, 161). Bender et al. (2021). 37 Bender et al. (2021) and Hao (2019). 38 Bender et al. (2021, 612). 39 Grasso et al. (2020). 35 36
AI’s tendency to tout itself unrealistically in the press remains a concern, the press (including organs of popular culture) have a responsibility to help the public come to terms with their present and their future. In fact, the presence of social media makes the challenge of communicating science to the public a problem for science itself. Without intelligibility, the demand for sense, what is unsurveyable will be harmful. Yet the angles of approach to analysis of the challenges must be many and capable of shifting in the face of human discussion. This is itself a challenge. We are far beyond Turing’s day, but we are not much ahead of him in evolving techniques for oversight, accountability, and design. As Gebru and Mitchell were to discover, discussion of fairness and techniques for shaping discussions of fairness are challenging. Their conversation at Google around the dangers of “stochastic parrots” proved too disruptive for the company, and they were pressured to resign, leading to much discussion of whether and how Google was hushing them up.40 Fearing open discussions of AI, Google had already dissolved its AI external advisory board for “responsible AI” within a week of instituting it, in April 2019.41 Reaching further back to the case of a woman killed by a driverless car in Tempe, Arizona in March 2018, the family reached a settlement with Uber rather than heading to court to determine responsibility.42 Public discussion is now focusing on many aspects of AI technology and the “nudging” of human choice and performance. What Zuboff calls “surveillance capitalism” leaves privacy as a traditional concept stretched, if not broken, as every click of a mouse or cellphone is potentially recorded and exploited to “nudge” our behavior.43 Doing something for those affected by human-generated-and-AI-furthered revenge porn is difficult to square with the idea of “freedom of speech”,44 and spills over, potentially, into the difficult territory of “hate crimes”.45 Drone warfare with its errors, and more generally the many biased and unjust outcomes of algorithms,46 including those used by the state to administer its Simonite (2020). h t t p s : / / b l o g . g o o g l e / t e c h n o l o g y / a i / e x t e r n a l - a d v i s o r y - c o u n c i l - h e l p advance-responsible-development-ai/. 42 Neuman (2018). 43 Zuboff (2019). 44 Citron (2020). 45 Citron (2014). 46 Smith (2020). 40
41
functions47—the list goes on. We really do face the need for creativity, for human-to-human work on concepts and phraseologies, with all its friction (to use Wittgenstein’s famed metaphor of words’ embeddings into forms of life (2009, §107)). This is what “responsible AI” really comes to: a notion carrying with it the features of indeterminacy and irreducibility that came into view with Turing’s work, and implying the need for constant work at developing techniques of intelligibility, tolerance, semantic ambiguity and responsibility.

Human agency is filled with what advocates of nudging mechanisms call the “flaw” of “noise”.48 However, it is internal to our very concepts of agency and responsibility that action is not always everywhere determined, but consists in a human being at least aiming to fit their behavior into a particular event in a way we can find intelligible. “Noise” in algorithmic (“nudging”) decision-making, as Turing showed, is in any case inevitable. Moreover, sometimes noise, with all its problems, is a good we should make space for. Teachers must allow students the spontaneity of response in order to develop resilience and creativity: not everything can be rote learning if we are to have “learning”. To return to AI: NextGen, the federal satellite-based regulation of airline flight paths, determines “rails in the sky” to uniformize flight patterns from airports.49 This stops the natural dispersion or “noise” human pilots created in the past. NextGen creates targeted noise pollution at ground level that has communities in an uproar: dispersed noise is better for humans. Thus “noise” is not always a negative thing. Algorithms may and should be developed to give human judges feedback on the waywardness and apparent haphazardness of some of their decisions, which are certainly often laced with bias. But it is the human judges who ultimately and collectively allow for the concept of “responsibility” to have a grip. How people feel about how they are treated, and what they say about it, is noisy, and yet this “noise” really matters to the outcomes we will see in terms of culture and society. Democracy itself requires the cacophony of different voices, each speaking his or her mind. In real life, silence is meaningful too (Das 2020).

47 Citron and Calo (2020).
48 Kahneman et al. (2022).
49 https://en.wikipedia.org/wiki/Next_Generation_Air_Transportation_System.

This may seem to create a “paradox” if we ask how democracy will avoid tumbling over into repressive, noise-repressing regimes; historically it is quite clear that there is no inevitability in
democracy’s continuation without careful human nurturing.50 But this is no paradox if we take seriously, as we do in ordinary conversation, the norm of every person’s voice getting heard, and our very human ability to confront and discuss “noise” together, making of it meaningful speech.

My point so far has been that the problems of “hidden persuaders” emerged early on in the proof-theoretic context: How could one be “persuaded” by a proof one could not survey? We see today that such difficulties have become entangled with our ordinary phraseology, which is something social. Turing presciently predicted this. Although in 1939, when he sat in on Wittgenstein’s lectures, he had already begun to work on cryptography at Bletchley Park, he continued with philosophical reflection in his spare time. Explicitly expressing his debt to Wittgenstein’s lectures, he stressed the need for a “reform” of mathematical notation in light of the “types” we use in ordinary scientific phraseology.51 He recommended going through all the textbooks and culling the ordinary ways of speaking used by scientists and mathematicians to develop a new type-language that might play a kind of mediating role between the human and the machine levels. The requirements on notation should be “exceedingly mild”.52

By 2022 it has become part of ordinary parlance to speak of “artificial intelligence”. AI has become more or less ubiquitous in our lives, through mobile technology, deep learning, geolocation data and the vastly expanded use of the world wide web in everyday life—particularly in social life—by humans across the globe.53 The “artificiality” is jointly human and machine. AI-generated texts are, for ordinary or routine brief essays, now approaching the point where we cannot easily distinguish humanly generated paragraphs from machine-created ones.54 But what will this imply for education, observation, and “human understanding”? That depends partly on us.

50 Gershberg and Illing (2022).
51 Turing (1944/45); compare Hodges and Hatton (2015).
52 Turing (1944/45, 245).
53 According to the Pew Research Center, 97% of Americans owned smartphones in 2021, up from 35% in 2011 (https://www.pewresearch.org/internet/fact-sheet/mobile/, accessed July 23, 2022). As of 2019 more than 5 billion humans (out of 7.673 billion) had mobile devices (https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-growing-rapidly-around-the-world-but-not-always-equally/, accessed 7/23/2022).
54 Johnson (2022).

There is no question that the majority of “mechanical” labor tasks are capable of automation, and this has the potential, not only for far
greater efficiency in outcomes—perhaps environmentally sustainable ones55—but also for saving humans from tasks that are unpleasant. But at what price does this elimination of human labor come? The worry is that we will end up reducing the notion of a “citizen” to that of a “user”, and that, in evolving beyond scribal culture to immersed virtuality, we will lose ourselves, i.e., a sense of community and history, the ability of individuals to reasonably grapple with alternative forms of life, traditions and modes of argumentation that have mattered to moral progress over centuries.56 The task of self-education, of becoming someone, is not trivial. We still have to face it, together, in a brave new world. Where does that leave the Turing Test? I want to revisit this question in light of the fundamentals we have just discussed.

The Turing Test Revisited

As is well known, Turing’s Test is a 3-player game, one Turing occasionally called a viva voce (oral) exam.57 Human Player A poses a series of questions to a remotely located pair B and C, knowing in advance that one of them is human and one of them is a machine. B and C respond to A’s questions with linguistic expressions delivered to A remotely, by a monitor screen. Ignorant of which source is Human B and which the Machine, Human A is tasked with determining by the end of the game which of the respondents is “intelligent”.58 The screen serves to block out the immediate physical appearances of Human B and the Machine; A’s terminal, connected remotely, serves to screen off their voices and accents.

55 Although on the environmental effects of mobile devices alone see https://www.epa.gov/sites/default/files/2015-06/documents/smart_phone_infographic_v4.pdf, accessed 7/23/2022.
56 Frankel and Krebs (2022).
57 Turing (1950) is the classic formulation; see 560 for “viva voce”. Copeland (2000) canvasses the various versions Turing framed. One (Turing, Braithwaite, Jefferson, and Newman 1952) sets up a jury and a series of human “confederates” as well as machines to play the game, and Turing excluded experts on computers from the jury. This was the setup pursued in the Loebner Competition, an actual “Turing Test” that was carried out from 1991 until it became defunct in 2020. See footnote 67.
58 As Copeland points out (Copeland, ed., 2004, 437f.), Turing considered the time involved in the playing of the game the chief technical problem. It is clear that the game is to be played in the actual world, and is not designed to be played in all possible worlds: that would allow the machine to run through all possible combinations of response, and thereby directly “imitate” the human, but trivially.

In this way
at least some of A’s implicit (philosophical, emotional) biases are screened out. Is this really an “experiment”? Yes. Interestingly, Turing’s control test is for gender: he imagines a comparable game where a human player plays the game with a Man and a Woman to see how often the questioner is able to differentiate them.59 Somewhat playfully, Turing suggests using this as a baseline for the Machine doing well in the “imitation” game. As we now more explicitly recognize, Turing’s control game depends upon the players utilizing a binary treatment of gender that might well be rejected by human participants.60 This picture shows the importance of our socially-embedded uses of concepts in everyday life to the setup of the original Turing Test (Fig. 2): Turing’s test is designed to elicit from us forms of expression that we can explore over time together: the embedding and re-embedding of our concepts in the face of the “friction” of everyday life (Wittgenstein 2009, §107). How this evolution will go, we cannot say in advance. For the notion of “intelligence” is a notorious “family resemblance” notion, a “suitcase” word.61 Like “love”, there are so many different things we might call “intelligent” (pack into the suitcase) that determining necessary and sufficient conditions for the application of the concept is a will o’ the wisp. Our uses of such weighty concepts are best regarded as open-textured, to use a phrase coined by Wittgenstein’s collaborator Waismann.62
59 Genova (1994) and Sterrett (2000) argue that the gender test is a different test; I follow Copeland, ed. (2004, 436) and Proudfoot (2013, 39) in regarding the male-female game, from the point of view of the experimental character of Turing’s Test, as a control. But this does not gainsay the fact that in the context of the imitation game men and women might fool the questioner by adopting unexpected stereotypical “gendered” behavior, or even come to reconstrue their uses of the concept of gender: this is Turing’s point. His 2-step experimental design gets around the biologically biased objection that while humans generally exhibit sexual preference in mating, machines do not. Moreover, as Sterrett emphasizes (2000, 2020), the need for human contestants to reflect on their own biases in answering questions, and the role of successful impersonation in intelligence, are indeed prescient worries about “mechanical intelligence” that Turing saw very early on.—This ability to reflect on the borders of concepts is, I would add, also the point of Wittgenstein’s method of “language-games”.
60 And might possibly even have been rejected by Turing, given his own sexual orientation. See Hodges (2012) and Genova (1994).
61 Wittgenstein (2009, §67) and Minsky (2006, 11).
62 Makovec and Shapiro, eds. (2019).
Fig. 2 Gender as a Control Test. (Constructed from extension of https://commons.wikimedia.org/wiki/File:Turing_Test_Version_1.svg, accessed 7/25/2022)
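For readers who like to see structure laid out explicitly, the bare scaffolding of the game can be put in a few lines of code. The sketch below is purely illustrative and is not drawn from Turing’s paper: the canned respondents and the interrogator’s guessing rule are invented placeholders; only the shape of the exchange—questions, two screened answers, a forced verdict—is the point.

```python
# Illustrative sketch only: the three-player structure of Turing's imitation game.
# The respondent behaviors and the interrogator's rule are invented stand-ins.
import random

def human_respondent(question: str) -> str:
    return "Count me out on poetry; ask me something else."

def machine_respondent(question: str) -> str:
    return "That is an interesting question; give me a moment."

def imitation_game(questions, interrogator_guess):
    """A questions B and C remotely, not knowing which label hides the machine."""
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)                       # the "screen": identities hidden
    channels = dict(zip(["B", "C"], respondents))
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, reply in channels.items()}
    guess = interrogator_guess(transcript)            # A's verdict: which label is the machine?
    return guess, channels[guess] is machine_respondent

# A toy interrogator that guesses at random; a real A reasons over the transcript.
guess, correct = imitation_game(
    ["Write me a sonnet on the subject of the Forth Bridge.", "Add 34957 to 70764."],
    lambda transcript: random.choice(list(transcript)),
)
print(guess, correct)
```

Nothing in this toy decides anything, of course; as the chapter stresses, what matters is what the players go on to say and do once the screen is removed.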
Thus the “reality” of the Test is something social.63 After the game is played, the two humans will emerge from behind the screen and interact with one another. A might ask B to have a cup of coffee. B, perhaps insulted by being erroneously classified as a machine, might not be willing. The real question is how A and B will go on together, phraseologically and agentially speaking. In a world of mobile technology constantly mediated by AI, Turing’s Test may be run over and over again, depending upon the state of machine-evolution, as well as our language, and a variety of different sorts of human beings (with differing sorts of expertise) might be involved, as Turing himself repeatedly stressed. Over time, the Loebner competition increasingly excluded computer scientists as judges, for example. We see that in a world of mobile technology, chatbots and deep fakes, the Turing Test is going on all the time, iterated over and over again, in our daily lives. The picture is an indefinitely evolving one (Fig. 3):

63 Floyd (2021).
Fig. 3 The Turing Test as an Evolving Social Test. (Constructed from https://commons.wikimedia.org/wiki/Category:Turing_test#/media/File:Turing_Test_version_3.png, accessed 7/25/2022)

There is no limit to what we may discuss in terms of cobbling together fragments of responses into new arrangements. This picture shows that the fact of computational irreducibility really does matter to our relations with one another: predicting the unfolding in time of these games is more than we—or the machines—can survey, to allude back to Wittgenstein’s notion. What are needed are then tests and creative
techniques—in Turing’s sense—to see which sortings and conceptualizations of human actions we are able to take to be “intentional”, “responsible”, and “fair”. These are tests that require the kind of steely determination to face the complexities of examples that was exhibited in the work of
Wittgenstein’s student G.E.M. Anscombe: her work on what counts as an “action” or “intended action” is still à propos (Anscombe 2000).

Turing’s Test was officially conducted from 1991 to 2019 as the Loebner competition.64 In the last few decades the idea of an “imitation game” with “intelligent” machines has entered popular culture, e.g. the films Blade Runner (1982), The Matrix (1999), Ex Machina (2014), and The Imitation Game (2014). IBM’s AI machine Watson won the game show Jeopardy! stunningly in 2011. Nowadays, in a world increasingly populated by chatbots, the Turing Test has become a ubiquitous part of everyday life. Elon Musk held off on purchasing Twitter in May 2022 until it could be proven that less than 5% of the traffic on that platform was conducted by chatbots: a protracted legal battle loomed in court facing the question of what it is to do “due diligence” on a social media company.65 Robots have entered our social discourse as “companions” (“polluters”? “distractors”? “part of the infrastructure”?)—What are we to say? That question was Turing’s point in proposing the Test. This has been underappreciated. Neither the Loebner competition (until 2019) nor the above-mentioned films envisioned a role for social media at all. This bias against the idea of the Test as a way to explore our concepts—as we might say, the human aspects of the game—stemmed from a philosophical prejudice, one prevalent still in our time: the idea that individual consciousness and mentality are the marks of “intelligence”, “agency”, “value” and “thought”, or “the human”. As we now vividly see, however, the human social dimension is a driving force. IBM’s Watson worked its magic by trolling the web with quick searches, taking advantage of the collectivity of human-generated content available through Wikipedia and other platforms (or publications) on the web. What we really have is an integration of human and machine “intelligence”.66 And the eternal price of integration is vigilance: human and machine criticism.

64 https://en.wikipedia.org/wiki/Loebner_Prize, accessed 7/25/2022.
65 https://thenextweb.com/news/elon-musk-twitter-bots-spam-fight-analysis, Twitter post accessed 6/30/2022; Mehta (2022), Conger (2022).
66 Sterrett (2017).

The social nature of the Turing Test has been downplayed because of a particular Cartesian philosophical take on who we are as human thinkers, and it is this—as Wittgenstein anticipated—that is dislodged and shifted by the presence of “thinking machines” in our midst. In the 1940s Logical
Behaviorism had attempted to escape the dialectic between mind and matter by holding that (individual) mental states are defined by typical bodily dispositions and behaviors. This position was refined in the 1970s into “functionalism”, according to which (individual) mental states were said to be tracked by the “functional” descriptions of behavior associated with particular concepts (e.g., “pain”).67 Turing has usually been taken to have been a Behaviorist or Functionalist in this sense. Unsurprisingly he has also often been taken to have advocated a reductive mechanism about the human body. But Turing, like Wittgenstein, never endorsed these ways out of the Cartesian dilemma.68 He well understood that a rejection of the Dualism/Materialism dichotomy requires us to rethink from the ground up, not only how our concepts involving our ideas of human agency and mind evolve, but how all concepts evolve in the presence of new technology.

Turing was engaged in deeply philosophical reflection on the evolution of our notions of “intelligence”, “agency”, “value” and “thought”, focused on what he called the future of “intelligent machinery”. For Turing’s Test is a language-game, a way to elicit what Wittgenstein called “criteria”: our ordinary, untutored, everyday uses of concepts to classify, distinguish, liken, and assimilate things. What is to be explored are the specific phraseological means the contestants naturally use in discussing and determining how we classify things as “intelligent” vs. “merely mechanical”. Elicitations of criteria are not designed to carry us into all possible worlds or establish necessary and sufficient conditions for the existence of kinds of items, legitimate applications of concepts, and so on.69 Nor are they pragmatically operationalized definitions carrying a conventionally stipulated, merely definitional necessity.70 Rather, criteria are elicited socially, in the context of a philosophical investigation, something which only takes place when we do not quite know what to say.71 Such a situation marks our entering into the terrain of philosophy, where expertise is both communal and individual.

67 See Putnam (1960, 1967). On Putnam’s change of heart about functionalism see Putnam (2012).
68 Hodges (2012) makes this clear.
69 Cavell (1979), I–II.
70 Copeland (2000, §2); though compare Vardi (2014).
71 Compare Wittgenstein (2009, §123), where a philosophical problem generally is said to emerge when we “do not know [our] way about”.

Turing’s test was promulgated, not merely to stimulate the development of “intelligent
machinery” with a game, but also to explore our willingness or lack of willingness to project a concept into our “forms of life”. It is a question of our phraseological responses, both to machines and to humans. This is why Turing is always careful to say that his arguments are not conclusive, but only designed to open up the possibility that we might come to agree that machines think: the Test shapes our thinking by allowing us to discuss the concepts.72 For Turing “intelligence” is an “emotional” concept, like “free” or “happy”.73 And that is why he explored the question, “Can a machine think?”—The “Turing Test” aims at garnering a sufficient rather than a necessary criterion.74 It admits from the outset that particular human judgments may be “flawed”, i.e., capable of refinement.75 It measures a kind of threshold for our everyday use of the concept of “intelligence”. It is an experiment, not only in phraseology but in the effort to justify when detailed causal and procedural evidence is unavailable.76 And we are running it continually now on many of our most important “emotional” concepts, such as “privacy”.77

72 Compare Sterrett (2000, 480).
73 Turing (1948, 516); Proudfoot (2020) explores the response-dependent account of intelligence.
74 Copeland (2000).
75 McDermott (2014, 5).
76 Turing (1950, 554, 562, 556).
77 Citron and Henry (2010), in reviewing Solove (2008), mention his “pragmatism” as deriving partly from Wittgenstein’s “family resemblance” idea of concepts such as privacy. They raise the reasonable worry that Solove’s Wittgensteinian method runs the danger of collapsing into merely philosophical or emotional responses on the part of judges (1120–21). Their suggestion that professional checklists may help, and that a “rules of thumb” approach may be useful for considering competing privacy interests, is welcome, but does not go against anything Turing or Wittgenstein held. To see how debates about the Turing Test have a history, see Saygin, Cicekli, and Akman (2000)’s prediction, nearly a quarter century ago, that the Turing Test would remain relevant.
Turing’s Argument

Turing’s paper (1950) considers several objections: (§1) the “theological” objection (we have an immortal soul); (§2) the “heads in the sand” objection (the consequences of saying machines think are too dreadful); (§3) the “mathematical objection” (human minds escape Gödelian incompleteness); (§4) arguments from human “consciousness”, which cannot be in machines; (§5) the argument from various disabilities of machines
100
J. FLOYD
(one should expect increased diversity of machine behavior in the future, with greater storage capacity, and the Universal Machine shows a sense in which machines can be “subjects of their own thoughts”); (§7) the continuity of the human nervous system (randomness in machines would provide sufficient approximation to model such behavior); (§8) the unpredictability of human behavior; and—interestingly—(§9) an argument from telepathy or extra-sensory perception, a possibility Turing takes to be “quite a strong one”, in order to cover the complete space of arguments. Instantaneous, inexplicable fathoming of when we are in the presence of humans expressing themselves, and have understood their expressions as authentic and meaningful, is a talent science has not replaced, or explained, so far. Wittgenstein, gesturing at relativity theory’s incorporation of the effects of the observer’s own point of view into measurement, had likened this phenomenon to Fizeau’s 1851 experiment establishing the surprisingly small effect of a medium on the speed of light (1969, 185): we can never know the moment when a ray of light hits a mirror (or a human being has been properly understood).78 At the same time, he felt that the human body was the best picture of the human soul, requiring some sense of embodiment for expression (2009, PPF iv §25). Turing does not deny this but, adapting Wittgenstein’s allusion to Fizeau, points out that so far at least science has gotten along pretty well without the assumption of extra-sensory or telepathic, instantaneous perception. The theological and heads in the sand objections he does not take as seriously as we perhaps should today: maybe the advent of AI’s effects on humans and the earth is too horrible even to contemplate, for some.

78 Compare Wittgenstein (2009, PPF §335), where Wittgenstein denies that there are “techniques” that can indubitably establish the presence of authentic expression of emotions. Today the dangers of facial recognition technology create new forms of phrenology about which philosophers from Hegel to Wittgenstein have worried.
79 Sloman (2013, 97) claims that Lady Lovelace did understand that one machine could “virtually” mimic the behavior of another.

Of all the objections Turing canvasses to his “imitation game” in his 1950 paper, it is Lady Lovelace’s that he takes up most seriously (§6)—and we can say why. For this objection turns on the idea of “creativity”. Lady Lovelace argued that a machine cannot be creative, cannot be original, cannot go beyond what we order it to do, and so cannot surprise us in the ways humans do. Pointing out that she had not had the advantage of ever seeing even the idea of a stored program computer,79 Turing proposes that if one could set up sufficiently complex inputs to the machine, one might
train or teach it, and it might “learn”, developing its responses to the point of originating “thinking”. Turing develops this quasi-organic idea of “learning” or “child machines” elsewhere.80

Turing argues that the Lovelace objection suffers from the presumption that machines cannot surprise us. He is adamant, based on his own experiences, that machines can surprise us. (We have seen in our day “learning algorithms” that surprise us.) More rigorously, Turing points out that the idea that machines cannot surprise us ignores something deeper, and his point is not simply phenomenological, psychological, or brutely “emotional”. What is ignored in the Lovelace objection is the fact that there is no general decision procedure for working out the consequences of a sentence. She supposed that once one has a proposition in mind one could grasp all its consequences simultaneously, without surprise. Were this true, as Wittgenstein had once cavalierly remarked, “in logic there [would be] no surprises”.81 Turing (1936) showed otherwise. Even sticking just to “mechanical algorithms”, there is a need for “creativity” to solve problems. The deep failure of the Lovelace objection—“a fallacy to which philosophers and mathematicians are particularly subject”, as Turing remarked—is wrongly to “assume that there is no virtue in the mere working out of consequences from data and general principles” (1950, §6). The “working out of consequences” is and will remain an ongoing struggle in which many humans must be engaged. But this is partly a struggle with phraseology.

80 Turing (1948); compare Turing (1950, §7).
81 Wittgenstein (1921, 6.1251). I do not think Wittgenstein held in the Tractatus that there is a decision procedure for all of logic (Dreben and Floyd 1991). But he did think that the nature of logic had been fully clarified in schematic form. It is this that Turing’s work undercuts.
82 Jason Fagone, “The Jessica Simulation: Love and Loss in the Age of AI”, San Francisco Chronicle, 7/23/2021 (Fagone 2021); image at https://www.sfchronicle.com/projects/2021/jessica-simulation-artificial-intelligence/#chapter1, accessed 5/22/2022.

With the advent of AI, Turing’s classification of “intelligence” as an “emotional” concept becomes ever more convincing. Chatbots are ever more adept at responding with phrases that make us feel and respond in ever more life-shaping ways. Programmer Joshua Barbeau, isolated in the COVID pandemic, used his AI-shaped bot to carry out “artificial” conversations with his deceased fiancée in order to come to terms with her death. Her words were not “real”, they were “virtual”. His responses were tearful and cathartic.82 Though the “Player” was real, and the words of the chatbot “Machine”, their effects on shaping Barbeau’s identity in grief held
strong significance for his moral and sensible “reality”. What will this mean for future human beings’ ways of handling rejection and loss? Have we found a “happiness drug”, or rather a filament by means of which some people may try to find their way through the labyrinth of possibilities confronting them? It would seem to be the latter.

Because of the spread-effect of such notions as “intelligence” and “thoughtfulness”, Turing argued in his (1950) that there is no way to prove a priori, once and for all, that a machine—or anything else—cannot think. We have to wait and see what we will say—and “we” may come to say differing things (“we” may not be a “we”). Logically speaking, to prove that something is impossible we must have necessary and sufficient conditions for what it would be for it to be realized, and then show that these conditions do not hold. This is what Turing did in his (1936) for the concept of a “systematic procedure”, but it cannot be done for “intelligence” (“privacy”, “love”, “reasonable”, etc.). This indicates that none of the usual objections to the idea that a machine might think prove anything. At best such arguments remark on dissimilarities between machines and humans, or make definitions. That is fine: they express preferences and proposed connections among concepts, our biologies, our histories, our theological ideals, and the projections of our concepts into our forms of life. But, in the end, they produce no more than Turing had, namely, a list of “recitations tending to produce belief” (1950, §7). That is what Wittgenstein had already suggested to his students in The Blue Book.

Philosophers since 1950 have very much wanted their say on how we should speak. Consider Searle’s famous (1980) “Chinese Room” objection to the idea that computers can think, probably the most influential one ever offered.83 Searle intended to counter the idea of what he called “strong AI”, namely that syntactic processors such as computers are capable of “understanding” human natural language. Cole (2020) describes the thought experiment this way:

Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

83 Searle (1980, 2010).
This idea of “understanding”, Searle argues, is unpersuasive, because it is clear to him that as a mere operator of syntactic step-by-step routines, he has no grasp whatsoever of Chinese, hence no understanding of what is being communicated. The idea that there is any understanding produced at all in this situation is for Searle bogus. He concludes that human minds are not computer-like computational or information processing systems at all. Instead, minds must result from biological processes, which produce consciousness, and computers can at best simulate their actions.

There have been an inordinate number of replies to Searle, and this is not the place to rehearse them all.84 Mine is a version of what is sometimes called the “systems” objection85: that we must look at the entire situation of Searle-in-the-room in order to determine the extent to which, and in which ways, Searle’s imagined language-game—for that is what the Chinese Room is—sheds light on our notion of “thinking”. From the fact that Searle himself, while inside the room, would deny that he understands Chinese it certainly does not follow that there is no understanding going on. Who is it who is passing notes under the door to Searle? What would they say if Searle emerged to confront them? What would happen if Searle came out of the room, smiled at the Chinese people who have been receiving the messages he put together, and in a friendly way shared a character or two with them in the sand? It is not difficult to imagine a future of Searle with the Chinese people learning their language. Of course these are humans in a social world, working with machinery. But that machinery may work more or less “intelligently” for the tasks at hand.

Many philosophers who have attempted to counter Searle have imagined increasingly strained variants of his language-game: bringing to bear hypothesized internal causal/computational mechanisms in the brain; placing Searle’s room in the context of another room the size of India, populated with millions; bringing the computational processes inside Searle’s head with artificial neuron-replacements, or alien intelligences.86 But we should remember that Turing, following Wittgenstein, was much more restricted and careful in the way he set his Test up. Turing was not attempting to prove that machines can think. Thus it is no parry to him to assert that they cannot.

84 Cole (2020) gives a good overview.
85 Cole (2020) and Copeland (2000, §7.2).
86 Cole (2020).

What is most striking about the Chinese room (and many objections to it) is that philosophers have taken away the evolving meaningful social backdrop in which human language-processing,
including that imagined in Turing’s Test, takes place. Searle refused to engage in the Wittgensteinian investigation of “What are we to say now?”, because he believes that “we” consists of a summation of individual brains-in-biological-bodies insofar as we mean at all. By contrast, Turing crafts the Test carefully, placing it in a social world where there is already sufficient commonality between A, who poses questions, and B, who offers responses to A. That is fundamental to his whole point of view, and cannot be subtracted from his Test without loss.
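To make vivid how little “grasp” the operator in the room needs, here is a deliberately crude sketch—not from Searle or Cole, and a caricature of the thought experiment—in which the “rule book” is nothing but an invented lookup table over character strings. The operator’s routine never treats the symbols as language; whatever understanding there is, if any, lies in the wider situation in which the slips are passed and read.

```python
# Illustrative caricature only: a "Chinese Room" reduced to a lookup table.
# The rule-book entries are invented stand-ins; the operator matches shapes
# to shapes and never interprets any of them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天很晴朗。",
}

def room_operator(slip: str) -> str:
    """Apply the rule book to the incoming shapes and pass the listed shapes back."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # the default reply is also just shapes to the operator

print(room_operator("你好吗？"))
```

Whether such a setup, embedded in ongoing exchanges with the people outside the door, ever adds up to “understanding” is precisely the question the systems objection presses.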
Objections to the Turing Test Summarized

Objection: Turing merely stipulates a definition of “intelligence”.
Reply: The Test does not offer any a priori definition of “intelligence”.

Objection: Turing commits himself to behaviorism.
Reply: Turing does not construe “intelligence” in terms of behavior alone, even including verbal behavior: the test is repeatable and open-ended, a “language-game” to explore our uses of the concept, which is a “family resemblance” one. “Intelligence” is a response-dependent, “emotional” concept.

Objection: Turing assumes that merely unthinking operations with characters on a screen (“syntax”) are something meaningful; the symbol-grounding problem is unanswered in the Turing Test.
Reply: Humans frequently operate with symbols and express their desires, intentions, and agency using text and other technologies, including voice. The social setting of the Turing Test assumes this is meaningfully in place. Symbolic systems are “grounded” in natural facts about us and our training, both our biologies and a social world, one that evolves with both human and machine activity, including the evolution of our concepts and phraseologies.

Objection: Turing fails to emphasize sufficiently the difference between humans and machines: machines are algorithmic and so infallible and impersonal, whereas humans are creative and surprising.
Reply: Turing gives human beings their creative say in a social and culturally evolving world, also in the programming of machines, by emphasizing that the computational irreducibility of machine behavior means that stored program computers will surprise us. Humans are equally subject to the undecidability results concerning logical consequence (Turing) and the incompleteness results about arithmetic (Gödel), which entail no such thing as infallibility. And humans can act, with respect to computation, wholly impersonally, using themselves as “machines”.

Objection: Turing cannot explain consciousness.
Reply: The Turing Test is not designed to decide the metaphysical question of what “consciousness” really is. Since no philosopher has an agreed-upon account of this, the Turing Test is no worse off. Moreover, the meaningful application of such terms as “action”, “agency” and “intelligence” depends upon a host of grammatical and contextual factors that metaphysical theories tend to deny.

Objection: Intelligence is part of the soul, and inexplicable.
Reply: Turing agrees that the Turing Test cannot dislodge this idea. But this does not imply that our uses of the notion of “intelligence” may not (and should not) be explored.

Objection: Some human beings are expert at detecting the authenticity of human expression, but there are no general techniques or algorithms or rules of expertise for this.
Reply: Turing grants this point in relation to particular encounters among individuals, but his Test minimizes reliance on such expert knowledge by drawing meaning-attribution into the context of a humdrum repeatable “language-game” anyone, not just an expert, can play.

Objection (the Theological Objection): We have “Souls”.
Reply: Turing needs to say more here about “Souls”; and he is wrong when he remarks that Muslims believe women have no souls.

Objection (the Heads in the Sand Objection): The consequences of machines thinking are too dreadful.
Reply: Turing is perhaps too dismissive of this: humans need to take this question on, on a human scale, attending to the climate and needs of the earth and all of its inhabitants, especially those who are most vulnerable.
Conclusion

Nowadays many of us readily apply terms like “search”, “thinking” and “Artificial Intelligence” to processes and machines. We “command” Siri to perform a variety of tasks and are not embarrassed when, in public, Siri doesn’t quite manage to do what we say. Siri, we assume, will improve. Are these mere façons de parler, or metaphors? No, they are changes in our forms of life with language. It seems safe to say that there has been a shift in our grammar, and it signals shifts in our forms of everyday life, particularly in everyday social life, where, with the growing ubiquity of mobile technology—especially during the COVID epidemic—we communicate
and relate to one another increasingly on-line, rather than face-to-face. Conversely, with the entry of the Internet of Things, web-connected robots live with us at home, something that alters and changes the ways in which members of the domestic household relate to and articulate their lives with one another.87 After purchasing my first iRobot, I still paid cleaners to come to my home once a month. At first they seemed to resent the presence of the robots. But they soon came to use them while they cleaned elsewhere more deeply. Their labor was saved, and spent. All were satisfied.

Turing’s Test pointed toward great future shifts lying ahead: the evolution of (our concept of) “intelligence” and other fundamental concepts (“friend”, “like”) as we respond to the power of computers that can learn and alter their programs in the face of their own responses. The fact is that responses are themselves “intelligent” in being plastic and opportunistic in cobbling routines together. If a machine is expected to be “infallible”, Turing wrote, “it cannot be intelligent…there are several mathematical theorems which say almost exactly that.”88 This implies that AI computers will always make “mistakes”, and that we will always be building bridges as we go. The important thing to remember is that we are offloading—hopefully—to better interface with the lives of others on our planet. A 26-year-old user of a dating app knows that if she swipes quickly, she will be paired with others who similarly swipe. So she picks the pace she likes. Asked if she thinks older users of the app know this, she says, confidently, “No”. Asked if the pace of swiping will guarantee a good experience, she says “No”. That is what is important, what matters.

Routinely speaking with my cousin every day, we begin to find that AI is “listening in” on our conversations. When I mention the name “Wittgenstein” several times, Netflix sends her The Oxford Murders (2008), a film in which an actor plays Wittgenstein. I then watch the film. Has my autonomy been degraded? In light of computational irreducibility, it seems not. AI has taken over much of the labor of delivery for us. What it cannot take over is the labor of responding to the film. That is my job to do, as a person. I might post a satirical TikTok, poking fun at it. I might thank my cousin for our conversations. New forms of criticism emerge. But in the end the conversation the next day with my cousin—even if it is recorded—still belongs to me, to us. Agency and the moral effort at self-improvement are not impaired here.

87 On more serious worries about sex robots and other robots entering the home, compare Mays (2021).
88 Turing (1947, 497).

Indeed: who is to say
that popular culture—e.g., the TV series and films we see on Netflix—does not give us plenty of space, plenty of language-games, to improve ourselves by? The “noisiness” in our responses is itself part of our search for meaning and culture (see Laugier’s contribution to this volume).

Since the advent of the web and mobile technology the “Masters”—as Turing tended to call the trainers of the machines, then mathematicians—have become us, a social multitude, constantly refracted through our offloading of tasks to computers. It is important that within every social multitude there are arguments about what is the right way to go, differences among members of the group. Sociologists and anthropologists have sometimes associated Wittgenstein with the idea that concepts are embedded in “practices”, but Wittgenstein was careful to avoid the term for the most part, substituting for it the more evolving and elusive notions of “forms of life” and “techniques” devised to creatively confront new situations.

Even before publishing his Test, in his (1948) report to the National Physical Laboratory, Turing envisioned AI—what he called “intelligent machinery”—as raising the prospect of a social experiment involving all of humanity. He speculated about the different kinds of searching that would earmark the developments. His fundamental idea—like Wittgenstein’s—was that “intelligence” is itself manifested in understanding the differences among different kinds of searching. He predicted that before an attempt could properly be made at building a human-like robot, there would be three other forms of searching that would be paramount. First, the need to find new algorithms and proofs (“the intellectual search”). Second, the search to find biological protection via computers (“the biological search”)—certainly an increasingly intensive focus of AI research, creating new ethical quandaries every day (see AlphaFold89). But perhaps most importantly, Turing foresaw the fundamental importance of the human-to-human “cultural search”:

… the isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment of other men, whose techniques he absorbs during the first twenty years of his life. He may then perhaps do a little research of his own and make a very few discoveries which are passed on to other men. From this point of view the search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals. (1948, 516)
89 https://alphafold.ebi.ac.uk/, an AI system using deep learning to predict the 3-D shape of a protein.
References

Abramson, Darren. 2011. “Descartes’ Influence on Turing.” Studies in History and Philosophy of Science 42: 544–51. Anscombe, G. E. M. 2000. Intention. 2nd ed. Cambridge, MA: Harvard University Press. Avigad, Jeremy. 2022a. “Response to Opinion 182 of Dr. Z’s Opinions.” February 17, 2022, https://sites.math.rutgers.edu/~zeilberg/JA182.html, accessed 7/24/2022. Avigad, Jeremy. 2022b. “Varieties of Mathematical Understanding.” Bulletin (New Series) of the American Mathematical Society 59, January(1): 99–117. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency: 610–23. Open source at https://dl.acm.org/doi/pdf/10.1145/3442188.3445922. Cavell, Stanley. 1979. The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy. Oxford: Oxford University Press. Chalmers, David. 2010. The Character of Consciousness. Oxford: Oxford University Press. Chalmers, David. 2022. Reality+: Virtual Worlds and the Problems of Philosophy. New York: W.W. Norton & Company. Church, Alonzo. 1937. Review of A.M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem.” The Journal of Symbolic Logic 2(1), March: 42–43. Citron, Danielle Keats. 2014. Hate Crimes in Cyberspace. Cambridge, MA: Harvard University Press. Citron, Danielle Keats. 2020. “Cyber Mobs, Disinformation, and Death Videos: The Internet As It Is (And As It Should Be).” Working Papers from Faculty Scholarship, Scholarly Commons at Boston University School of Law. Citron, Danielle Keats, and R. Calo. 2020. The Automated Administrative State: A Crisis of Legitimacy. Working Papers from Faculty Scholarship, Scholarly Commons at Boston University School of Law. Citron, Danielle Keats, and Leslie Meltzer Henry. 2010. “Visionary Pragmatism and the Value of Privacy in the Twenty-First Century.” Michigan Law Review, no. 108: 1107–23. Cole, David. 2020. “The Chinese Room Argument.” In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Winter 2020 edition. Palo Alto, CA: Metaphysics Research Lab, Stanford University. At https://plato.stanford.edu/archives/win2020/entries/chinese-room/, accessed 7/25/2022. Conger, Kate. 2022. “Elon Musk and Twitter Will Go to Trial Over Their $44 Billion Deal in October.” The New York Times, Business Daily Briefing, 7/19/2022. Cooper, S. Barry, and Jan van Leeuwen, eds. 2013. Alan Turing—His Work and Impact. Amsterdam/Burlington, MA: Elsevier.
Copeland, B. Jack. 2000. “The Turing Test.” Minds and Machines 10: 519–539. Copeland, B. Jack, ed. 2004. The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life: Plus the Secrets of Enigma, Oxford: Oxford University Press. Davis, Martin. 2017. “Universality Is Ubiquitous.” In Floyd and Bokulich, eds. 2017, 153–158. Das, Veena. 2020. Textures of the Ordinary: Doing Anthropology after Wittgenstein. New York, Fordham University Press. de La Mettrie, Julien Offray. 1747/1996. Machine Man and Other Writings. Cambridge, Cambridge University Press. Dreben, Burton, and Juliet Floyd. 1991. “Tautology: How Not to Use a Word.” Synthese 87, no. 1: 23–50. Fagone, Jason, “The Jessica Simulation”: Love and loss in the age of AI”, San Francisco Chronicle, July 23, 2021, https://www.sfchronicle.com/projects/2021/jessica-simulation-artificial-intelligence/#chapter1 accessed 5/22/2022. Floyd, Juliet. 2012. “Wittgenstein’s Diagonal Argument: A Variation on Cantor and Turing”. In Epistemology versus Ontology, Essays on the Philosophy of Foundations of Mathematics in Honour of Per Martin-Löf, eds. P. Dybjer, S. Lindström, E. Palmgren, G. Sundholm, Dordrecht, Springer Science+Business Media, 25–44. Floyd, Juliet. 2013. “Turing, Wittgenstein and Types: Philosophical Aspects of Turing’s ‘The Reform of Mathematical Notation’ (1944–5).” In Cooper and van Leeuven, eds. (2013), 250–253. Floyd, Juliet. 2016. “Chains of Life: Turing, Lebensform, and the Emergence of Wittgenstein’s Later Style. Nordic Wittgenstein Review 5, no. 2: 7–89. Floyd, Juliet. 2017. “Turing on ‘Common Sense’: Cambridge Resonances.” In Floyd and Bokulich, eds., 103–152. Floyd, Juliet. 2018. “Lebensformen: Living Logic.” In Language, Form(s) of Life, and Logic: Investigations after Wittgenstein, ed. C. Martin. Berlin, deGruyter, 59–92. Floyd, Juliet. 2019. “Teaching and Learning with Wittgenstein and Turing: Sailing the Seas of Social Media.” Journal of Philosophy of Education 53(4): 715–733. Floyd, Juliet. 2021. “Selves and Forms of Life in the Digital Age: A Philosophical Exploration of Apparatgeist.” In Katz, Floyd and Schiepers, eds. 2021. Floyd, Juliet. 2022. “‘Surveyability’ in Hilbert, Wittgenstein and Turing”. Philosophies 8, no. 6. https://www.mdpi.com/2409-9287/8/1/6. Floyd, Juliet and Alisa Bokulich, eds. 2017. Philosophical Explorations of the Legacy of Alan Turing: Turing 100. Boston Studies in the Philosophy and History of Science, vol. 324. New York: Springer Science+Business Media. Frankel, Richard, and Victor Krebs. 2022. Human Virtuality and Digital Life. New York: Routledge.
Genova, Judith. 1994. “Turing’s Sexual Guessing Game.” Social Epistemology 8, no. 4: 313–26. Gershberg, Zac, and Sean Illing. 2022. The Paradox of Democracy: Free Speech, Open Media and Perilous Persuasion. Chicago, IL: University of Chicago Press. Grant, N., & Metz, C. 2022. Google Sidelines Engineer Who Claims Its A.I. Is Sentient: Blake Lemoine, the Engineer, Says That Googleʼs Language Model has a Soul. The Company Disagrees. The New York Times, 6/12/2022. Grasso, Isabella, David Russell, Abigail Matthews, Jeanna Matthews, and Nicholas R. Record. 2020. “Applying Algorithmic Accountability Frameworks with Domain-Specific Codes of Ethics: A Case Study in Ecosystem Forecasting for Shellfish Toxicity in the Gulf of Maine.” FODS ’20, Virtual Event, USA, ACM. At https://dl.acm.org/doi/pdf/10.1145/3412815.3416897. Hanna, Alex, and Meredith Whittaker. 2020. “Timnit Gebru’s Exit from Google Exposes a Crisis in AI.” Wired (Opinion), December 31, 2020. Hao, Karen. 2019. “Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes.” MIT Technology Review, June 6, 2019. At https:// www.technologyreview.com/2019/06/06/239031/training-a -s ingle-a i- model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/. Haraway, Donna. 1991. Simians, Cyborgs and Women: the Reinvention of Nature. New York: Routledge. Hodges, Andrew. 1999. Turing. New York, Routledge. Hodges, Andrew. 2012. Alan Turing : the Enigma: The Centenary Edition. Princeton, NJ, Princeton University Press. 1st edition 1983. Hodges, Andrew and Cassandra Hatton 2015. Turing Point. Bonham’s Magazine 42, Spring: 18–21. Johnson, Steven. “A.I. Is Mastering Language. Should We Trust What It Says?” The New York Times, 4/15/2022. Kahneman, Daniel, Olivier Sibony, and Cass R. Sunstein. 2022. Noise: A Flaw in Human Judgment. New York: Little, Brown Spark Hachette Book Group. Katz, James E., ed. 2003. Machines That Become Us: The Social Context of Personal Communication Technology. New Brunswick, NJ: Transaction Publishers. Katz, James E. 2014. Living Inside Mobile Social Information. Dayton, OH: Greyden Press, LLC. Katz, James E. and Mark A. Aakhus, eds. 2002. Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge, U.K. Cambridge University Press. Kennedy, Juliette. 2017. “Turing, Gödel and the `Bright Abyss’.” In Floyd and Bokulich, eds. 2017, 63–92. Makovec, Dejan and Stewart Shapiro, Eds. 2019. Friedrich Waismann: The Open Texture of Analytic Philosophy. Palgrave Series in the History of Analytic Philosophy. Cham: Palgrave Macmillan/Springer Nature Switzerland.
Mays, Kate K. 2021. “Possibility or Peril? Exploring the Emotional Choreograaphy of Social Robots in Inter- and Intrapersonal Lives.” In Katz, Floyd and Shiepers eds. 2021, 57–74. McDermott, Drew. 2014. “What Was Alan Turing’s Imitation Game? Assessing the Theory Behind the Movie.” The Critique, Special Issue on the Alan Turing biopic The Imitation Game. http://www.thecritique.com/articles/what-was- alan-turings-imitation-game/, accessed 8/8/2022. Mehta, Ivan. 5/18/2022. “Musk and Twitter are stuck in a stupid stalemate about bots: So how many bots are there on Twitter?” TNW. https://thenextweb.com/news/elon-musk-twitter-bots-spam-fight-analysis, accessed 6/30/2022. Minsky, Marvin. 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York. Simon & Schuster. Neuman, Scott. 2018. “Uber Reaches Settlement with Family of Arizona Woman Killed by Driverless Car.” NPR, The Two-Way (https://www.npr.org/sections/thetwo-way/2018/03/29/597850303/uber-reaches-settlement-with- family-of-arizona-woman-killed-by-driverless-car). Proudfoot, Diane. 2013. “Rethinking Turing’s Test.” The Journal of Philosophy 110(7): 391–411. Proudfoot, Diane. 2020. “Rethinking Turing’s Test and the Philosophical Implications.” Minds and Machines 30: 487–512. Putnam, Hilary. 1960. “Minds and Machines.” In Dimensions of Mind, edited by Sidney Hook. New York: New York University Press. (Reprinted in Putnam 1975, pp. 362–385) Putnam, Hilary. 1967. “Psychological Predicates.” In Art, Mind and Religion, edited by W. H. Capitan and D. D. Merrill. Pittsburgh, PA: University of Pittsburgh Press, 1967, 37–48. (Reprinted under the title “The Nature of Mental States” in Putnam 1975, 429–440.) Putnam, Hilary. 1975. Mind, Language, and Reality: Philosophical Papers Volume 2. Cambridge: Cambridge University Press. Putnam, Hilary. 2012. Philosophy in an Age of Science: Physics, Mathematics, and Skepticism. Edited by Mario DeCaro. Cambridge, MA: Harvard University Press. Saygin, Ayse Pinar, Ilyas Cicekli, and Varol Akman. 2000. “Turing Test: 50 Years Later.” Minds and Machines (10): 463–518. Searle, John. 1980. “Minds, Brains and Programs.” Behavioural and Brain Sciences (3): 417–424. Searle, John. 2010. “Why Dualism (and Materialism) Fail to Account for Consciousness.” In Questioning Nineteenth Century Assumptions about Knowledge (III: Dualism), edited by R. E. Lee, 5–30. New York: SUNY Press. Simonite, Tom. 2020. “Behind the Paper That Led to a Google Researcher’s Firing.” Wired (Business), December, 8, 2020.
Sloman, Aaron. 2013. “Virtual Machinery and Evolution of Mind (Part 1)”. In Cooper & van Leeuven, eds., 97–102. Smith, Craig S. 2020. “Dealing with Bias in Artificial Intelligence.” The New York Times. Originally published 11/19/2019, updated 1/2/2020. Solove, Daniel J. 2008. Understanding Privacy. Cambridge, MA: Harvard University Press. Sterrett, Susan G. 2000. “Turing’s Two Tests for Intelligence.” Minds and Machines 10, 541–559. Sterrett, Susan G. 2017. “Turing and the Integration of Human and Machine Intelligence.” In Floyd and Bokulich, eds., 323–338. Sterrett, Susan G. 2020. “The Genius of the ‘Original Imitation Game’ Test.” Minds and Machines 30: 469–86. Turing, A. M. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 2(42): 230–265. In Cooper and van Leeuven, eds. 2013, 16–43, references to this edition. Turing, A. M. 1944/45. “The Reform of Mathematical Notation and Phraseology.” In Cooper and van Leeuwen, eds., 245–249. Turing, Alan M. 1947. “Lecture on the Automatic Computing Engine, a Lecture to the London Mathematical Society 20 February 1947.” In Cooper and van Leeuwen, eds., 486–498. Turing, Alan M. 1948. “Intelligent Machinery”. Report Written for the National Physical Laboratory. In Cooper and van Leeuwen, eds., 501–516. Turing, Alan M. 1950. “Computing Machinery and Intelligence.” Mind 59(October): 433–460. In Cooper and van Leeuven, eds., 345–357, references to this edition. Turing, Alan M. 1951. “Can Digital Computers Think? BBC Radio Broadcast, 15 May and 3 July 1951.” Edited by B. Jack Copeland, in Cooper van Leeuven, eds. 2017, 660–67. Turing, Alan M., Richard Braithwaite, Geoffrey Jefferson, M. H. A. Newman 1952. “Can Automatic Calculating Machines Be Said to Think? BBC Radio Broadcast, 10 January 1952.,” edited by B. Jack Copeland. In Cooper and Jan van Leeuven, eds. 2013, 651–59. Vardi, Moshe Y. 2014. “Would Turing Have Passed the Turing Test?.” Communications of the ACM 57, September, no. 8 (2014): 5. Wittgenstein, Ludwig. 1921/1981. Tractatus Logico-Philosophicus. C. K. Ogden, trans., London: Routledge & Kegan Paul. (First German edition in Annalen der Naturphilosophie 14, edited by Wilhelm Ostwald, 1921, 12–262. Available open-source in German with another English translation at https://people. umass.edu/klement/tlp/courtesy/of/Kevin/Klement.
Wittgenstein, Ludwig 1969. Preliminary Studies for the ‘Philosophical Investigations’: Generally Known as the Blue and Brown Books. Oxford: Basil Blackwell. Wittgenstein, Ludwig. 1978. Remarks on the Foundations of Mathematics. Cambridge, Mass, MIT Press. Wittgenstein, Ludwig. 1980. Bemerkungen uber die Philosophie der Psychologie: Remarks on the Philosophy of Psychology, edited by G. H. von Wright and Heikki Nyman, translated by G.E.M. Anscombe. Oxford: Basil Blackwell. Wittgenstein, Ludwig. 1989. Wittgenstein’s Lectures on the Foundations of Mathematics: Cambridge, 1939, from the notes of R. G. Bosanquet, Norman Malcolm, Rush Rhees, and Yorick Smythies. Edited by Cora Diamond. Chicago: University of Chicago Press. Wittgenstein, Ludwig. 2009. Philosophische Untersuchungen = Philosophical Investigations, translated and edited by G.E.M. Anscombe, P.M.S. Hacker, and Joachim Schulte. Chichester, West Sussex, U.K./Malden, MA: Wiley-Blackwell. Wittgenstein, Ludwig 2015–. Wittgensteinsource: the Bergen Nachlass Edition: www.wittgensteinsource.org. A. Pichler. Bergen, Wittgenstein Archives, University of Bergen. Wolfram, Stephen. 2002. A New Kind of Science. Champaign, IL: Wolfram Media. Wolfram, Sephen. 2013. “Computation, Mathematical Notation and Linguistics.” In Cooper and van Leeuven, eds. 239–244. Wolfram, Stephen. 2019. “Testifying at the Senate About A.I. Selected Content on the Internet.” Includes video of testimony to the US Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation and the Internet (https://www.commerce.senate.gov/2019/6/optimizing-for- engagement-understanding-the-use-of-persuasive-technology-on-internet- platforms) “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms”. Published on the blog Stephen Wolfram, Writings, June 25, 2019, 2019, see https://writings.stephenwolfram. com/2019/06/testifying-at-the-senate-about-a-i-selected-content-on-the- internet/, accessed 7/24/2022. Zeilberger, Doron. 2022. “Opinion 182: Human-Supremacist and Pure-Math- Elitist Jeremy Avigad got it Backwards! The Same-Old (mostly boring!), currently mainstream, human-generated, and human-centrist “conceptual” pure math is DETRIMENTAL to Mathematics (broadly understood), and Experimental Mathematics is the Way To Go!” Dr. Z’s Opinions, 2/16/2022, published at https://sites.math.rutgers.edu/~zeilberg/Opinion182.html, accessed 7/24/2022. Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: the Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs/Hachette Publishing.
PART II
Second Axis: Praxis
Interview with Stephen Wolfram Juliet Floyd and James Katz
[This is an excerpt of a longer interview and has been edited for clarity and concision. The interview was conducted via Zoom on August 9, 2022] JK:
We want to talk with you Stephen about the current world of interaction between what AIs [Artificial Intelligences] tell humans to do, what humans tell AIs to do, and other kinds of relationships between humans and AI. Allow me to begin. I notice that when I use a mapping app to go on trips I’m offered the most fuel-efficient route. I’m happy to be given that information, but there’s a moral belief behind that: that I should pick the most fuel-efficient route. Then I feel as if I’m a bad person if I don’t take the most fuel-efficient route. What used to just be a decision of getting from point A to point B now has a moral component. And it has an intrusive component: somebody somewhere thinks I should know about the most fuel-efficient route, and has the
potential to know whether or not I followed it. I have a moral decision to make about what used to be called an uninfluenced or naïve or rustic set of decisions.
SW: I think a good question from a technology-meets-humans point of view on AI is about cases like this, and the many other forms of autosuggestion that we see at work in our human world. It is a general question: Is this the way the AIs will take over? Because in the end there are a lot of good suggestions or potentially good suggestions being given to humans. And in the end humans may just decide, "Let's simply follow what the AIs are telling us to do." Imagine a world in which we have augmented-reality glasses, and we're constantly being given a menu of "You should do this next, and then this", and so on. While you're talking to people wearing these glasses, they are receiving messages such as "Why don't you mention this?" etc. What happens then to our sense of conversation?—Well, we already have many versions of this at work in everyday life. When you type things it is autosuggested how to complete a name or a sentence in an email, we have programs that suggest how to make our writing more grammatical, and more concise. As you point out, when you use GPS, and so on. There's no Terminator [movie] scenario where the AIs are doing battle with the humans. It's just that the humans decide it is a lot easier simply to follow what the AIs have to say. Now the question becomes this. You are trying to define what AIs should say, whether AIs should say in those augmented-reality glasses, "You should do this or that thing. You should drive your car this way or that way. You should solve the Trolley Problem [thought experiment in the psychology of ethics] this way or that way as you drive your car." But what should be the basis for how the AIs make those suggestions? How shall we define success? I think there's a belief in the technology world that, in the end, there is a perfect program defining which way you should drive your car in each situation. There's a perfect ethics that can be found by the machines in some theoretical way. I tend to think that that couldn't possibly be true
and that those kinds of decisions are cases where, in the end, it depends upon what people want to have happen. There is no perfect mathematical theory of ethical decisions about the world, almost by definition. The AIs will no more get one to that perfect set of decisions than anything else will. AIs adhere procedurally more closely to what people tell them to do than people might hope for. What about teaching AIs ethics? How will we do that? Can we simply have the AIs watch what the humans do? That sounds like a great idea until you realize, "No, that's not what we want: we want AIs to do what humans aspire to do." But how do we define what humans aspire to do? We're in a situation where there's no right answer, and we already know there's no right answer. The same issue arises for an AI that controls anything from an autonomous weapon to a central bank. More generally, there is no intrinsic way that the AI can define what it is trying to achieve. It might be able to get further if you say the thing you want to achieve is to optimize this or that particular thing in a particular situation. For the AI can work out—perhaps better than humans, perhaps not—a chain of things that have to happen in order to achieve that particular piece of optimization. But a lot of what we as humans end up doing involves relating to how other people operate in the world. If some of the agents operating in the world are AIs, how does that affect how we choose to act? The quintessential example is when the AIs grade the essays written by students. What do the students then write in their essays? In what ways does our conforming to the suggestions of AI change us? What if the AIs discover a way to describe the world that they then impose on us? Do we adapt our human language to follow things the AIs noticed rather than what we have evolved to notice in the course of human history? If you look at a typical machine learning image identification system and ask, "How did it figure out that that was a cat versus a dog?" you can look inside. Most of what's in there is not describable in a human narrative way, but we might identify that there's some particular way of analyzing images that was really good for distinguishing cats and dogs. Maybe
in the future we will say, "Gosh, we can learn from the AIs. We will have the concept of 'quibbling' the image, and that's the thing we have to do. Now we have a name for it, and we can start talking about it as something that is part of the human narrative of what's going on." This is where, for example, the choice of what news to put in your feed arises. Because it's a question of what you are trying to optimize. If you're to optimize the extent to which people are engaged with that news feed, then, Yes, the AIs can learn things about us humans about how to optimize that. But if you then throw into the AI "And let's make it be morally good," nobody knows how to do that. There's no ground truth. There is a ground truth for how much engagement—how long did the person sit there staring at the screen?—but "Was it morally good?" is not a similar kind of question. It's not a question about which I think there exist objective ways to measure an answer. The things that we can tell the AI to optimize for are, by contrast, objectively measurable.
JK: One of the fearful scenarios that we do encounter, especially in tabloid media, is the fear of robots chasing people down and trapping or killing them. For example, this is a trope about the scary dogs that are built by Boston Dynamics. But there's a different kind of war with AI that is going to occur long before any robots chase people, and that is using AI to control what people say, do, and think. The most recent example, which is now not going forward at present, was Google's automatic detection in people's writing of sexist and other kinds of language: Google would automatically detect these kinds of usage and prompt people not to use them. Of course, people have a choice not to listen to Google, but you can imagine very quickly universities and businesses will say, "No, before you send an e-mail, it has to go through this correction of Google that will get rid of these politically incorrect usages." I'm thinking that then people who want the old structure of grammatical usage—they can get together to create their own different version of a Google
word corrector and begin to run their statements through it, making sure they don't use woke terms. There could be an AI for them. Then you'll see a war of AIs.
SW: I think the question is always: What are you trying to achieve? That is, you have to give the AI some kind of code, and I don't mean a code in the sense of program. I mean code in the sense of a moral code, so to speak, of what is its ultimate objective. If you were writing a constitution for a country today, knowing that there were AIs around, what should the constitution say, and what about the AIs? What do we humans want the AIs, at an underlying level, to do? I think that only once you answer that question—and you might have many disagreements about the answer to that question—once you answer that question, then you can work through the technology of "Okay, what should actually happen in terms of the ranking of this content? What should actually happen in terms of banning this message?" As a practical matter, taking whatever moral code you might have and enforcing it a thousand times a second on different people around the world: that's something newly possible with AI. That's a different and an interesting case. As you suggest, one of the features of AI as it has been often implemented in the world today is that it's very centralized. That is not a necessary feature of AI as a technology. It is an economic feature of the particular way that this technology has evolved, a specific fact of business and economic history. If you imagine there's an AI that's deciding, "Is this message appropriate to go through?" Imagine that there is a non-centralized, personal AI that I have that has gotten pretty smart, and it's figured out how to evade the censor. My little AI is then personally changing my message to evade the censors, getting it through the censors and onward, so to speak. Other people might have different filters. That hasn't been, so far, mostly, the way that the last decade or so of AI has evolved, but that's not a necessary feature of AI as a technology. There is an interesting philosophical mistake that gets made in modern times, particularly by folks in the technology world.
They say, "We've been so successful with science, especially with these kinds of formal methods. Surely in the end we must be able to solve everything, including any of these moral questions." People imagine that with AIs, we'll be able to automate everything, but the ultimate thing that I think is definitionally unautomatable is "What do you actually want the AI to do? What goals do you want the AI to have?" Defining the goals is not something subject to automation. I think that's a thing people sometimes lose sight of.
JK: At the end of World War II the nuclear atomic scientists were lionized, and the public would ask the nuclear scientists, "What's the secret to world peace? How can we solve the world's problems?" The idea was that because they made such a tremendous breakthrough with atomic energy, they were endowed with superhuman insight into other domains that they actually had no expertise in at all. Boston Dynamics has made a promotional video showing their robot dog going around in a factory, identifying problem spots, and then going to the night foreman, waking him up, and pulling him, dog-like, to the problem area. In other words, the AI puts the human on the leash of the dog rather than vice versa. Thinking about how our life can be enriched through all these artificial intelligence choices and suggestions, which movies to watch, places to go, friends to meet, and so on, is there any diminishment in the quality of being a human as a result of the fact that our lives are being structured by these algorithms? Whatever the underlying ethos and calculus is, it's still being artificially manufactured by unseen remote others.
SW: The story of technology is a story of how the human has to do less, the more activities are automated. That's been the arc of the development of technology over the course of human history. I'm not sure that there's a fundamental distinction between what's happening with the automation with AI today and what happened with automation in the past. People might say, "You're not a real human unless you're chopping wood for yourself and doing this or that
thing.” It is a reasonable question: to what extent can we let things be automated for us and still have an “us” there? I’m not sure how to make a clear distinction between those things which get to the heart of the human condition today and those that do not. These are probably not the same things that we would have thought got to the heart of the human condition in times past. If you lived in a time when many of the things you did in a given day were determined by some kind of devotional ritual, for example. Is that more controlling than the autosuggestion by the AI? This is not obvious to me. You talk about an AI being controlled by unseen others. I think that’s more the relevant point. That is, AI is a transducer of, or a concentrator of, a point of view about how things should work. But it’s not as if we haven’t seen this before. Look at any of our major cultural beliefs. They, too, have this feature. Perhaps they were even invented by one person, or a few, and then deployed among billions. “Does technology rob us of humanity?” is, I think, a version of your question. Let’s take an extreme version of that. Let’s imagine that we have brain implants that are doing these suggestions right down at the level of individual neurons. At what point do we feel that that process has robbed us of humanity? Let’s suppose that we have a brain implant that is correctly emulating what our brain did when it was younger. Then do we feel that that’s a violation of our humanity or not? There’s no abstract answer to that question, in my view. I think one might say, “But, look, it’s just a machine controlling what I do. It’s not my brain itself controlling what I do.” That’s not adequately human, so to speak. But if we could know more about the science of the brain, perhaps we could see that there are a hundred billion neurons, they are firing in this pattern, and there are definite rules which determine what we do. Why is that different from a machine that also has definite rules that determine what it does? Why do we think there’s something intrinsically more human, or perhaps free, about the way that these hundred billion neurons are following the rules of biology than the
way these pieces of computer technology are following their rules to determine what they do? I suppose I would pose that as a philosophical question. How do we compare these things? I think the one thing that would make us immediately say, “Oh, it’s just a machine. We don’t have any kind of human-like freedom,” is if from the outside you could readily predict what the machine is going to do. In other words, if the machine determines what I say precisely—e.g., every time I say this word, it always makes me follow up with that word—then it feels as if we no longer have our human free will about what to do or say. From a scientific point of view, there is a belief that when things have been made properly scientific, they’re made predictable. When something has been reduced to its scientific primitives, then it becomes predictable. But this just isn’t a true fact about science. It’s something people believed for a long time. The big advances in science from the 1600s allowed us to mathematize the way the world works. And that led people to the idea that “There’s a formula that tells you the answer to what’s going to happen in each particular case.” But what we’ve learned more recently—and I’ve put lots of effort into this—is that when you deal with things which are computational, even though the rules that you specify for how the system works may be simple and known to you, it doesn’t mean you can readily predict what the system will do. I call this phenomenon computational irreducibility: in order to find out what the system will do, you have to trace through the same computations that it does. If you apply this to the example of the brain machine, then you can be in a situation where the brain machine is computationally irreducible. To know what it will do, you have to chase through every step that it follows, and you can’t jump ahead and predict what it is going to do. That puts you in very much the same situation that you would be in in predicting what a brain does. And then it’s no longer the case that you can say, “Look, it’s a machine. It’s technological. It’s somehow intrinsically less rich than the actual operation of the brain itself.”
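A minimal sketch in Python of the kind of simple program Wolfram has long used to make this point: an elementary cellular automaton (Rule 30), whose update rule is a single line, yet whose state after n steps can, in practice, only be found by running all n steps.

```python
# Rule 30: a one-line update rule whose long-run behavior resists shortcuts.
# To learn the row after n steps, we have to compute every intermediate row.

def rule30_step(cells):
    """One step of the Rule 30 elementary cellular automaton (zero boundaries)."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])  # left XOR (center OR right)
            for i in range(1, len(padded) - 1)]

def run(n_steps, width):
    row = [0] * width
    row[width // 2] = 1            # start from a single "on" cell
    for _ in range(n_steps):       # no known way to jump straight to step n
        row = rule30_step(row)
    return row

# Print the 40 cells around the center after 500 steps.
final = run(500, width=1201)
print("".join("#" if c else "." for c in final[580:620]))
```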
At this point it becomes much less clear how to answer a lot of these questions about whether the actions are just those of a machine or whether it is ultimately the person's brain. I think once we get into the situation where we're dealing with computationally irreducible technological processes, that distinction becomes much less clear. The reason we haven't been presented with this until now is that from the Industrial Revolution onward, we've been used to machines whose operation we can readily understand. We've been used to machines where we set them up so that you can see the cogs and gears work. They get this or that result. But that, I think, is a temporary phenomenon. For what we see today, using computation to make technology, is much more this computational irreducibility story: Even though you knew the rules that you set up, you can't see what the system is going to do. Now you might say, "That's a terrible thing. How could we ever operate in such a situation?" But after all, that is the situation we've been in with respect to nature throughout history. Nature does what it does. We don't know necessarily how it works inside. We try and figure that out. We try and use science to make a human narrative to explain how nature works. Sometimes we succeed, and sometimes we don't. The idea that sometimes we don't succeed—that, I think, is computational irreducibility operating in nature. We have not been exposed to that in the technology we built before because we avoided it. But as we make more sophisticated technology, as we get closer to really letting computational technology achieve everything it can achieve, we're going to be forced to confront computational irreducibility and forced to confront the fact that we can't have a human narrative for what's going on inside. We can't readily predict what is going to happen.
JK: Is there going to be a similar profound crisis of human consciousness as there was when we discovered the fact that we were not the center of the universe, that we were just one piece of soil/rock rotating around a lone sun, and then among hundreds of suns, thousands, billions and eventually trillions of suns and planets? We're different as humans from
the way we were when we thought we were literally the center of the universe. Do you foresee a similar crisis of failure of human insight and understanding when we confront computational irreducibility?
SW: I think that this question of the human place in the world is something where even though we now know we're not physically in the center of the universe, we still have this feeling that our human intelligence is something deeply special. Eventually, we will realize that our human intelligence is only special to us, that there is plenty of computational sophistication in the natural world, in AIs, that is just as computationally sophisticated as what happens in our brains. But although it is "intelligence", it is not like ours. There's no necessary alignment of purpose—there's no guarantee that we can empathize with these different intelligences. Should computational irreducibility make people say, "Oh, we should give up and stop trying to develop technology and so on"? Absolutely not. Imagine a world where there was no computational irreducibility. That would be a world in which you could know what the answer would be, and there would be no point in living your life. For you could just say, "Let's jump ahead, apply the formula, and find out the answer is 43." Computational irreducibility is what makes there be some richness to the progression of time. It makes something be achieved by the progress of time. It means that there are limitations to things we can say about what will happen in the world. There are patches of computational reducibility, an infinite number of ways in which, in some specific situations, we can jump ahead. This implies, for example, there's no end to the inventions we can make, allowing us to find another pocket of reducibility that allows us to jump ahead in this computationally irreducible world. Similarly, we'll never run out of places where we can make a human narrative about what's happening that is computationally reducible, because there is an infinite number of such things. The fact that there is a place we can't get to—the full-on "we've solved everything" place—isn't a loss. Because if we could
get there, then the progression of time would not really have any meaningful character to it: we could always jump ahead of time, so to speak. As a practical matter, computational irreducibility is what keeps the future interesting. There is also a question of how we determine how AIs will work. One thing we might say is "Look, we've got the perfect constitution for the AIs. We're going to define the set of axioms for how AIs work, and that's all going to be good." One feature of computational irreducibility is it shows that that process will never succeed. There will always be unexpected consequences. There will always be bugs. There will always be places where you haven't successfully nailed things down with your finite set of axioms. That, among other things, is evidence that this idea that the code of ethics—the idea that we have to make up those three principles that the AIs have to follow and then every circumstance that might arise will have been covered—that can never work. This is another consequence of computational irreducibility. It forces us back to the idea that, in the end, we're going to have to make choices. It puts things back, I think, on us.
JF: Stephen, thank you. I have just a few closing questions to pose. People have a need to make things simple to express, whether they are technology experts or laypeople. The public already feels that these imagined future ethical principles for AIs don't apply themselves. But we need monikers and markers in order to collectively act as human beings. One thing I draw from what Stephen is saying is the importance of human language and human phrasing and conceptualization in the face of computational irreducibility. Axioms aren't going to be able to do the jobs of ethics. And, as with any other point in the history of human technology, it seems to me that certain tasks will be automated, but therefore other tasks will become very, very important that weren't quite so important, as tasks, before. One of these increasingly important tasks is how to make it simple enough for your average human being to have some idea of how to talk about this with other human
beings, how to talk about what's unintelligible and unfathomable versus what is fathomable. Of course, I'm the philosopher in the room. I feel this speaks to fundamental issues of democracy, the question of whether there's an "us," as you've already said. Isn't human language and its creativity, and our learning to devise ways to have people do better than three principles or five principles—maybe there's a massive challenge of philosophical labor for humanity, for people to develop levels of talking about this, to make things intelligible and known?
SW: The way I see it, there's this ocean of computational possibility that exists. Computation is very powerful. It can do lots of kinds of things. On the other hand, there's the set of things that we humans think about and have historically considered important. One of the things that has been part of my life's work is trying to make a bridge between what is computationally possible and what humans care about. The bridge that I've been trying to make is this idea of computational language, the idea of taking things in the world and representing them computationally in a way that humans can understand and that can also tap into this ocean of computational possibility. Now it's an interesting situation because computational language is something that, in the current age, people like me actually invent. It's not like human language, where it's been mostly a process of evolution. What I think one has to think about is: What should one describe in language in general? Ordinary human language, for example. There's no point in describing and having a word for tables until there are lots of tables around in the world. But once you invent that word for tables, then you can tell people, "Make me a table," and there'll end up being a lot more tables in the world. I think this idea that when you have symbolized the world in language, then that determines something about how the world is then constructed—that very much happens in the case of computational language. That is, the things that one can put into computational language to describe how one should think about things computationally—that becomes the way that people successfully think about things computationally.
I think it's interesting that in the case of mathematical notation—this happened in the 1400s, 1500s, 1600s. The notation streamlined the way people could think about mathematics. That allowed the invention of algebra and things like that. We're seeing a similar kind of thing now with computational language defining how we humans can think about computational kinds of things. I realized just recently that, in a sense, Aristotle was onto this idea a long time ago because logic is a story of taking what would otherwise be human arguments and so on and putting a formal framework around those things. He didn't have electronic computers to implement his logic, but, in a sense, the concept that he had is really the same concept that we're using today in defining computational language. I think this process of defining computational language—that's a way of locking in what we humans care about and giving it computational structure. Now, what does that imply for the way that people express themselves, let's say, in a democracy? One of the thought experiments that is perhaps fun, if impractical, is to say, "When widespread literacy came in, it became possible for people to check ballots and vote in those ways. If widespread computational language literacy comes in, what will that enable?" Could somebody write essentially a computational essay that defines, in computational language, things that they want to be true about the world, and could a computational democracy be one in which you collect a hundred million computational essays about how people want the world to be, and then you feed them to a big AI and say, "Now figure out what to do from this"? I do always find it interesting that, in a sense, government operates a little bit like a machine or an AI. It has certain regulations and principles and, assuming it follows those, it's just following its rules and doing what it does. Now, we could put an AI in place enforcing those regulations, but, in a sense, it's not that different, I think, from the government enforcing those regulations.
But I don’t think that kind of approach gets one out of the box that political philosophy has been in forever. It might change the shape of the box a bit. And this for exactly the same reasons. If you’ve given the AI the hundred million computational essays describing people’s preferences, how do you want the AI to set things up based on those? Do you want 10 percent of the people to be extremely happy and 10 percent to be extremely unhappy? Do you want nobody to be unhappy but everybody to be only mediocrely happy? All those kinds of things. Those are not questions that that setup can, in any abstract sense, answer. Those are things that have to be answered from the outside, I think. JF: Yes. It seems likely that the AI will begin to discover all kinds of injustices to groups that we’ve never thought of as groups before. SW: Yes. JF: It’s perfectly capable of discovering that “Gee, we didn’t realize that people who have this kind of hair color and live in this kind of ZIP code suffer increased risks of x, y and z.” We will continue discovering these things, and then we have the problem of political philosophy. What are we supposed to do about those things? We’re still stuck with the question of what we care about, ultimately. SW: Right. I think that the extreme version of this is that the only computational system that will do nothing wrong is a system that does nothing at all. JF: [Laughter] There you go. Stephen, I’ve been at a Wittgenstein and AI conference in London at the New College of Humanities, where Northeastern has just started an AI and philosophy graduate program. They have people from Silicon Valley coming in. They want talk about how to handle what they now call “responsible AI”. They don’t want to talk about “ethics” in AI, probably for reasons having to do with some of the points you’ve made. I wonder if you have any thoughts about Google having a human ethics board or an external board that was disbanded within a week. There were two women who claimed that they were chased out for raising questions about car-
What does it mean to be responsible? You talked earlier about how centralized economically the development of AI has been. Should companies have ethics boards, for example, to limit human nudging? Is this appropriate regulation? What's going to happen in terms of getting to a sense of responsibility in how the companies are run? In the case of the Northeastern program, they're attempting to lay down templates for people who have start-up companies so that they can have oversight and discussion of ethical principles as the company develops. Is that the right approach or not? I wonder if you have any ideas.
SW: I think the concept of having the ethics officer embedded in your organization is one that has a very Soviet kind of character to it. That is not to say that ethics aren't important. It's a question of what the best way to implement such a thing is. I feel that one approach is a kind of a market one where you say, "Be in a situation where no winner has taken all, and be in a situation where different principles are followed by different players and the market can decide what it thinks is right." Now, of course, that has limitations, as any scheme has limitations. We have developed democracy before in the face of technology. I suppose an interesting question is: Is there a form of government that is more suitable for the world of technology than the mechanisms that we've had for society? Certainly, people in the blockchain world are fond of thinking about a re-running of those mechanisms. I think the most popular area there is DAOs, distributed autonomous organizations. These mean different things to different people, but at some level their common theme tends to be that it is computational contracts that determine what happens. There are, perhaps, humans voting on things, but the overall regulation, so to speak, is done computationally and automatically rather than being executed by governments and courts and so on. Now, the actual on-the-ground experience with this has been, to put it mildly, mixed. Because what does it mean if you say, as the original DAO did, "Everything is determined
by computational code”? Then somebody, as actually happened, transfers $50 million to themselves by running a program that is consistent with that code. And the person says, quite rightly, “What was wrong with what I did?” Other people say, “That wasn’t intended to be what would happen.” But the person can rightly say, “But you said that this was just determined by code, and I followed the code.” This idea that there can be mechanisms of government that are fundamentally different and don’t involve humans at all just seems almost definitionally hopeless. However, the machinery of implementing contracts and so on clearly will change. I think the most obvious thing is the idea of computational contracts, contracts written not in legalese but in computational language that can then be automatically executed by computers. That will happen, and it’s already started to happen. What will the consequences of that be? One of the curious consequences is that there will be a lot more contracts in the world, just as the paperless office led, at least for a while, to a lot more paper in the world. This, I think, will be even more extreme than that. I think what will happen is that computers will be executing contracts with each other, and every moment when we’re talking to each other, our computers will be executing contracts and making deals and so on in the background, just as they do whenever we go to a website that has ads on it and different companies are bidding to show us that ad. In the background, there’ll be lots of these little contracts being executed. I suppose, in a sense, that’s a whole society happening there but now among AIs. That’s lots of bartering and rules and this and that and the other, and some AI gets bigger than the others and so on and so on, but it’s all happening very quickly behind the scenes in the computational infrastructure and occasionally erupting to the point where we humans notice what’s happened. Again, this is not that different from the way that nature works. There are lots of things happening in nature that we don’t really notice, and every so often one bubbles up, and we notice it.
As to this question about what's the right way to think about the ultimate management of AI, I suspect that it's going to be pretty much the same old mechanisms that we humans have used in the past, perhaps implemented technologically in more streamlined ways. Now we can talk about those AIs down there in the computational infrastructure, and we can ask, "What governance methods are appropriate for them?" What they are trying to achieve is not a question we can meaningfully answer. They have no intrinsic purpose other than the purpose that they get from that potentially quite long chain of connections to us. Imagine yourself as an autonomous bot hanging out in social media, where you become an influencer. You're making a name for yourself, accumulating wealth, doing all kinds of things. How should we think about the moment at which that bot should be thought of as having some rights or other human-like recognition in society? At what point does it become somehow morally wrong to kill that bot? How much stuff does that bot have to have accumulated? For example, you might argue, "It's okay to kill the bot because there's a backup of it." But as a practical matter, that might not really be possible, because the thing might be a dynamic bot that is relying on all these different pieces, and there's no way in which it could have a snapshot backup made of it. But you could ask the same thing about humans. I, for example, have been a big personal analytics enthusiast, so for the last 30 years I've recorded immense amounts of data about myself. I record every keystroke I type, and I've been recording endless livestreams and so on. There'll come a moment when I'm sure that I will be, for all practical purposes, reconstructable from the kind of digital exhaust that I've left, so to speak. Does that change the ethics of letting me live my life happily, so to speak? By the time I've put enough stuff in the world that you can reconstruct a thing that behaves like me, does that affect my right to exist, or not?
JF: We might even be able to do a better job than you, have an improved you—
That’s the slippery slope. It’s like when you teach the AIs ethics and you show them only what people actually do, and then people say, “That’s not right.” You say, “There are an infinite number of possible directions to go in from what people actually do to what you think they should aspire to do. Which one would you like to choose?” I think it’s an interesting question. Suppose I imagine writing a computational contract that describes what a better me would be like and I were to say, “Take the bot’s worth of information about me, and then use my computational contract so that when you reconstruct me, the reconstruction will pull in the direction that’s specified by the computational contract”. I have this feeling that that would not end well— JF, JK, SW: [Laughter] SW: —because of computational irreducibility. As soon as you say, “Follow this set of computational rules. That’s what I want”—the problem is you don’t know the consequences of following that set of computational rules, so until you follow them, you don’t know what you’ll get, and you don’t know whether it’ll be what you want. It’s like a programmer saying, “I’m going to write this piece of code. It’s going to do exactly what I tell it to do.” Of course, that might not be what I wanted it to do. That’s just what I told it to do. JF: This is very interesting. You only have one life. I would certainly want to keep the residue you’ve left behind, Stephen, digitally. I would argue that’s valuable in and of itself. But we don’t want to alter it, because it’s one life, and history involves one life and the idea of one time. One time, one shot. That’s what circumscribes ethics as a phenomenon. We have to keep that in the picture, too, when we talk about improving ourselves. SW: Right. The thing I’ve increasingly realized is that the reconstructed you at some different time, as you say, is really not the same story. I sometimes see this as a tension between science and other things: science, in some people’s telling, seeks to talk about the world with humans in some objective way that has no relation to humans. One of the things that is, in a sense, disappointing for that point of view is, that what we are discovering, from the kind of computaSW:
That implies that the only thing that's special about us are our details. If we say, "Forget those details. We want to clean off those details to get the perfect scientific story," what will be left is nothing. People often feel that science is perfection, and this kind of clean scientific point of view is what you should always aspire to, but the end of that is inevitably empty.
JF: Beautifully said. Thank you.
Means vs. Outcomes: Leveraging Psychological Insights for Media-Based Behavior Change Interventions James Cummings
The Brave New World of Ubiquitous Sensing Technology
Recent developments in tracking technology permit greater opportunity for user and citizen behavior design than ever before. Notably, the monitoring of all manner of selective mediated exposure and decision-making in our online lives—such as social media activity, website visits, purchasing behavior, and streaming video selections—combined with increasingly effective collaborative filtering techniques allows for micro-targeting by which persuasive messages meant to influence user behavior are personalized with respect to content, timing, and even framing. Similarly, thanks to a wide assortment of new sensing technologies in recent years, surveillance of our analog, off-the-screen lives is also on the rise. Ranging in scale from "quantified self" hobbyists to state-run social crediting systems, physical behaviors are being increasingly monitored through a fleet of novel sensing technologies.
Today's surveillance capacities have expanded both outward and inward, with active and passive monitoring methods creeping into seemingly all the nooks and crannies of our everyday lives. We have sensors in public—such as London's renowned CCTV security cameras and China's facial recognition tools being incorporated into airports, traffic policing, and classrooms—that monitor who is where and when. We have sensors in the home—such as Alexa, Nest thermostats, and various Internet of Things smart devices—which track our daily schedules, energy consumption, and shopping lists. We have sensors in our cars—including cameras for parking assistance and Progressive's Snapshot device for monitoring mileage, drive times, and braking patterns—which will only increase with the mainstreaming of lidar, radar, and other environmental sensing required for self-driving vehicles. We have mobile sensors on our bodies—smartphones with a slew of surveilling apps, Fitbits, Apple Watches, Nike Fuels—which can collectively track our global position, general levels of health and exercise, bodily posture, and an array of biometrics ranging from heart rate to blood oxygen levels. Perhaps most invasively—figuratively and certainly literally—recent years have seen the rollout of new sensors that go in our bodies, including pill-shaped cameras for visuals on internal states and electronic pills for monitoring body temperature after surgery or during chemotherapy.
Feedback Loops: An Old Idea with New Possibilities
In all of the above cases, the underlying assumption is that if an individual actor, automated system, or other decision-making body is given information about the individual's performance or behavior, corrections can be made. In other words, all sensing technologies are guided, to varying extents, by the notion that an actor's behavior can be regulated through feedback. Specifically, they rely on negative feedback loops, which permit course correction through subtractive logic. At the most general level, negative feedback loops involve a decision-making body that (1) sets a desired state, (2) observes the current state, (3) calculates the difference between the desired state and current state, and (4) recalibrates settings so as to reduce that differential. This basic logic is how self-regulating mechanical systems work (e.g., a heating system regulates room temperatures by monitoring current temperature and turning itself on and off depending on how the current temperature compares to the preset desired temperature).
However, this same logic is what underlies the behavior change potential of sensing technologies such as a Fitbit or the Progressive Snapshot—human actors are expected to make decisions and recalibrate their behaviors based on a comparison of measured states and goal states, compelling them to take 1000 more steps, drive more efficiently, lose 5 more pounds, or consume less electricity. Wilbur Schramm's bidirectional communication model (Schramm 1954)—one of the oldest in communication scholarship, in which a receiver of a message can in turn send a message back to the original sender—parallels the information pathways involved in a feedback loop. In many ways, we can think of feedback loops as systems communicating with themselves. Sometimes that system is purely machine, as in the case of home heating; however, increasingly new sensor technologies permit feedback loops between systems containing both machine and human components. That said, consideration of communication between elements of human-machine systems is not new. Norbert Wiener and other scholars were investigating information exchanges in such systems as far back as World War II. Notably, Wiener applied insights from information theory to understand the dynamics of anti-aircraft guns, which require a human operator to work with a machine director, the latter completing the complicated mathematics involved in extrapolating the trajectory of a fast-moving target. Wiener went on to cohere his work on such systems into the then-new field of cybernetics, which investigates the processes by which external machine elements aid human actors in gauging and achieving desired bodily states or performance (Wiener 1948). Although cybernetic systems and negative feedback loops are relatively old ideas, recent trends in incorporating sensing technology into various domains of daily life—health and exercise, workplace productivity, energy consumption, transportation, travel, education, law enforcement—present new opportunities for behavior design and regulation. The recent explosion of ubiquitous and relatively inexpensive sensors—accelerometers, pedometers, global positioning, cameras, and microphones—now permit measurement of various activity metrics and, notably, are often coupled with channels for conveying those metrics as feedback to the user. Cyborgs have arrived in the mainstream; however, rather than anything resembling the Terminator or Borg of science fiction, they take the form of joggers checking their Apple Watches so as to determine how many more calories need to be burned before reaching their goal for the day.
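As a minimal sketch (with invented temperatures and drift rates), the four-step loop can be written out directly in Python; the same structure underlies the human-in-the-loop cases above, except that step (4) becomes feedback shown to a person rather than a switch thrown by a machine.

```python
# A thermostat-style negative feedback loop:
# (1) set a desired state, (2) observe the current state,
# (3) compute the difference, (4) recalibrate to shrink it.

def thermostat_step(desired_temp, current_temp, heater_on, tolerance=0.5):
    error = desired_temp - current_temp        # (3) desired minus observed
    if error > tolerance:                      # too cold: turn the heater on
        return True
    if error < -tolerance:                     # too warm: turn the heater off
        return False
    return heater_on                           # close enough: leave setting alone

desired, current, heater = 20.0, 16.0, False   # (1) goal state and starting room
for minute in range(30):
    heater = thermostat_step(desired, current, heater)   # (2)-(4)
    current += 0.4 if heater else -0.2         # room warms or drifts back down
    print(f"minute {minute:2d}: {current:4.1f} C, heater {'on' if heater else 'off'}")
```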
One convenient element of fully-machine systems is that the consequences of registering feedback are fairly predictable: mechanical and computing systems reflexively process physical states (e.g., temperature, capacity fill) and digital signals (zeroes and ones fed into if-then programming) in a manner that is usually pre-determined and reliable. However, such is not the case with human information processors, particularly in today’s media landscape. The question then becomes one of how to present that information back to a human user in a manner that can effectively push them toward particular types of decisions or behaviors. That is, how do behavior interventions render this feedback attractive, digestible, and actionable?
Grabbing Attention and Stirring Motivation in Users
Human brains are subject to a slew of cognitive biases in processing incoming information; more, the ability to attend to sensor feedback is moderated by the potential for distraction and relative motivation of the user given other environmental and mediated stimuli. Given today's attention economy, effective feedback is more than the presentation of information; it requires situating that information in light of user motivations and attentional capacities. In other words, feedback must be framed in a manner that renders it relevant and interpretable. Feedback information provided by sensing technologies is typically quantified, which on the surface may seem easier to interpret than purely qualitative assessments. However, while some numbers are straightforward (e.g., number of steps taken), in many cases feedback data are on a scale that makes them difficult to contextualize (e.g., non-interval scales in which zero holds no true value) or are the end result of a process that is relatively opaque for the user (how exactly is my credit score determined?). Users may be less likely to engage a feedback system in which they cannot decipher the meaning of their quantified score, and actually disengage when presented with lackluster scores produced by a black box algorithm that fails to transparently depict the process for deriving performance evaluations. For all of these reasons, there may be cases in which feedback information stands to be more effectively conveyed if coupled with (or fully converted to) more qualitative formats. For instance, OPower has previously included images of happy or sad faces on utility bills to help contextualize monthly energy consumption levels.
Similarly, the Nissan Leaf electric car includes an "efficiency leaves" dashboard, in which a plant image grows more lush with efficient driving and withers in light of poor mileage. These examples highlight how images are often easier to conceptually grasp than numbers about one's performance levels. It likely does not hurt that non-notational imagery is also less cognitively demanding than words and numbers, thereby permitting lower barriers for processing. Another relatively common approach to contextualizing feedback information is to leverage social comparisons. For instance, a monthly energy bill that reports consumption levels in absolute terms may be abstract and offer little motivational pull. However, the presentation of that information side-by-side with the average level in one's neighborhood can structure a sense of normative behavior, anchor an understanding of one's consumption as relatively low or high, and in turn potentially tap into a "Keeping up with the Joneses" response from the recipient that may steer future behavior. Social comparisons have been found to be a particularly powerful means for structuring information and raising compliance with prescribed behaviors (often more so than other behavior change strategies such as monetary rewards or appeals related to the collective good or one's civic duty) and are often relatively inexpensive to leverage. Over the last decade, one of the most common designs for structuring motivation and behavior change has been gamification—most essentially, the application of game elements to non-game contexts (Deterding et al. 2011). This approach, even in its most rudimentary forms, often includes quantification of performance (through points and virtual currencies), social comparisons (through leaderboards), and the inclusion of qualitative feedback (through reward badges and user avatar customizations). The underlying premise of gamification is that re-contextualizing behavior and its consequences (often through a mediated interface and virtual thematic skin) can enhance attention to and motivation for tasks too difficult or too boring to otherwise elicit similar levels of engagement. More advanced forms may also include narrative or other game elements to structure and model desired patterns of behavior or levels of performance. In other words, gamification is a potential means by which to render feedback data, provided by sensor technologies, as relevant and interpretable to users. With these points in place—sensors, feedback, and relevance conferred through easily processed visuals, attentional engagement, and social context—let us review a case study in which colleagues and I empirically tested how a gamified media intervention could potentially drive behavior change (Reeves et al. 2015).
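Before turning to that case study, here is a minimal sketch of the framing strategies just described—quantified feedback converted into an easily grasped qualitative cue plus a neighborhood comparison. The thresholds and wording are invented for illustration and are not OPower's or any utility's actual rules.

```python
# Convert a raw meter reading into qualitative and social-comparison feedback.

def frame_feedback(household_kwh, neighborhood_avg_kwh):
    ratio = household_kwh / neighborhood_avg_kwh
    if ratio <= 0.85:
        face = ":)"        # well below the local norm
    elif ratio <= 1.15:
        face = ":|"        # roughly in line with neighbors
    else:
        face = ":("        # well above the local norm
    return (f"{face} You used {household_kwh:.0f} kWh this month; "
            f"similar homes nearby averaged {neighborhood_avg_kwh:.0f} kWh.")

print(frame_feedback(650, 500))
# :( You used 650 kWh this month; similar homes nearby averaged 500 kWh.
```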
A Case Study on Sensors and Gamification: Power House
Over the previous decade, the state of California had spent billions of dollars on smart grid and smart meter technologies, based on the premise that providing consumers with direct, temporally proximate feedback on their energy consumption patterns—as opposed to monthly billing statements alone—can permit households to make wiser, less wasteful decisions regarding energy usage and savings. However, despite the availability of rich, personalized data, there was still a problem: the information was dull and the interfaces were somewhat complicated for the average resident. As a result, the incentives for engaging with the data were unclear. Notably, these feedback interfaces were presented via a web portal accessed on laptops, tablets, and smartphones—devices on which these data were in direct competition with a wide array of more alluring content. Notably, some of this competing content is exponentially more informationally dense and complex. Indeed, the rich narrative structures of binge-able "peak TV" dramas and the sensory visual stimuli and status indicators bombarding players during a World of Warcraft raid are quite cognitively demanding, all the while eliciting extremely motivated attention and high levels of enjoyment. Taking such alternatives as our cue, we tried to embed the comparably boring energy data within a more engaging user experience. Power House was a gamified system by which we examined how game elements could motivate users to interact with and act upon their household energy data. Data collected by the users' local utility company was ported into the interface, with real life energy consumption re-contextualized in light of a virtual significance. Players were able to play an interactive game in which they guided family members around a home so as to complete desired household activities by turning on and off various appliances, devices, and light switches. Player scores were contingent upon efficient energy use in the virtual home, with in-game benefits conferred based upon real world energy savings. Additionally, realistic kilowatt expenditures for each virtual activity were displayed, thereby modeling real world behaviors and consequences, through which players could implicitly learn which activities consume the most energy. Beyond the main dashboard and interactive game, the gamified experience included energy knowledge quizzes, cumulative scores extending over days of play, badges for particular real-world energy feats, and social comparison of performance and achievements with other players via a virtual neighborhood.
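The mapping from real-world savings to in-game rewards can be sketched in toy form as follows; the point values, baseline logic, and badge names here are invented for illustration and are not the actual Power House implementation.

```python
# Toy conversion of measured household savings into game points and badges.

def award_rewards(baseline_kwh, measured_kwh, points_per_kwh=10):
    saved = max(0.0, baseline_kwh - measured_kwh)   # only reward real savings
    points = int(saved * points_per_kwh)
    badges = []
    if saved >= 5:
        badges.append("Efficiency Rookie")          # hypothetical badge tiers
    if saved >= 20:
        badges.append("Kilowatt Crusher")
    return points, badges

points, badges = award_rewards(baseline_kwh=42.0, measured_kwh=35.5)
print(points, badges)    # 65 ['Efficiency Rookie']
```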
With respect to effecting real world behavior change, we found that energy consumption decreased significantly over the course of the 2-week intervention play period. However, this effect was temporary, with earlier consumption rates resuming as soon as the intervention ended. In other words, the desired effect was tied to daily gameplay and participation, only to disappear once the virtual rewards for energy conservation were removed. In tandem with the field study, we also completed a laboratory experiment. Half of the participants played the interactive Power House virtual home scenario for 10 minutes, without any integration of their personal utility account (that is, they were simply exposed to the game stimulus itself, modeling energy consumption and its consequences). The other half played a similar game with a different thematic skin, in which the player similarly frenetically clicked on the screen to navigate onscreen characters to different restaurant activities (in this case, seating patrons, taking orders, bringing food, busing tables, and the like). After the brief gameplay period, participants were then required to complete a short questionnaire. However, in a bit of deception, the experimenter announced they would need to leave early and asked participants to show themselves out after completing the questionnaire. After the session was over, the experimenter would return to the room and note which electronic devices (e.g., study computer, desk lamp, light switch, monitor) had been turned off by participants on their way out. Interestingly, those individuals that played Power House were 300% more likely to turn off the various electronic devices. During debriefing, participants playing Power House reported they had not guessed the purpose of the study or the connection between their behavior and gameplay, the implication being that the media stimulus had unconsciously primed—albeit only for the short-term—a desirable change in real-world behavior.
Motivational Design: What Is the Goal and How Should We Get There?
The results of the field and lab studies described above present some important questions regarding the goals of behavior change interventions, as well as preferences regarding the processes by which target behaviors are elicited. What exactly do we hope to be the outcome of these media-based interventions? Is it merely overt behavioral compliance?
More, how might designing for long-term behavior change compare to strategies for eliciting more immediate, short-term effects? Additionally, presuming we want individuals to maintain autonomy in any decision-making exercise, how exactly do we define that condition? Is the lack of coercion sufficient, or is conscious awareness of an effect on one's behavior required? Does a choice count as self-determined if reflexive and automatic, or is active deliberation on the part of the user (read: citizen, consumer, student, child) required? In addressing these questions, it's important to note that gamification is but one of several types of motivational design techniques. Further, not all such designs are created equal. Gamification, interface-embedded nudges, injunctive social norms, and legal policy are all possible systems for motivating and structuring behavior and decisions, yet they vary in the extent to which they guarantee compliance (short-term or long-term), allow for autonomy, and prioritize each when the two are in competition. In the following sections, we will review different media-based motivational design techniques, comparing how they deliver on these considerations of effectiveness, relative longevity of behavior change, and the role of users' reflective self-determination.
Gamification and Extrinsic Reinforcement
Many motivational designs have a chief aim of, appropriately enough, heightening users' levels of motivation. Classically, psychological literature has dichotomized motivation into two distinct types: intrinsic and extrinsic motivation. Intrinsic motivation refers to an individual's motivation to complete a task or engage in an activity because they find it to be inherently satisfying, enjoyable, or interesting. Decades of empirical work, particularly that investigating and expanding self-determination theory (SDT), suggest that across various social domains and contexts—the workplace, school, chores, entertainment and diversion—the tasks that help us to feel autonomous, competent, and related to others are likely to be experienced as intrinsically motivating (Deci and Ryan 2012). In contrast, when an individual engages in a particular behavior due to its instrumental value—it leads to an external reward or allows one to avoid a potential punishment—the person is said to be extrinsically motivated. Extrinsic motivators can be extremely powerful when it comes to tasks that a person does not intrinsically enjoy. In such cases, external rewards and punishments may reinforce the target behavior.
Notably, adding game elements such as points or badges to a boring or difficult task may lead to increases in compliance or productivity. While many gamification specialists have moved on to more elaborate considerations, including how gamified systems can specifically speak to basic psychological needs in line with SDT (Mekler et al. 2017), the majority of gamification designs are relatively crass reinforcement systems that, again, primarily rely on points or badges as rewards for particular performance levels or (often arbitrary or superficial) behavior milestones. This poses a major problem, of which gamification designers have become increasingly aware: extrinsic and intrinsic motivation are not necessarily additive. Specifically, adding a layer of extrinsic rewards or punishments—as is common gamification practice—to an inherently enjoyable or interesting activity may diminish any baseline levels of intrinsic motivation. This finding, known as the overjustification effect—the implication being that extrinsic reward structures cue individuals to perceive an activity as not intrinsically enjoyable or desirable—was first empirically established 50 years ago (Lepper, Greene, and Nisbett 1973), though it has only in recent years become a consideration of gamification practitioners. Given this potential dynamic between motivations, gamification of entertainment, learning (distinct from formal education), or other activities one finds intrinsically motivating may actually hinder compliance or lead to little net gain in target outcomes. But what about behaviors that are not likely to be experienced, at baseline, as intrinsically motivating? Indeed, various apps and sensors are meant to help individuals with dieting, managing money, committing to exercise regimens, or other practices that one may perceive as for their own good but not inherently enjoyable. In such cases, gamification design may prove successful in enhancing overall motivation. However, such solutions still may suffer other shortcomings. As was observed with Power House, desired changes in behavior were fleeting, with the intervention's effect on energy conservation disappearing once the virtual layer of reinforcement was removed. In theory, the effect could have been prolonged by extending the intervention indefinitely. However, such an approach would be costly for opt-in interventions. If a gamification solution is initially successful, the effect may be due to the novelty of the reinforcements driving usage and participation (for instance, milestone-specific badges); sustaining this requires ongoing innovation in systems and content. Particularly successful reinforcement-based designs may also be problematic for reasons beyond cost. The most well-known example of
external reinforcement of behaviors is the classic operant conditioning chamber (Skinner 1948), or "Skinner Box", by which animal subjects can be trained to push levers or avoid floor panels through rewards (e.g., food pellets, juice) and punishments (mild electric shocks). To the extent that gamification and similar motivational designs leverage this general structure—external reinforcement mechanisms meant to elicit discrete target behaviors—we may make comparisons. Slot machines, once-popular social network games (e.g., FarmVille), and modern social media feeds such as TikTok and Instagram are all designed to elicit perpetual engagement through simple repeated actions. These behaviors are conditioned through money, points, or novelty. In line with the wider literature on conditioning, "compliance" in these cases is very much dependent upon the reinforcement schedule, or how reliably a user can predict the outcome of each lever pull, click, or scroll. With Power House, we found that behavior was conditioned such that it ceased once external reinforcement was removed and the absence of further extrinsic rewards was assured. In contrast, slot machines and TikTok "work"—that is, sustain engagement—because the user doesn't know what type of outcome may present itself. In other words, the power of extrinsic reinforcement depends on the possibility (perceived, if not actual) of positive reward for one's actions. Comparisons between these arguably successful motivational designs and the Skinner Box highlight another key question: Do users enjoy these activities? Are slots, FarmVille, and TikTok fun, per se? Maybe, but likely not in a manner that many would describe as particularly meaningful. To illustrate the distinction between engagement and significance, at the height of social networking games such as FarmVille, game researcher and designer Ian Bogost created the tongue-in-cheek Cow Clicker Facebook game, in which players simply click on cartoon cows so as to earn more opportunities to click on more cows. A poignant critique of FarmVille and its ilk, Cow Clicker highlighted that successful motivational designs can also be particularly superficial experiences. The Skinner Box comparison similarly raises the question: is this level of engagement substantively autonomous? Can conditioned behavioral responses—whether clicking cows, pulling levers, or scrolling through feeds of bite-sized video content—be considered volitional? While not coerced, in the most extreme cases these behaviors may be not merely routinized, but actually reflexive and bordering on addiction (Alter 2017). Thus, when systems based on extrinsic reinforcement are particularly powerful, they may present as mindless stimulus-response schemes rather than
exercises in voluntary self-regulation. In turn, criticisms of the most successful gamification systems can take on a Huxleyan rather than an Orwellian tone, akin to Postman's (1985) critique of television in its heyday and of our "amusing ourselves to death". In sum, while potentially effective in terms of immediate compliance, particularly in the context of activities that are not intrinsically motivating, gamification and other motivational designs relying on extrinsic reinforcement may suffer from an impermanence of effects, the potential trivializing of behaviors, and an implicit obedience or lack of intentional self-regulation on the part of users. In comparison, as discussed below, other motivational design techniques may be preferred, particularly when wishing to avoid the costly nature of perpetual reinforcement and to promote user volition.
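The dependence of "compliance" on the reinforcement schedule can be made concrete with a toy simulation. The sketch below illustrates the classic partial-reinforcement extinction effect in a deliberately simplified form; it is not a model of Power House or of any platform discussed here, and every name and parameter in it is invented for illustration.

```python
import random

def actions_after_rewards_stop(reward_prob, training_trials=200, patience_factor=3, seed=1):
    """Toy illustration of the partial-reinforcement extinction effect.
    During training, the actor learns the longest run of unrewarded
    actions it normally has to tolerate; once rewards are switched off,
    it keeps acting until the current dry spell exceeds that expectation
    by `patience_factor`. All values are arbitrary and illustrative."""
    rng = random.Random(seed)
    longest_dry_run, dry = 1, 0
    for _ in range(training_trials):
        if rng.random() < reward_prob:   # this action happened to be rewarded
            dry = 0
        else:                            # this action went unrewarded
            dry += 1
            longest_dry_run = max(longest_dry_run, dry)
    # Extinction phase: no action is ever rewarded again.
    acts, dry = 0, 0
    while dry <= patience_factor * longest_dry_run:
        acts += 1
        dry += 1
    return acts

print("actions after rewards stop (reward every time):", actions_after_rewards_stop(1.0))
print("actions after rewards stop (reward 1 time in 5):", actions_after_rewards_stop(0.2))
```

Run with these illustrative settings, the fully predictable schedule extinguishes after only a handful of unrewarded actions, while the unpredictable one keeps the actor going far longer: the same logic by which a removed reward layer quickly loses its grip while a slot machine or an algorithmic feed does not.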
Mediated Nudges
Nudges, particularly those embedded into digital interfaces, represent a second motivational design technique that can be leveraged by media-based behavior interventions. Compared to external reinforcement, nudges often offer a softer-handed approach for compelling particular behaviors—what Thaler and Sunstein argue represents a "libertarian paternalism" (discussed elsewhere in this volume), in contrast to reinforcement's often explicit conditioning schemes, which may be construed as "smacks" (punishments) or "hugs" (rewards). Media-based nudges are increasingly prevalent and can yield real results, often through relatively inexpensive interface designs that leverage quirks of our brains' hardwiring ("biases and blunders", as termed by Thaler and Sunstein). Online shopping sales often present full, pre-discount prices as a means of anchoring evaluations. Food delivery apps often pre-populate tipping amounts so as to benefit from the inertia of defaults. Portals for online campaign donations may use alternate framings ("Help us build the grassroots organization it will take to win" versus "Help us defeat dirty special interest groups") so as to speak to different concerns of a voter base. Beyond such cases, however, media-based nudges may also guide interactions with and decisions about media use itself. For instance, well-timed digital nudges can influence users' social media posting habits. Users may be less likely to post certain content when, right before hitting submit, they are reminded that a post is public or are shown a random assortment of profile pictures of the individuals within their network likely to see the
post. Alternatively, notices informing users how a post is likely to be interpreted (based on a quick sentiment scoring algorithm) can curb rash or otherwise emotional posting of content. Additionally, interface cues can activate particular cognitive heuristics: in the context of social media, by displaying the number of likes, shares, or retweets, platforms can trigger appraisals of a post's credibility or value and, in turn, influence the likelihood of its being clicked (Messing and Westwood 2014). Digital nudges may be leveraged to orient and push particular types of content selection beyond social media as well. Services such as Netflix or Amazon nudge users toward certain selections through express recommendations based on decision tracking and collaborative filtering (e.g., "Because you watched …" or "Customers who bought this item also bought …"). Notably, Amazon also employs various nudging techniques for appealing to different types of users. Customers vary in terms of which informational cues they find most persuasive, and the retailer provides product details that can influence individuals differentially susceptible to anchoring (by displaying a more expensive list price next to an item's current price), social proof (by noting the number of positive and negative comments from other users, as well as providing crowd-sourced ratings), or scarcity (at times listing, in bold font, "Only X left in stock"). Some have argued that the capacity for driving desired consumer decisions would be enhanced were these services to provide recommendations or frames based not on particular ends (e.g., collaborative filtering of inventory, movies, and the like), but by tracking a specific user's behaviors across platforms and creating profiles of the persuasive susceptibilities that structure one's general decision-making across varied domains (Kaptein and Eckles 2010). Finally, in the most extreme cases, digital media-based nudges can enhance compliance with a desired behavior not just based on the inertia of defaults, social cues, or particular presentation frames, but rather by activating an availability heuristic. In this case, nudges influence judgments or even prime particular behaviors by activating underlying cognitive schemas about target phenomena. At times, this priming effect happens below the user's level of conscious perception, as was the case with the Power House lab study described earlier: participants in that study were more likely to enact energy-conservation behaviors after briefly playing a game that models such behaviors, but were not aware of this influence. Interfaces with such effects highlight the importance of how we define autonomy in a world of libertarian paternalism, drawing a distinction between enacted behavior and deliberated action: notably, nudges capitalize on mental biases and blunders, which more readily influence the former and not the latter.
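To make the mechanics of such interface-level nudges concrete, the following minimal sketch shows how a pre-submission reminder of the kind described above might be wired together. It is purely illustrative: the function names, the word list, and the threshold are invented for this example, and the toy word-count score merely stands in for whatever sentiment model a real platform might use.

```python
from typing import List, Optional

NEGATIVE_WORDS = {"hate", "stupid", "awful", "worst", "idiot"}

def crude_sentiment(post: str) -> float:
    """Very rough stand-in for a sentiment model: the share of words
    that appear on a small negative-word list (illustrative only)."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def presubmission_nudge(post: str, audience_sample: List[str]) -> Optional[str]:
    """Return a reminder message (the nudge) or None to let the post through.
    The user remains free to submit either way; only the choice
    architecture around the submit button changes."""
    if crude_sentiment(post) > 0.15:
        return "This post may read as hostile to others. Post anyway?"
    if audience_sample:
        names = ", ".join(audience_sample[:3])
        return f"Reminder: this post is public and may be seen by {names} and others."
    return None

print(presubmission_nudge("I hate this, it is the worst", ["Ana", "Bo", "Chen"]))
```

The point of the sketch is that nothing is forbidden and no reward is offered; the interface merely interposes a moment of friction and social context at the point of decision, which is what distinguishes a nudge from the reinforcement schemes discussed earlier.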
These examples together demonstrate, quite saliently, that the most effective of nudges run the risk of manipulating users. In turn, many researchers and leaders in the motivational design space have sought to articulate the nature of manipulation (Kim and Werbach 2016) and to typologize different forms of designer-user relationships with respect to such ethical considerations. Nir Eyal has notably suggested that "[i]f the innovator [of a new technology] has a clear conscience that the product materially improves people's lives—first among them, the creator's—then the only path is to push forward. Users bear ultimate responsibility for their actions and makers should not be blamed for the misuse or overuse of their products" (Eyal 2012). Arguably, placing accountability on users fails to properly account for the nature of technological nudges, which by definition target and leverage hardwired biases and irrationality of which the user may not even be aware and which the user may be incapable of regulating. In that case, should one continue with Eyal's line of reasoning, the innovator or designer's interpretation of the benefit-conferring nature of the product or service is critical. In other words, such a perspective suggests that when it comes to nudge-based motivational designs, even when collectively beneficial to both the user and the designer, this is the choice architect's world and the user is just living in it. In the best of nudging scenarios—in which the desired outcome is predictable in light of cognitive biases and indeed in the user's best interest—is this the world we want to create? Even if not tainted by manipulation and instead truly beneficial, what do we make of a world of choice architecture, one in which decisions are arguably not wholly agentic but instead reliably likely responses, designed for and governed through user data monitoring and presentation so as to reduce the differential between a user's current state and that desired by the designer? Combined with the performance tracking afforded by all manner of sensing technologies, such would be a society in which citizens, employees, students, children, and consumers modify their behaviors in a probabilistic and semi-mechanical manner, similar to the course correction of a ship's autopilot. Notably, though, in this scenario the course and destination are not selected by the crew. How might we instead leverage new media technologies in a manner that results in compliance not through mindless reinforcement
mechanisms nor by leveraging flawed cognitive wiring and heuristic processing, but instead through reflective self-determination? That is, how can media-based behavioral interventions yield intended outcomes through more desirable psychological means?
New Media as Tools for Internalization
To accomplish the above, an intervention needs to focus on minds, not behaviors—it must strive for users' acceptance of a behavior and its value, for conscious and deliberate decisions, and not merely for compliant actions. This is a process by which individuals personally accept externally stipulated goals or agendas—what psychologists call autonomous self-regulation (Reeve et al. 2008)—and not something typically included in reinforcement schedules or nudge-based designs. It is the process of internalization. Beyond the classic and relatively simplistic dichotomy of extrinsic and intrinsic motivation, psychologists have in recent decades refined the taxonomy of human motivation. Recall, intrinsic motivation relates to inherent enjoyment or interest driving engagement in a task, while extrinsic motivation is most typically conceptualized as motivation related to the instrumental value of a behavior. Some psychologists have come to reconceptualize extrinsic motivation as instead consisting of a spectrum of motivation types, varying in the degree of autonomy experienced by an individual and the relative internalization of the behavior's value, ranging from reinforced compliance to personal commitment (Ryan and Deci 2012). On the low-autonomy end of this spectrum is the external regulation of behavior. In this case, one completes an activity so as to obtain or avoid certain external consequences. This type of motivation is the type most akin to the classic definition of extrinsic motivation. Examples characterizing external regulation would be studying because one's parents will give a cash reward for each A received on a report card, or practicing the piano because one's parents will reduce screen time allowances if one does not. A slightly more autonomous form of extrinsic motivation is the introjected regulation of behavior. In this case, one completes an activity or task to avoid guilt or anxiety or to gain esteem or pride. While not intrinsically motivated, the individual is also not presented with explicit, tangible rewards or punishments. Motivation fitting this form of behavior regulation includes studying because it feels good when your parents put a
high-scoring test on the fridge, or practicing the piano to avoid possibly making an embarrassing mistake during one's upcoming recital. Even more autonomous or self-determined is the identified regulation of one's behavior, in which one completes the activity because they personally accept the value of doing so, even if it is not intrinsically enjoyable. For instance, the perspective of "I don't like it, but I know I should study because I need to understand basic math to get by in life" would be an example of the identification form of extrinsic motivation. Finally, integrated regulation of behavior comes when one has fully internalized the value of the externally-sourced motivation for a behavior, resulting in self-determined behavior with no sense of conflict. At this point, a student may complete long and difficult hours of readings and practice because they believe doing so will make them an ever more well-rounded, capable person. Notably, integrated regulation is still a form of extrinsic motivation: though completing the behavior has become a fully autonomous act (autonomous self-regulation has been achieved), it is still not intrinsic, as the behavior is completed for instrumental purposes. Through this internalization process, extrinsically regulated behaviors can become self-regulated, and in turn less likely to fade over time or with the relaxing of external pressures. Environments can be specifically designed in a manner to heighten and facilitate this process—for instance, in the context of education, classrooms and coursework can enlist particular pedagogical techniques to cultivate autonomous regulation in students (Black and Deci 2000). Such efforts are usually more nuanced and demanding than a points leaderboard or nudging users through a default system setting, as internalization is a gradual process of acceptance—in which values are entrained and attitudes oriented—rather than simply a matter of prodding behavioral compliance. However, this process may be well worth it. In comparison, nudges can indeed yield high levels of compliance but are often one-shot occurrences needing case-specific design for each decision setting. More, nudges arguably only allow for agency—the ability to take non-coerced action, albeit at times automatic or unconsciously influenced—rather than true autonomy—that is, the capacity for self-determined action. As for reinforcement systems relying on extrinsic rewards: as noted earlier, while such designs arguably provide both autonomy and compliance, each is relatively limited: compliance may be fleeting for complicated behaviors (such as effective energy conservation practices) and decision-making may be superficial and reflexive for simpler ones (say, scrolling through one's social media news feed). In contrast,
internalization—particularly at the stage of fully autonomous self-regulation—can permit greater behavior change effects in terms of magnitude and duration and, by definition, maintain autonomy in the implementation of those changes. How, then, can new media be leveraged for interventions seeking to guide users through internalization, so as to shape one's perspective on a given practice? Rather than gamification, which includes piecemeal elements of games, one approach may instead be turning to full-fledged games. Digital games house what Bogost (2007) refers to as a procedural rhetoric, which permits them to be incredibly powerful tools with respect to the internalization of externally-derived values. Through procedurally rendered depictions of cause and effect, games make implicit arguments about the way a system works. Of course, quite often the system of focus may be one of dragons and warriors or plants and zombies. However, systems depicted in digital games can just as readily reflect and reproduce arguments about a wide assortment of real-world processes, such as the effects of urban planning decisions on city growth and citizen satisfaction (SimCity), the advent and nature of human cultural evolution (Civilization), or even how best to overcome the varied obstacles and tribulations met by pioneers seeking to make their way west (The Oregon Trail). In other words, games are inherently rhetorical in a manner that can lead a player to internalize the causal arguments being depicted. Taking this a step further, games can be designed in a manner so as to be not merely educational, but overtly persuasive—that is, designers can construct games intentionally meant to serve as rhetorical artifacts, in which the arguments procedurally enacted by players intentionally reflect the particular and potentially partisan attitudes or beliefs of their creators. Bogost, along with other designers, has in recent years produced such "persuasive games" meant to articulate specific arguments about prison reform, corruption in big pharma, and unethical business practices in the fast food industry. Similarly, in 2016 the New York Times released its own online game titled The Voter Suppression Trail. Parodying the general mechanics and aesthetic of The Oregon Trail, the game interactively presents an argument about voter demographics, policies influencing ballot casting, and resultant consequences for representation and democracy. Again, the idea is that a game, designed with a particular value, claim, or message in mind, can rhetorically lead a player to understand those arguments and then plausibly integrate them into their own perspective. In turn, the effects on behavior may be longer lasting than those conferred by
gamification, and more autonomously regulated and multi-faceted than those stemming from media-based nudges. Of course, games, being interactive in nature, often require a learning curve or pre-existing skill set. In contrast, another new medium that may facilitate the internalization of external values while presenting a smaller barrier to entry is immersive storytelling. Photo and video journalism are long-established message formats that permit most individuals a natural literacy, with no training required and minimal cognitive effort (due, as noted earlier, to the ease and comprehension afforded by non-notational imagery). Immersive storytelling—including immersive journalism and entertainment narratives—permits audiences to not simply see a visual story but to actually experience it dynamically from within. Such experiences usually have users navigate pre-recorded or live real-time 360° video streams, taking the first-person vantage in the midst of the unfolding story. Users navigate camera angle through rotations of a phone screen, cursor movements on a traditional computer monitor, or, perhaps most powerfully, a head-mounted display or virtual reality headset that tracks their visual orientation passively and naturally. In terms of content, immersive storytelling—similar to its more traditional counterparts—covers a variety of narratives and themes. However, some of the most moving and lauded immersive news stories and cinematic films to date have been those that specifically put the user face-to-face with individuals experiencing plights wholly dissimilar to their own daily lives, including the victims of war-torn countries, occupants of refugee camps, and houseless populations. These immersive experiences permit a sense of spatial presence and self-location within a story, allowing for literal perspective-taking in understanding the events depicted. Additionally, compared to print, photo, and regular video, this format of storytelling affords a heightened level of social presence with—and, in turn, empathetic response to—these onscreen persons. That is, through spatial and social presence, immersive storytelling may lead audiences to more strongly identify with the experiences of others, in a way that can then perhaps influence attitudes or values related to the events portrayed.
Modeling Values, Not Measuring Behaviors
New and emerging media technologies stand to serve as immensely powerful tools for effecting behavior change. In recent years there has been a great deal of excitement around the potential for relatively new approaches
leveraging these technologies—such as sensor-based gamification and media-embedded nudges—to motivate, structure, or otherwise regulate behavior. However, as discussed above, these approaches and the psychological processes through which they operate present certain shortcomings with respect to ensuring long-term compliance, preserving autonomy, and even guaranteeing user welfare. Many would agree that when designing motivational systems for eliciting particular behaviors, the goal at hand is not compliance derived primarily through control and conditioning—as reflected, at its extreme, in Orwellian concerns surrounding modern state-run surveillance and policing systems. Similarly, most would likely agree that we as a society do not seek superficial obedience due to trivial preoccupation with novelty or egoism. Instead, if desiring lasting influence over behavior while maintaining the autonomy of those being targeted, rather than enlist strategies seeking to maximize compliance rates through new tracking technologies or bias-leveraging interfaces, designers of media-based behavior interventions may do well to focus on generating systems that facilitate the internalization of the value of a desired behavior. In that pursuit—assisting the autonomous self-regulation of desired actions or decisions—it is key for such interventions to present model behavior and associated consequences in a manner that allows the individual to identify with an argument and integrate it into their own perspective. Again, new media formats such as digital games and immersive storytelling can be powerful tools in this endeavor. However, to be certain, rhetoric and argumentation do not have to be interactive, personalized, procedural, or immersive to be effective. While these defining characteristics of emerging media technologies may be alluring, some of the best tools for instilling behavior come from much older media formats and content types. One of the most persuasive media personalities ever, Fred Rogers, shaped world views and codes of conduct for multiple generations of viewers through a unidirectional mass medium, with what are by today's standards standard-definition video and low-fidelity audio. Notably, in any given episode of his television series, Rogers walked his viewers through scenarios, decision points, and consequences in a manner strongly aligned with an autonomy-supportive perspective. This is no coincidence; an experienced and masterful educator, Rogers carefully crafted these behavior-shaping messages guided by certain behavior design principles. In testimony quoted by the US Supreme Court in a case concerning the home recording of television programming, he noted:
Very frankly, I am opposed to people being programmed by others. My whole approach in broadcasting has always been 'You are an important person just the way you are. You can make healthy decisions.' Maybe I'm going on too long, but I just feel that anything that allows a person to be more active in the control of his or her life, in a healthy way, is important. (Sony Corporation of America v. Universal City Studios, Inc., 1983)
Preserving and Leveraging What Makes Us Human
The success achieved through efforts guided by Rogers's approach, the robust and pervasive findings yielded by decades of SDT research, and the practice of designing interactive simulations not just for entertainment but for instructional and persuasive ends together highlight one key conclusion: some of the most effective psychological tools at the disposal of those designing for behavior change are actually (a) social learning through the modeling of the value of target behaviors and (b) affording individuals the opportunity to make an informed decision for themselves. One of the most established and widely applicable frameworks in all of psychology, Albert Bandura's social learning theory (SLT; Bandura 1977) suggests that we are equipped with specific cognitive capabilities. More, these capabilities—the ability to understand symbols like text and images, to observationally learn without direct experience, to reflect on the effectiveness of our actions, and, finally, to self-regulate behavior—are said to be distinctly human capacities. Put another way: being able to vicariously understand the consequences of particular actions and to modify our behavior in turn are arguably key hallmarks of what it is to be human. Notably, "humanness" can be construed along two different dimensions (see Haslam 2006 for a full review). First, characteristics like those described by SLT are what make us uniquely human, or distinct from non-human animals. Additionally, humanness also relates to our human nature, or those deep-rooted essential elements of personhood—such as emotionality, flexibility, fallibility, and individuality—that differentiate us from machines. Interactions, systems, or depictions that diminish either of these elements of a person—that is, one's unique humanness or one's human nature—are said to be, by definition, dehumanizing. Therefore, in comparing the different tools, technological and psychological, discussed throughout this chapter—ubiquitous sensors, cybernetic feedback loops, reinforcement conditioning, and the sly behavioral economics underlying mediated nudges on the one hand; social learning of
model behaviors, internalization, and autonomous self-regulation on the other—we are arguably not merely drawing comparison between the relative effectiveness of different media-based solutions for motivating behavior change. While the latter are indeed more likely to lead to stronger, more durable, more robust changes in behavior, they are also the types of approaches by which designers, parents, teachers, corporations, and governments would treat their charges—users, children, students, employees, citizens—as unique, capable individuals rather than sheep to be herded or cyborgs to be programmed. Thus, behavior change solutions that allow users to observe, deliberate, explore, reflect on, challenge, and eventually internalize the relative value of an action and its consequences may not only preserve our shared humanity but also appear to draw their power from harnessing precisely what it is that makes us human.
References
Alter, Adam. 2017. Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. New York, NY: Penguin.
Bandura, Albert, and Richard H. Walters. 1977. Social Learning Theory (Vol. 1). Englewood Cliffs, NJ: Prentice Hall.
Black, Aaron E., and Edward L. Deci. 2000. "The Effects of Instructors' Autonomy Support and Students' Autonomous Motivation on Learning Organic Chemistry: A Self-Determination Theory Perspective." Science Education 84, no. 6: 740–756.
Bogost, Ian. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
Deci, Edward L., and Richard M. Ryan. 2012. "Motivation, Personality, and Development Within Embedded Social Contexts: An Overview of Self-Determination Theory." In The Oxford Handbook of Human Motivation, edited by R. M. Ryan, 85–107. New York, NY: Oxford University Press.
Deterding, Sebastian, Dan Dixon, Rilla Khaled, and Lennart Nacke. 2011. "From Game Design Elements to Gamefulness: Defining 'Gamification'." In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments: 9–15.
Eyal, Nir. 2012. "The Art of Manipulation." TechCrunch. https://techcrunch.com/2012/07/01/the-art-of-manipulation/.
Haslam, Nick. 2006. "Dehumanization: An Integrative Review." Personality and Social Psychology Review 10, no. 3: 252–264.
Kaptein, Maurits, and Dean Eckles. 2010. "Selecting Effective Means to Any End: Futures and Ethics of Persuasion Profiling." In International Conference on Persuasive Technology: 82–93. Berlin and Heidelberg, Germany: Springer.
Kim, Tae Wan, and Kevin Werbach. 2016. "More Than Just a Game: Ethical Issues in Gamification." Ethics and Information Technology 18, no. 2: 157–173.
Lepper, Mark R., David Greene, and Richard E. Nisbett. 1973. "Undermining Children's Intrinsic Interest with Extrinsic Reward: A Test of the 'Overjustification' Hypothesis." Journal of Personality and Social Psychology 28, no. 1: 129–137.
Mekler, Elisa D., Florian Brühlmann, Alexandre N. Tuch, and Klaus Opwis. 2017. "Towards Understanding the Effects of Individual Gamification Elements on Intrinsic Motivation and Performance." Computers in Human Behavior 71: 525–534.
Messing, Solomon, and Sean J. Westwood. 2014. "Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online." Communication Research 41, no. 8: 1042–1063.
Postman, Neil. 1985. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York, NY: Penguin.
Reeve, Johnmarshall, Richard M. Ryan, Edward L. Deci, and Hyungshim Jang. 2008. "Understanding and Promoting Autonomous Self-Regulation: A Self-Determination Theory Perspective." In Motivation and Self-Regulated Learning: Theory, Research, and Applications: 223–244.
Reeves, Byron, James J. Cummings, James K. Scarborough, and Leo Yeykelis. 2015. "Increasing Energy Efficiency with Entertainment Media: An Experimental and Field Test of the Influence of a Social Game on Performance of Energy Behaviors." Environment and Behavior 47, no. 1: 102–115.
Ryan, Richard M., and Edward L. Deci. 2012. "Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions." Contemporary Educational Psychology 25, no. 1: 54–67.
Schramm, Wilbur. 1954. "How Communication Works." In The Process and Effects of Communication, edited by Wilbur Schramm, 3–26. Urbana, IL: University of Illinois Press.
Skinner, B. F. 1948. "'Superstition' in the Pigeon." Journal of Experimental Psychology 38, no. 2: 168–172.
Sony Corporation of America v. Universal City Studios, Inc. (1983) 104 S.Ct. 774. https://www.supremecourt.gov/pdfs/transcripts/1982/81-1687_01-18-1983.pdf.
Wang, Yang, Pedro Giovanni Leon, Kevin Scott, Xiaoxuan Chen, Alessandro Acquisti, and Lorrie Faith Cranor. 2013. "Privacy Nudges for Social Media: An Exploratory Facebook Study." In Proceedings of the 22nd International Conference on World Wide Web: 763–770.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: Technology Press.
Nudging, Positive and Negative, on China's Internet
Lei Guo
In early June 2020, Miao Kexin, a fifth-grader at a school in Jiangsu Province, China, jumped to her death shortly after a writing class (Chen 2020). The tragedy is still under investigation as of this writing, but one known fact is that in that writing class Miao's teacher criticized her essay—a reading response to a classic Chinese novel—for lacking "positive energy." Miao wrote:
Don't be deceived by the appearance and the hypocrisy. In today's society, some people look kind on the surface, but they are gloomy on the inside. They will use all kinds of despicable means to achieve their unspeakable purposes.
The essay was widely circulated on the Internet, along with screenshots of a conversation in a WeChat group (WeChat being one of the most popular social media applications in China) where many parents from Miao's class voiced support for the teacher. In their eyes, the writing is full of
"negative energy." That is, the expressed thoughts and attitudes run counter to the uplifting Chinese society in which they live. Many Internet users, on the other hand, contended that "positive energy" killed the girl. The case and the debate about "positive energy" became a trending topic on China's Internet for days before the "correct" use of "positive energy" quickly re-dominated the news headlines and people's lives. This is in fact one of the few times "positive energy" has been under public scrutiny in China—at least for a while. "Positive energy" (Zheng Neng Liang), a catchphrase developed under the Xi administration, has been successfully used to encourage online and offline expression in line with the ideological system of the Party-State (the Party being the Chinese Communist Party, CCP). Unlike several catchphrases promoted by China's previous political administrations that largely invited ridicule, the language revolving around positive and negative energy has been successfully incorporated into Chinese people's everyday conversation, with the general public helping to reinforce the importance of maintaining social stability on a daily basis. Miao's tragedy and the injection of "positive energy" into Chinese society illustrate a unique information environment. China's Internet has long been conceptualized as a political battlefield, where, on the one hand, the government tightly controls online speech and, on the other, the new media provide a space that facilitates grassroots activism. More recent research recognizes that China's Internet is rather ambivalent and fragmented (e.g., Han 2015; Fang and Repnikova 2018; Guo 2020). On China's Internet are state-sponsored Internet commentators labeled the "fifty-cent army" because of the allegation that they are paid a fifty-cent fee for every post. There is also a "voluntary fifty-cent army," whose members comment online to defend the regime without any monetary incentive. China's Internet is full of state-directed nationalism, but also of spontaneous nationalism among Chinese youth that may potentially threaten political stability (Liu 2006). The government's anti-rumor campaign has been found to fight against "rumored" dissident voices, while a large portion of online rumors is not political but is created and distributed mainly to attract online traffic—just as in Western democracies. Among numerous examples that depict a multi-faceted Chinese Internet, I discuss the phenomenon of a large group of Chinese Internet users voluntarily spreading "positive energy" to defend the status quo from the bottom up. Different from the "voluntary fifty-cent army," which is a
small, dedicated group of Internet users, any average Chinese citizen may partake in the production of "positive energy" online, making the trend even more remarkable. The discussion draws upon recent literature and empirical evidence that I collected from a series of focus group discussions and a national survey. I argue that the user-generated "positive energy" is a result of the government's "nudging," a way of steering people's choices without relying on formal regulations and policing (Thaler and Sunstein 2008). This chapter seeks to shed light on China's changing political and media environment, which is more complicated than the typical control-vs-resistance narrative emphasized in the earlier scholarship.
Nudging for "Positive"
In Thaler and Sunstein's (2008) conceptualization, a nudge alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. The government, for instance, can provide "nutrition facts" panels on food instead of banning junk food or fining people for eating it. Leveraging behavioral economics and social psychology, nudging can be as effective as banning or fining in encouraging pro-social behavior, while giving people the liberty to go their own way. Claimed as a "liberty-preserving" approach, nudging seems to be an ethical alternative for promoting behavioral change. However, it has been argued that some nudges are manipulative (Wilkinson 2013). Governments (or other institutions and individuals) may well have manipulative intentions and use nudges to influence people's decision-making process. Whether out of good motives or not, they may still infringe upon the target's autonomy, even if the target has the formal freedom to opt out. The Chinese government's use of nudges to direct public opinion provides an interesting case for discussing nudging and manipulation. In China, maintaining social stability has always been at the top of the government agenda. On the Internet, this means expressions that speak positively of society are encouraged, while dissenting voices are not. To this end, all Chinese political leaderships have been making great efforts to control Internet speech through carrot and stick, as well as through all kinds of nudges. Using catchphrases to steer public opinion is an example. Based on one specific nudging technique—social norm nudging—recent Chinese administrations have created several catchphrases that tap into one of the most dominant philosophies in ancient and modern China: Confucianism.
Originating from the teachings of the Chinese philosopher Confucius, Confucianism was a moral system and a way of life for the ancient Chinese, and its influence continues today in China and elsewhere. Among other things, Confucianism emphasizes interpersonal harmony, such as the harmony between the ruler and the ruled, between parent and child, and between friends. Following Confucianism, Chinese people have always been taught to embrace harmony and avoid contention (Creemers 2017b). In 2004, the Chinese government drew upon Confucianism and proposed the concept and catchphrase "Harmonious society" (He Xie She Hui). This signaled a shift in the Chinese leadership's goal from achieving unchecked economic growth to overall societal stability and harmony. Since then, the government has frequently used the term to remind Chinese citizens of the importance of preserving traditional Chinese social norms and therefore pursuing harmony rather than conflict, which they can practice in every aspect of their lives, including communicating on the Internet. Internet users, however, criticized the use of "Harmonious society" as an excuse for Internet censorship. They developed Internet slang and memes such as "River Crab," which sounds similar to the word "harmonious" in Mandarin Chinese, to mock the catchphrase (Nordin and Richaud 2014). The word "harmonious" itself can now be used as a verb meaning "to censor." As it turns out, this nudge was not quite effective. In 2012, when Xi became the head of the CCP, his administration put forward the idea of "positive energy." Like "Harmonious society," this concept also borrows from Confucianism. That is, a good Chinese citizen should spread "positive energy" to help maintain social harmony and stability and avoid distributing "negative energy" that would harm society. Unlike "Harmonious society" and other catchphrases, "positive energy" is not an officially coined term. Before 2012, people already used the term in their daily conversations to refer to optimistic attitudes without any political connotation. For example, one can use the phrase to cheer someone up: "Don't always complain, and get some positive energy!" Seeing the popularity of the term and its parallel with the idea of "Harmonious society," the Xi administration appropriated it to encourage people to talk about positive and hopeful aspects of Chinese society and politics, that is, speech that expresses nationalism, patriotism, and core socialist values. Likewise, "negative energy" may be used to refer to pessimism in general or in a political context. (See Yang and Tang 2018 for a detailed account of the term's history.) Also different from his predecessors, who focused on censoring contentious online speech, Xi has been taking a proactive
approach, asking all Chinese citizens to participate in the production of positive messages online. This may in some ways serve to distract public attention from the negative side of the government and society. Since 2012, stories and the discourse of "positive energy" have dominated Chinese traditional and social media. The concept has also been promoted through a series of national activities such as competitions to select model citizens and projects to recruit "Internet civility volunteers" who excel in spreading positive energy online (Yang 2018). Perhaps due to its unofficial origin, or because its original meaning touches upon humans' basic psychological need for positive feelings, the term "positive energy" has not received any major attacks (Yang and Tang 2018). The aforementioned case of Miao is a rare exception and, again, the debate about the meaning of the term did not last long. People keep talking about "positive energy" today, online and offline. They still often use it in its original sense and are not against its political use. Overall, "positive energy" has been well received. Unlike many other top-down approaches to enforcing digital authoritarianism, the current Chinese government subtly encourages user-generated expressions that voluntarily promote official discourse. The government nudges for "positive" and, to the leadership and many Chinese citizens, for good as well. However, do all Chinese citizens agree they should faithfully spread "positive energy"? For those who believe in the right to disagree, does the government genuinely want them not to act under the influence of a nudge? Let us revisit the conversation about nudging and manipulation. Wilkinson (2013) argued that nudgers who sincerely want people to opt out of unsuitable nudges are not imposing their will, and are thus not manipulative. In the Chinese context, it seems the intention underlying the promotion of "positive energy" remains the same as that motivating Internet censorship. Collecting data to discern how the Chinese leadership perceives its strategy regarding "positive energy" is beyond the scope of this chapter. The remainder of this chapter discusses the findings of two empirical studies that I conducted, which may shed light on the public's reaction to the term.
Study I: The "Positive" WeChat (for Chinese Elderly)
In July 2015, I conducted four focus group discussions with Chinese older adults in Shanghai, China, hoping to understand how they used WeChat (Guo 2017). Given that WeChat is a mobile-based application and thus is
more accessible than earlier computer-based social media, my original expectation was that WeChat might allow older adults to access content that is alternative and even critical relative to what they usually read and watch in the mainstream media. I did find some evidence for this hunch. However, a more predominant theme that emerged from the conversations was that the participants were very critical of the "alternativeness" of WeChat. To many of the older adults, the unofficial voices circulated on WeChat represent "negative energy." Instead, they recurrently used the phrase "transmitting positive energy" to describe their activities on WeChat. To reiterate, it was not my original intention to examine their perception of the catchphrase, so these comments were entirely unprompted. For example, in line with the government's expectation, the participants considered information related to protests to be "negative energy," which would sometimes bypass the censorship and spread on WeChat. In response, one participant said (Guo 2017, 421):
I recently received the news about that street protest in Jinshan from one of my (We)chat groups. I would just take a look at news like this and would not distribute such "negative energy" any further. I believe that the government will solve problems like this appropriately. There is no need for us to comment on this.
In addition to filtering out negative content, the older adults also actively contributed to the production of "positive energy" online through WeChat. Another participant shared:
I like reading news about what the Xi administration has done for the public and their vision for China's future. I tend to share content like this on my Moment (WeChat's social networking feature, similar to Facebook's news feed), and I often see my friends repost my stuff.
Other participants used WeChat to post nationalist or patriotic comments, as well as moral stories such as "what makes a good woman." Some of these posts are "positive" in terms of politics and society, and others are personal. The boundary between different uses of "positive energy" is unclear, but it is not important here. What is noteworthy is that the majority of the focus group participants seemed to genuinely welcome the idea of "positive energy," and they did implement it through WeChat. Some of the participants even felt a "responsibility" to transmit the political "positive energy" (the following quotation is not included in Guo 2017):
WeChat’s social networking feature, which is similar to Facebook’s news feed.
NUDGING, POSITIVE AND NEGATIVE, ON CHINA’S INTERNET
165
Negative content can be very deceptive. A lot of such information is simply rumors, perhaps originating from Western countries seeking to hurt China's social stability. Many people, especially the younger generation, can be easily deceived. You know, kids have a different worldview from ours. As parents or people of integrity, we have the responsibility to transmit "positive energy" via WeChat.
Findings from the focus group discussions provide evidence that the government's nudging is effective. Nevertheless, it may be that nudging Chinese older adults is not that hard in the first place. The average age of the 35 participants I talked with was 60 years, ranging from 51 to 71. This group of people has received decades of socialist education, and the nudge for "positive" should be well in line with their existing value systems. Then, what about the younger generation, the group who created "River Crabs" and other memes to embarrass the Party-State? The aforementioned focus group conversation also suggests that young people might be less susceptible to the nudge. With this question in mind, I conducted a follow-up study surveying the general online public in China.
Study II: A National Survey About "Positive Energy"
I conducted a two-wave Chinese national panel study during China's 2018 Two Sessions, the most important annual political event in China, through the international survey firm Survey Sample International. After data cleaning, the final analysis included a sample of 1,199 respondents who completed both waves of the survey. The sample is representative of the Chinese online population in terms of age and gender. In particular, the participants' average age was 36.1 (SD = 10.5), representing a much younger group than in the WeChat study. The survey asked a series of questions to gauge the participants' thoughts about "positive energy." Consistent with the finding from the WeChat study, the national survey also suggests that most people embraced the idea of "positive energy" and many implemented it online. The results show that 73% of the respondents agreed or strongly agreed that "citizens should use social media such as Weibo and WeChat to distribute 'positive energy' to the society." In practice, 72.5% of the respondents at least "sometimes" posted or reposted information that represents "positive
energy" for the society on Weibo, and 68.8% did so through WeChat's Moment channel. To better conceptualize the phenomenon, I also examined the relationship between Chinese citizens' informational use of social media and their thoughts and activities related to "transmitting positive energy." Rather than relying on its traditional propaganda tools alone, the Chinese government has been taking advantage of social media to promote the official discourse. In 2014, media convergence became a national strategic plan. Accordingly, the majority of Chinese traditional media organizations—including Party organs such as People's Daily and Xinhua News Agency—have set up accounts on Weibo and WeChat. In other words, social media may serve as a platform for the government to implement the nudge. Those who use social media heavily for mainstream news may be more likely to be nudged for "positive." Specifically, I consider thoughts supportive of the idea of "transmitting positive energy" to reflect Chinese mainstream citizenship norms, and beliefs in the significance of free deliberation of public policy to reflect democratic citizenship norms (Dalton 2008). Correspondingly, I define activities to promote "positive energy" online as Chinese-style online political expression, which is different from the democratic style, that is, voluntary political expression not ordered (or nudged) by a ruling class. Finally, I distinguish between using social media for consuming news and information from mainstream media and using it for accessing alternative voices. Of course, the boundary between the two types of citizenship norms, online political expression, and informational use of social media is by no means clear-cut; I make the distinction for analytical purposes only. The results suggest a dual-path online political participation model in China (Guo 2019; see Fig. 1). On the one hand, using social media for mainstream news and Chinese mainstream citizenship norms each had a significant effect on mainstream political participation—that is, online expression to transmit "positive energy." The study also suggests that, in this path, the citizenship norms did not moderate the relationship between social media use and online expression. In other words, people who embraced "positive energy" would distribute "positive energy" online regardless of their information exposure on social media. This may indicate that the government's promotion of "positive energy" elsewhere can also stimulate citizens' online behavior in support of the government agenda.
Fig. 1 The dual-path online political participation model in China (model variables: Chinese mainstream citizenship norms; using social media for mainstream news; Chinese mainstream political participation; using social media for alternative news; democratic citizenship norms; democratic political participation)
Taken together, the findings indicate that the government's nudging for "positive" has been effective among the general Chinese public. On the other hand, the second path suggests that simply because a Chinese person supports the norms of democratic political participation, this does not necessarily mean the person will practice those norms in reality. In a controlled environment, even if a citizen believes political expression is a good thing, he or she may not express it because of the potential ramifications such expression could yield. Further, the study found that using social media for alternative news led to increased democratic political participation, but only among people who embraced a low level of democratic citizenship norms (see Fig. 2). In other words, alternative voices on social media, such as user-generated self-media and independent bloggers, are particularly powerful in stimulating political expression among people who did not support the democratic norms in the first place. Among those who had already internalized the democratic citizenship norms, the more they consumed alternative news on social media, the less likely they were to express political views online. While the second path is less relevant to this chapter, it indicates that the effectiveness of the nudge for "positive" largely relies on China's unique authoritarian context.
Fig. 2 Democratic political participation as a function of using social media for alternative news and democratic citizenship norms (Note: All three variables—democratic political participation, using social media for alternative news, and democratic citizenship norms—are composite variables measured on a five-point scale)
Implications and Contributions

Nudging has been increasingly employed in different realms to influence human behavior. Of particular interest to this chapter, it has also been a major trend in policymaking around the world. Government officials in many countries, including the United States, the United Kingdom, Canada, and the Netherlands, have used nudges to implement public policies, achieving their policy goals while allegedly preserving freedom of choice (Sunstein 2016). Nudging is also not new to China. In addition to the case discussed in this chapter, Hägele (2019) provides another example, showing that the Chinese government uses nudges to implement its environmental
policies, and many of its measures have contributed to increased green awareness and participation among Chinese citizens.

Just as in the corporate world, whether to nudge or not to nudge in the public sector is also controversial. On the one hand, supporters suggest that, in essence, the motive of nudging is no different from that of traditional government regulation to change behavior. Individuals are surrounded by government regulations anyway; nudging based on behavioral science simply makes some policies more effective to implement. From the perspective of the public, research shows that strong majorities favor nudges that promote health and safety and raise no ethical complaints (Sunstein 2016). Likewise, in the Chinese context, my research findings reveal that Chinese citizens generally welcome the nudge for "positive." On the other hand, scholars have argued that nudging can be manipulative, especially in the era of Big Data, when "nudge comes to shove" thanks to the amount of information companies and governments hold about individuals and the number of channels available to target them (Sætra 2019). That is, the liberty-based argument in favor of nudging can be misleading given technology's unprecedented influence on human decisions. Indeed, when Big Brother uses nudges to advance unwanted political agendas, citizens can be all the more vulnerable because of the effectiveness of this approach.

Returning to the context of this chapter, what does it mean for Chinese citizens when the government uses not only authority but also nudges to shape public behavior? As in any other society, it is generally believed that when the government nudges for the common good, citizens make better decisions and improve their lives. Nevertheless, in the example discussed here, whether promoting "positive energy" can be deemed nudging for good is somewhat debatable. While many Chinese citizens genuinely embrace the idea, and it does prompt altruistic behavior, as my more recent research (Guo 2022) suggests, others have begun to cast doubt on it. Before any consensus has been reached, the government has added nudging to its governing toolkit; as a result, citizens are carrying it out, and many are likely unaware of the nudge. The goal here is not to make a normative judgment. Instead, this chapter provides a case study to illustrate the nuances of government nudging in a controlled society. Furthermore, nudging for "positive" on the basis of social and cultural norms presents a unique approach to nudging, contributing to our understanding of the wide applications of the concept.
Concluding Remarks

The Xi administration publicly claims that the Internet is the most important battlefield for a new "public opinion struggle" (Liu and Wang 2017, 208). Unlike previous leaderships, which saw the Internet as a source of risk, the new Internet authorities in China seek to harness the potential of the Internet as a new propaganda platform (Creemers 2017a). They do not just take action to control harmful speech but also use nudges to encourage "positive" expressions online. As I have articulated, the nudging strategy has proved useful. Ordinary Chinese citizens, young and old, are collectively producing "positive energy" online and offline. The discussion and the empirical evidence presented here suggest that China's Internet is not politically black and white, but mixed and complicated.

As of this writing, China has further tightened Internet control during the COVID-19 outbreak. Of course, "positive energy" helps in the fight against the pandemic. Recent research shows that COVID-19-related content with "positive energy" is prevalent on Chinese social media, and users deem the positivity desirable and necessary due to its positive impact on their emotions (Lu et al. 2021). Again, the original meaning of the term, referring to optimistic attitudes and positive behaviors, naturally aligns with the government's strategy to maintain a positive information environment during the crisis. Beyond pandemic-related information, "positive energy" remains prominent in China's online discourse. In October 2021, the Cyberspace Administration of China published an updated list of Internet news providers whose content may be reprinted by other sites. The notice cites President Xi's note: "Positive energy is the main goal, keeping control (of the information) is the absolute principle, (and) the effective use of it (i.e., positive energy) is the real skill" (正能量是总要求, 管得住是硬道理, 用得好是真本事). That is, the government continues to nudge for what it defines as positive behavior.

Thinking ahead, how long this "positive energy" will last is an open question. As Miao's case—the example with which this chapter began—reveals, when emphasizing "positive energy" led to unexpected outcomes, voices critiquing "positive energy" emerged, though briefly. The government may have to come up with new nudges. However, given the unique link between the official and unofficial uses of the term, as detailed above, nudging "positive energy" may or may not be replicable.
References

Chen, Y. 2020. "Trending in China: Tragedy of Fifth-Grader's Suicide Sparks Debate Over Harsh Teaching Methods." Caixin Global. https://www.caixinglobal.com/2020-06-17/trending-in-china-tragedy-of-fifth-graders-suicide-sparks-debate-over-harsh-teaching-methods-101568626.html.
Creemers, Rogier. 2017a. "Cyber China: Upgrading Propaganda, Public Opinion Work and Social Management for the Twenty-First Century." Journal of Contemporary China 26, no. 103: 85–100. https://doi.org/10.1080/10670564.2016.1206281.
———. 2017b. "Cyber-Leninism: History, Political Culture and the Internet in China." In Speech and Society in Turbulent Times: Freedom of Expression in Comparative Perspective, edited by Monroe Price and Nicole Stremlau, 255–73. Cambridge, England: Cambridge University Press.
Dalton, Russell J. 2008. "Citizenship Norms and the Expansion of Political Participation." Political Studies 56, no. 1: 76–98. https://doi.org/10.1111/j.1467-9248.2007.00718.x.
Fang, Kecheng, and Maria Repnikova. 2018. "Demystifying 'Little Pink': The Creation and Evolution of a Gendered Label for Nationalistic Activists in China." New Media & Society 20, no. 6: 2162–85.
Guo, Lei. 2017. "WeChat as a Semipublic Alternative Sphere: Exploring the Use of WeChat among Chinese Older Adults." International Journal of Communication 11: 408–428.
———. 2019. "Social Media Use for News, Citizenship Norms, and Online Political Participation: Examining a Dual-Path Participation Model in China." Washington, DC.
———. 2020. "China's 'Fake News' Problem: Exploring the Spread of Online Rumors in the Government-Controlled News Media." Digital Journalism.
———. 2022. "The Impact of Social Media on Civic Engagement in China: The Moderating Role of Citizenship Norms in the Citizen Communication Mediation Model." Journalism and Mass Communication Quarterly 99, no. 4: 980–1004.
Hägele, R. 2019. "Chapter VIII: Nudging with Chinese Characteristics: An Adapted Approach from the Global North to Achieve a Sustainable Future?" In Reassessing Chinese Politics: National System Dynamics and Global Implications, edited by Nele Noesselt, 172–199. Baden-Baden: Tectum Wissenschaftsverl.
Han, Rongbin. 2015. "Defending the Authoritarian Regime Online: China's 'Voluntary Fifty-Cent Army.'" The China Quarterly 224: 1006–25.
Liu, S. D. 2006. "China's Popular Nationalism on the Internet: Report on the 2005 Anti-Japan Network Struggles." Inter-Asia Cultural Studies 7, no. 1: 144–155. https://doi.org/10.1080/14649370500463802.
Liu, Mingfu, and Zhongyuan Wang. 2017. The Thoughts of Xi Jinping (in Chinese). American Academic Press.
Lu, Z., Y. Jiang, C. Shen, M. C. Jack, D. Wigdor, and M. Naaman. 2021. "'Positive Energy': Perceptions and Attitudes Towards COVID-19 Information on Social Media in China." Proceedings of the ACM on Human-Computer Interaction 5 (CSCW1): 1–25.
Nordin, A., and L. Richaud. 2014. "Subverting Official Language and Discourse in China? Type River Crab for Harmony." China Information 28, no. 1: 47–67.
Sætra, Henrik Skaug. 2019. "When Nudge Comes to Shove: Liberty and Nudging in the Era of Big Data." Technology in Society 59: 101130.
Sunstein, Cass. 2016. The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge, England: Cambridge University Press.
Thaler, Richard, and Cass Sunstein. 2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Wilkinson, T. Martin. 2013. "Nudging and Manipulation." Political Studies 61, no. 2: 341–55.
Yang, Guobin. 2018. "Demobilizing the Emotions of Online Activism in China: A Civilizing Process." International Journal of Communication 11: 1945–65.
Yang, Peidong, and Lijun Tang. 2018. "'Positive Energy': Hegemonic Intervention and Online Media Discourse in China's Xi Jinping Era." China: An International Journal 16, no. 1: 1–22.
Nudging Choices through Media: User Experiences and Their Ethical and Philosophical Implications for Humanity

James Katz and Elizabeth Crocker
Introduction

The term "algorithm" may have once been jargon used primarily by experts in computer science and related fields. But today the term is so commonplace that the 2021 children's film "Space Jam 2," featuring US National Basketball Association (NBA) superstar LeBron James, had a villain named Al G. Rhythm, played by Oscar-nominated actor Don Cheadle. In the movie, Al spies on humans using connected technologies such as phones, thermostats, computers, and even fax machines, and uses his intelligence to predict and manipulate humans. At one point, the nefarious algorithm reveals his desire to copy people from the "real world" into the digital one, where he would be in complete control (Lee 2021).
This example from a hit movie, drawn as it is from popular culture, spotlights several points about the way the public understands algorithms as they are applied to social media and data records. First, the broad concept is generally comprehensible to a cross-section of American audiences, and presumably international ones as well. This applies not only across demographic groups but also across age cohorts, ranging from youngsters through the middle-aged; the concept of algorithms has become ubiquitous and commonplace. Second, the trope of "machines taking over" and the possible exploitation of personal data for manipulation is also broadly understood. Though clearly recognized, and though obviously something to be resisted and feared, it is also a target of humor and satire. Third, algorithms would be the basis through which interactive nudging would be carried out, so it is appropriate to examine nudging through the lens of user perceptions of algorithms.

This prominence and perspective should come as no surprise given that numerous popular news articles detail the ways that algorithms impact our daily lives in unseen ways. A popular Mashable article from 2020, titled "12 Unexpected Ways Algorithms Control Your Life," offers examples that range from getting hired for a job to whom you might date to how much things might cost (Lekach 2020). Academic research about algorithms that has been reported in the mass media ranges from racial bias in healthcare decision algorithms (Gawronski 2019) to unequal rideshare pricing (Murphy Marcos 2022) to the amplification of moral outrage. These reports of unseen and sometimes problematic decisions and nudges paint a picture of a hapless, helpless, and manipulated public.

Doomsday predictions regarding algorithms and their influence on our behavior have also made their way to the US Congress. On December 9, 2021, the Subcommittee on Communications, Media and Broadband hosted a hearing titled "Disrupting Dangerous Algorithms: Addressing the Harms of Persuasive Technology." In this hearing, James Poulos, the Executive Editor of the American Mind, argued that "the main purpose of algorithms, like digital programs and datacenters more broadly, is not to make money or influence thoughts, but to control people—in a direct and alien way hostile to our core beliefs and principles" (Poulos 2021). Rose Jackson, the Director of the Democracy and Tech Initiative at the Digital Forensic Research Lab, added her own concerns that "while people often think of harms in this category as related to speech, solely focusing on content misses significant pieces of the puzzle.
Microtargeting tools and recommendation engines are amplifying these dangerous messages and delivering them to those most susceptible." She went on to say that, "as study after study has shown, because content that outrages leads people to keep watching and clicking, companies will keep showing harmful content to users regardless of the consequences for the users or our society" (Jackson 2021). The overall concern among the speakers at the hearing was that unregulated algorithms were a risk to autonomy, democracy, and ethics.

However, in all of this discussion and research, little attention has been paid to how users experience and feel about algorithmic nudging. We have only a glimmer of how aware users are of the ways they might be nudged towards certain behaviors and decisions through algorithmic processes. Following the tropes of popular media, hinted at above, we could expect that users might see themselves as mere pawns of Big Tech social media companies that deviously deploy algorithms to manipulate them. But it is worthwhile to ask whether they actually feel this way (and, of course, whether they are so manipulated). We also know little about how users themselves encounter and navigate the seemingly ubiquitous nudging algorithms, especially considering that such nudges are not universally negative in their outcomes but can even be deemed positively beneficial by users.

This chapter looks at how users make sense of algorithmic nudging. Based on a series of conversations with users and researchers, we explore how users perceive and react to the process of algorithmic nudges in digital media. These media include social media, Internet experiences, and both ambient and worn digital monitoring devices. We present an informal, journalistic accounting of the views and concerns that arise from algorithmic nudging. We draw on comments from the people we spoke with as a springboard for reflecting on perceived senses of individuality, feelings of humanity (which is to say, a feeling of being human), and direct and subtle influences on behavior. We also draw on the imaginarium perspective of algorithmic nudging to understand users' anxieties and to map their intellectual schemata onto a set of civil and personal concerns. Our goals are modest: we claim neither scientific certitude nor comprehensive coverage through our discussions. We believe that an investigation of this nature is an important complement to large-scale analyses of the operation of algorithms and the nudging they entail, as it allows us to gauge the human dimension of what is happening and what is at stake, at least in a microcosmic light. Such an investigation also complements the large-scale policy responses that are now taking
place in many areas of the globe. In addition to congressional hearings such as the one mentioned above, China (Huang 2021), the US (Wired Magazine 2021), and the European Union (Lomas 2021), among other governmental entities, are considering major reconfigurations of, or restrictions on, the kinds of algorithms that social media companies can deploy. As regulatory entities make decisions about algorithms, we suggest it is vital to consider the on-the-ground experiences of users and the larger ethical implications.

To gather this array of views, we spoke with sixteen people representing a variety of academic specialties as well as what might be conceived of as ordinary users (while admitting that, technically, there is no such ideal type). These interviews were conducted in late 2021 and early 2022. (We omit real interlocutor names except with permission, and not all those we spoke to are quoted.) In this chapter we interweave excerpts of their comments with observations about those comments as well as our general perspective on the framing issues concerning the human consequences of nudging technologies. The goal is to provide a landscape view of some of the ways in which users and specialists are engaging with the topic of algorithmic nudging. This is not intended to be a representative study, obviously, but rather to provide insights into new avenues for more in-depth and representative research on this topic.

We should mention that others have probed this area (for example, see Karizat et al. 2021). Some of these research initiatives have looked only at college students or small convenience samples (Karizat et al. 2021). By contrast, the most thorough exploration to date was conducted by researchers Ytre-Arne and Moe (Ytre-Arne and Moe 2021). Via a nationally representative survey of over a thousand Norwegians, they collected ideas and opinions about algorithms. By analyzing the responses to their open-ended survey, they concluded that there were five major themes arising from the way the respondents characterized algorithms. These they called folk theories of algorithms. (The term folk theory refers to the way people make sense of their experiences and develop conceptualizations and rules of thumb on that basis.) These theories were (1) algorithms are confining, (2) algorithms are practical, (3) algorithms are reductive, (4) algorithms are intangible, and (5) algorithms are exploitative (Ytre-Arne and Moe 2021). Such folk taxonomies have been derived using the views of people from various nationalities and cultures. Thus far, we are not aware of studies systematically probing cultural differences, though such a study would surely be interesting. In our case, by having journalistic-style discussions
with our respondents, we believe we were able to probe more deeply, bypassing popular media portrayals of algorithms to tap into the emotional and psychological dimensions of their perceptions. Thus, while respondents might generally endorse some of these folk theories, their relationships to algorithms were more dynamic and reflective.
Attitudes Towards Nudging

Question: Have you had recent encounters with algorithmic nudging seeking to affect your behavior? (Asked just after the US Thanksgiving holiday)
Answer by Michael Beam: We just had the Thanksgiving holiday, where you have the day after Thanksgiving Black Friday and then… Cyber Monday, so I'm getting bombarded by advertisements that are pretty spot on based on my past behaviors for the types of things I'd like, though. I try my best to keep my own email filters up, and I try to filter all sorts of commercial mail into a separate folder that I only look at when I want to. So I'm not constantly seeing [prompts]… But stuff gets by those filters and I see them.

Interviewer: Do they have any influence on you? Do they affect your behavior?

Interviewee: As the target, I guess my professional assessment regardless of what I'd like to think is yes.… I don't want to be manipulated, but at the same time, when the filters are good and they give me the information I'm looking for—and I think they're pretty good at that—I will engage in that information. If those filters hadn't nudged me toward it, I would have probably not otherwise done it, so sure. [Interview with Michael Beam, 2021]

In the above exchange, Dr. Michael Beam, the Director of the Kent State School of Emerging Media and Technology, suggests the ambivalent relationship that many interviewees had towards algorithms. As established above, many are quite aware that there are algorithms working behind the scenes to rank, recommend, and prod users towards certain activities. They deploy a variety of responses and reactions towards these
algorithms, though even experts like Dr. Beam recognize that awareness and mitigation measures can only do so much.

This attitude was reflected in conversations with non-experts as well. Generally, respondents were quick to list examples of nudges and ways they knew that algorithms played a role in the kinds of content they saw and the experiences they had with apps, social media, cell phones, and other media. An American mom who regularly uses social media told us, "I have no illusions. Social media is showing me what the algorithms feel like will make me click more and 'engage' more." She and other respondents pointed to promoted content, notifications, and content that showed up in their feeds as examples. Many of the ordinary users we interviewed also created filters and tried to reduce the degree to which they were being influenced, just as Dr. Beam did. (This observation concerning the at least moderate ability to intervene with the algorithms suggests that we were talking to the more skilled social media/Internet users.) Push notifications, alerts, and obvious advertisements were the most discussed kinds of content, especially unwanted ones.

With regard to the management of nudges, respondents reported a variety of views. A grandmother living in the Southern part of the U.S. said, "I'd say 85% are not asked for." She explained that while she was better able to block unwanted pushes and notifications on her phone, she was less successful with those that came via social media. A man who holds a postdoctoral position in the United States said, "With ones [notifications] that I've signed up to receive, it's content that I care about and don't mind the notifications. The ones pushed by the creators of those platforms feel invasive and intrusive, because sometimes it's stuff that I don't care about and most of the time it's pushing me to try and purchase services, which feels invasive on something as personal as a cell phone." This discomfort led many of the people we spoke with to discuss turning off push notifications or deleting services as a common reaction to annoying or invasive nudges. The postdoctoral researcher went on to explain, "I'm the type of person that will delete apps if they notify me too regularly through push notifications I don't want." This attempt to control unwanted nudging via deletion was commonly expressed among interviewees. For example, an American woman living in Europe said, "I indeed have several apps/social media/games that send me push notifications or nudges either inside or outside of the app. I don't find the majority of them useful so I often turn off most notifications on my phone so as not to be constantly distracted by irrelevant and
uninteresting information." Their descriptions suggest an acute awareness of, and frustration with, notifications that overtly attempt to sell to them or encourage them to spend more time on an app or platform.

These responses suggest that nudges are seen as something that must be controlled and therefore looked for and managed. In other words, this requires cognitive effort and adds a layer of awareness to the use of these digital services. Interviewees frequently suggested that there was a need to find a balance between invasive nudges and useful or benign ones. But this labor could be annoying to the point of their taking active steps to remove themselves from a service altogether. Others indicated they tried to ignore nudges, especially if they could not avoid them. For example, many used ad-blocking services that hide ads on websites, yet many websites have started requiring users to turn off such services in order to view their content. Users then have to choose either to go elsewhere or to try to ignore the ads. Yet purposefully ignoring something still requires paying enough attention to recognize it and decide that it should be ignored. Not everyone will want to put in the work to manage nudges, nor will everyone necessarily have the digital knowledge to do so effectively. Even when apps and services provide information and choices, the sheer amount of information users are expected to navigate, and even simply to read, can be overwhelming. Nicolas Mattis, a communication PhD candidate at the University of Amsterdam, noted, "It also reminds me a little bit of the [EU's General Data Protection Regulation regime regarding] cookies that we have in Europe where, every time you opened a website, you have to consent, and you can change all the options. Of course, what happens even if you're very motivated, is that, after some time, you just click okay 'cause you wanna get to the service or the website or whatever." Transparency about nudges, privacy, and algorithms was welcomed by interviewees, but managing it all was exhausting. Additionally, they all acknowledged that there were more subtle forms of nudging that they might fail to notice or simply be unable to avoid.

Interviewees had mixed attitudes towards just how immune they were to more subtle forms of nudging and how far they could circumvent it. For example, one woman who self-identified as "fairly liberal" said she recognized she had a bias, so she tried to follow conservative voices on the social media app Twitter. She estimates she follows about 4,000 accounts on Twitter altogether but said, "the algorithm knows I don't engage with those [conservative accounts] so they don't show those to me." To adjust
for this problem, she stated that, "on occasion, I'll just force Twitter to show me what's chronological so I remember what I'm missing and it's a really different experience!" This is a setting that removes personalization by forcing Twitter to show content in reverse chronological order, regardless of how popular it is or how relevant to user interests. However, the next time she opened the app, it defaulted back to the ranking the algorithm thought she'd enjoy. Even for someone very aware of the algorithmic bias that can encourage people into ideological bubbles, this highlights how hard it can be to break out of them.

Other interviewees suggested they were able to avoid the impact of advertising nudges, such as one woman who stated, "I think that most of the time I'm pretty in control of what I consume/purchase as I'm not a big consumer." She suggested that she was not influenced by ads that popped up in her social media feeds. However, she went on to say, "on certain social media apps, I get tricked into that loop of watching nonstop videos that are suggested to me." When asked to elaborate on what she meant by "tricked," she said, "oh it absolutely feels manipulative… All of a sudden while scrolling through there's a new distraction. So even if I curate a fairly solid group of accounts that I follow for specific reasons, I get roped in by the moving images or the videos that play automatically one after the other."

Often this is funny or entertaining content, but another concern from her and other interviewees was "doomscrolling." This is when a particular event or series of events in the world is distressing but users feel compelled to keep up with what is going on. Some research suggests that so-called doomscrolling (also called doomsurfing, i.e., devoting immoderate time to reading and assimilating negative news and commentary on social media) and generally engaging with angry, upsetting, and/or vitriolic content on social media can have a negative impact on mental health (Buchanan et al. 2021); other research documents the role that algorithms play in pushing users towards ever more extreme content (Ribeiro et al. 2020). Many of our interviewees were reflective about this process. One woman stated, "I know if some of this stuff impacts me, it must impact others." Another woman lamented, "I worry about the potential for manipulation or harm that algorithms and other systems of nudging create in myself and others." One interviewee likened it to rubbernecking when driving by a traffic accident. Another interviewee recognized it had a negative impact on their mental state and had to make a personal rule not to scroll on Twitter or other social media
before bed. The cumulative impact of these nudges was felt on their state of mind and their relationships with friends, colleagues, and family.

Many of our interviewees shared examples of how they felt people they knew were being negatively influenced by algorithms in ways those people did not realize. Sometimes this involved the introduction of misinformation and disinformation. One woman shared, "I know three individuals who regularly share suggestions, videos etc. that have come to them from extreme groups and these people apparently never vet any of them before sharing." Another woman suggested that she was "fairly certain that social media is largely to blame for the stark divide in the US right now. In concerns to politics but also the [COVID-19] pandemic." Some suggested they tried to intervene when misinformation was shared, but many also grew weary of fighting online and resorted to "blocking and moving on," as one person put it. However, it was not just extremist or misinformation content that was a concern. One woman shared,

I have a friend who is way down the rabbit hole. She gets lost in her phone. In any of her screens. She goes from one application to another to another, even when she's in the presence of other people. I feel like I'm losing her to a loop of easy ways to distract herself and not deal with her problems.
Such examples of children, friends, co-workers, and family members who are "lost" to their devices can feel alienating and lonely. They also create existential concerns about the self, presence, and those we love. Which dimension of connection and relationship is "real"? It can be hard for the low stimulation of an in-person gathering at a coffee shop to compete with the high stimulation of memes, cat gifs, and snarky videos. If someone is with us physically but engaging with someone else virtually, the sense of lost connection and meaning is palpable, for her and for other interviewees.

Looking at some of the longer-term potential consequences, we can address the nature of people's mental engagement with the world, and the possibly pernicious effect of nudging, no matter how well-intentioned it may be. One line of critical argument is that pervasive algorithmic nudging will sap our human will. By not having to think about things and actively make choices, instead simply responding to suggested prompts, our personalities may weaken. By weaken, we mean no longer having a forceful will, or being able to carry out internal decision-making in ways that cause
discomfort or go against the social grain. If a philosopher such as Socrates or Montaigne were to visit us and witness our contemporary reliance on clinical psychologists and therapeutic personality interventions, perhaps they would be amused. They might well ask how anyone could let someone else tell them how they needed to change psychologically. Can it truly be considered self-improvement if all of the improvements are suggested and guided by someone else?

In a roughly parallel line of reasoning, we could ask, in a world of persistent and unrelenting nudges, how many people could resist the day-in, day-out nudging not just for a day or a week, but for years? And what would be the cumulative impact on people who have been nudged into happiness, socially acceptable behavior, and avid but sustainable and green consumption? And how could societies protect against efforts to nudge people in other directions, such as towards anxiety, rejection of particular authorities, or increased consumerism? What happens to senses of collective identity and belonging if micro-targeted nudging efforts pull individuals within communities along entirely different pathways?

It may well be that, much like exercise to promote healthful physical and even biological functioning, active decision-making and engaged cognitive functioning are also important to one's mental health and condition. While there are many experiments and cross-sectional studies to support this line of argument (Di Rienzo et al. 2016), it nonetheless remains an equivocal point. If we assume for a moment that requiring people to make evaluative judgments and be proactive in their lives helps sustain various cognitive talents and functions, then it could be argued that a plethora of nudging could lead to negative mental consequences. Following this assumption, it might be that by turning our lives over to nudges, even if they are well-intentioned and otherwise salubrious, the mere fact that we have accepted these blandishments means that we will be less of ourselves. If we also attach a value judgment to this process, it could be said that from a utilitarian viewpoint the outcome is positive. From a humanitarian viewpoint, and from the standpoint of what might be considered self-actualization, the outcome should be judged negatively. Still, despite the fact that the advocates of maximum human potential might not wish for this outcome, it might be said in the utilitarian's favor that people will be happier and better off at the material level.

The above admittedly pessimistic view towards nudging is not entirely original with us, to say the least. Although far from identical, a similar moral contrast was made by Anthony Burgess in his novel A Clockwork
Orange, which was brought to life in a memorable film by Stanley Kubrick. In that story, Alex, a young man who likes performing violent deeds, is forced via behavioral conditioning to become no longer capable of taking any violent action. Alex is now a completely non-violent person. Yet the question remains whether he has changed or simply no longer has the option of being violent, no matter what he might wish, or is now even incapable of wishing as well as acting. Certainly the minister in the story who is overseeing Alex's presumed moral rehabilitation is convinced that without free will the young man cannot be considered changed. If Alex has no free will, the moral problem of Alex's essentially violent personhood remains unaddressed. The minister would argue that while the utilitarian problem of violence may have been solved, the question of the fundamental moral qualities of a human being has not. A similar case could also be made for the fraught moral dimension of nudging, which is eloquently set forth in other chapters of this volume.
Self-Actualization: The Nudged Self

Question: How well do you think these algorithms know you?

Answer: So one example would be when I was in grad school: On Sundays sometimes I would make an hour drive to Salt Lake City to a coffee shop or restaurant or brewery to work in for the day on dissertation stuff, to get out of the house. I remember numerous occasions when I would get in the car and it would suggest directions to the exact place I was heading to. It was useful, but also creepy because I really felt it acutely that I was being tracked and analyzed by an algorithm. A few times I wasn't planning on going anywhere but then I'd get a notification for directions to a place I frequently went to and it would give me the idea to go and I'd follow it. [Interview with Cooper]
One of the goals for many platforms and apps is to create algorithms with learning capabilities that can take in user activity and create tailored suggestions that will enhance experiences, thereby encouraging more frequent and longer use of the service. When users search for certain kinds of content, interact with other users, post, dislike/downvote, or simply spend more time watching or reading content, the algorithm uses that activity to craft a
personalized experience. This personalization requires that users give up information about themselves, but it can also be a way for users to benefit from good recommendations, find one another, and even tailor nudges to reach goals. Interviewees regularly mentioned algorithms recommending that they follow content creators they ended up enjoying, or the ways in which they could set reminders to exercise, eat healthily, or meditate. However, as indicated by the example Cooper shared above, when the algorithm knows you a bit too well it can be unnerving.

This is particularly true when considering the sense of self and decision-making. We generally like to think that we are in control of decisions such as where we go to socialize or what we want to wear. Yet nudges can push us towards behaviors and activities that we might not otherwise have engaged in. When his app suggested visiting somewhere he had enjoyed in the past, it was well tailored to his interests. But he might not otherwise have gone that day.

Michael Beam gives the example of music and how nudging via the popular music service Spotify is affecting musical taste and the musical and even quasi-cultural environment in which people live. Cumulative nudging can affect popular culture and the way in which millions of people engage with artistic performances and entertainment. With reference to music generally and Spotify in particular, he comments:

when you open Spotify now, you get a daily mix recommendation, a playlist recommendation, and they also recommend various podcasts for you to do, which is another way you can experience news, and so the fact that people are using the algorithms to then discover related music and then dive in, that changes the way fandom happens, I think. That also changes the way the economic model.… Music feeds culture and so if you're seeing the whole music industry change, then that's having an impact on cultural movements. [Michael Beam]
Now there is an argument in favor of nudging which goes beyond the individual benefits of making prudent choices and helpful decisions. It also goes beyond the utilitarian argument of adding up good functional outcomes. It has to do with the unintended consequences of nudging. This point is made by Michael Beam, who has studied the social consequences of new technology for news and journalism. He points to socially positive outcomes for individuals that were presumably not part of the original algorithmic design. Referring specifically to accessing news and information, he highlights some of these benefits:
In a lot of cases, these algorithms and algorithmic newsfeeds provide an avenue for people to experience more diverse topics or give people on ramps to new information that they wouldn’t have otherwise had. While those algorithms might be problematic in many ways…there’s so much opportunity—especially relative to diverse experiences and diverse information—that algorithms might be able to motivate curiosity in topics that they wouldn’t have had motivated before. And that curiosity could then create more user agency to then go explore that area that they would have never had thought of exploring prior to the nudge. [Michael Beam]
Dr. Michael Beam clearly sees some positive advantages to algorithms. They can inadvertently open up vast new vistas, perhaps in surprising ways, and contribute to user agency. In this way, algorithms can indirectly provide greater freedom and opportunity to the people who are the recipients of their nudging. Indeed, interviews with everyday users suggest this is the case. One woman noted, "I've followed a few cool people as a result of pushes," and others said they were able to reconnect with high school friends thanks to algorithmic suggestions. Others shared examples that were more profound. One mother of two shared, "I follow a lot of Black women on Twitter because I actively want to learn from them (I'm a white woman) without them expending too much effort… I think I learn a ton on Twitter about how to be a good ally, how to apologize when I make a mistake, how to be inclusive." Seeking out content on one topic also creates a cascading effect of recommendations for related topics and other content creators. Thus, she was able to expand the diversity of voices she was listening to and incorporate a wider perspective on a variety of issues.

Tensions arise, however, when platforms or services seem to be poor at both personalization and privacy. Facebook was frequently cited as a platform where advertising nudges were poorly tailored. An academic gave the example of how each year her students do a project on pharmaceuticals. Because she helps the students research scholarly articles about the topic, now, "the Facebook algorithm (though not Twitter for some reason) just thinks I have all the diseases—from a slew of random cancers, schizophrenia, Alzheimers, HIV, etc." The advertising is so disconnected from her actual interests and personal needs that it stands out. She continued, "the ads have calmed down because I'm not supporting that course anymore, but over the last few years because of my line of work, and what I Google with students, my searches have really confounded the Facebook algorithm and shown me things that are truly not relevant to
who I am." When the algorithm does not reflect back what you see in yourself, or what you want, it may create a disconnect. An interview with Dr. Judith Moeller highlights this issue. "One of the main concerns was feeling pigeonholed, being miscategorized," she said. But of even greater importance is being "under-categorized." By this she meant that her research subjects felt that only certain aspects of their personality were being picked up correctly by recommender systems, but those aspects were "not all of who they are. Especially, [the recommender system] doesn't include their aspirational self, who they want to be. The behavioral self is, maybe, not the best reflection of who we want to be, and, maybe the recommender should not only serve who we are, but also who we want to be." [Moeller]

Interviews with everyday users echoed this problem. One group conversation included participants sharing how they wished algorithmic recommendations were better because that would make certain mundane tasks much easier. One person shared that the plethora of video streaming options created a dynamic in which they felt paralyzed by so many choices. It was particularly stressful when trying to select something to watch with a partner or friend. If streaming services had better tailoring algorithms, they could be guided to new content they did not know existed but which they would enjoy. Another person shared that, as a graduate student, they felt like they had no time to keep up with fashion. Nonetheless, most of the existing sites that tried to create recommendations, such as Amazon or subscription boxes, did not have options that were well suited to academic settings. They would enjoy an option that allowed them to remove that mental load but had not yet found something that worked.

Yet some users shared that they purposefully disrupt algorithms. A grandmother shared that, "sometimes just to screw things up I'll take an online survey and give all sorts of incorrect answers. That's just for the fun of it, I doubt if it changes anything." However, she noted that, "I do find though that if you put your income at a really high number the quality of the products being thrown at me goes up," suggesting that her actions have at least some impact on the algorithm. Another woman shared that Amazon was good at providing tailored recommendations, but for things she had already purchased or was not researching for herself. "Amazon often sends me suggestions to buy something that is similar to something I just bought from them so I'd say they have a bit of work to do on their algorithm," she explained. For months
she received recommendations to purchase something she had already purchased and had no intention of ever buying again. It did make her realize that her searches were part of these algorithms. "I guess what I'm saying is that everything I research online is given a commercial value and I am swamped with suggestions that suggest I spend money for their product," she said. Recognizing that her searches were being monetized made her acutely aware of how these processes worked behind the scenes.

Disruptions break the algorithm's ability to tailor content and capture all of who someone is. However, disruption gives control back to the user, and for some that might be worth the loss. An American woman living in France shared, "What I found to be very different about social media usage here is that people often alter their information in order to protect themselves. People use fake names on FB [Facebook, now Meta]. And I mean there's the whole GDPR [General Data Protection Regulation] that actually protects people a lot better." When asked to elaborate on how the GDPR has impacted her own interactions on social media, she shared, "in the last several years I've started to post less and less on social media. I'm more aware of what my digital identity represents as it follows me throughout life." It is interesting to consider the ways that policy may impact how users on the ground reflect upon their digital self and their being-in-the-world.

This topic of creating self-perceived identities and consciously choosing to avoid them, or at least to limit the possibility that one could be perceived by others as having such an identity, is reflected in the following quote from Dr. Michael Ananny. We asked him about how he personally reacts to nudging, using the example of the algorithmic nudging that one encounters on Amazon's website. He began by pointing out that his generally reserved position towards nudges is heavily guided by the company itself. Unsurprisingly, he feels less constrained in dealing with companies of which he has a favorable opinion, and contrariwise. [It is noteworthy that well-regarded companies gain consumer trust more easily, a well-worn chestnut of the advertising industry, but worth repeating in this context of algorithmic nudging. Also, one can see in Dr. Ananny's comment the realization of the explicitly commercial dimension of the sponsored/recommended list, but also that he is open to benefiting from the "wisdom of the crowds" in that the algorithm can automatically find works of potentially great interest that he would not ordinarily have encountered.]
… It really depends on the company for me. Actually, just before our conversation, I was looking for a book that I could not quite remember the title of. So I went online and Google brought me to Amazon, where there was sponsored results related to the book. But there was also this category of people who browsed this book also browsed these other books. There were these two different forms of recommendation. I often do find stuff and think, “Oh that’s interesting, I didn’t know that.” Whereas before I was a little skeptical of those. So I both like them, and follow them, and use them, and have bought stuff based on those recommendations. [Michael Ananny]
Yet he is concerned that if he avails himself of the algorithmically offered titles—despite the fact that they may be useful—he may be presenting himself to colleagues as a parasite or puppet of Amazon.

But I do so with this nagging feeling in my mind which is—especially in academic work—where I am nervous about putting myself in a small intellectual box, a too small of a box. I don't want to be doing work that stems from reading that has been defined by this Amazon recommendation, even though they are great books and the people who write them are great people, and it's all good stuff in a way… But I worry that I just am existing in this box of recommendations. So that is a kind of anxiety that I do have. I have an ambivalence that I don't have a resolution to. I just have an awareness of this box, but also an appreciation of the box somewhat. I would say that anxiety or ambivalence probably characterizes my relationship to this stuff. [Michael Ananny]
Inadvertently Shaping the World and the User Via Algorithmic Nudges: Self-perceived Ethical Responsibility

Interestingly, many interviewees also shared that they saw nudges as something they could participate in, too. From sharing that they had voted to changing their profile photo to indicate they were vaccinated, they saw these as ways to influence their personal networks into taking related actions. One woman shared an interesting example. She began weekly Facebook posts encouraging activism in people's communities. Over the course of years, her posts gained traction and the algorithm began showing more and more of her content to her network, even if they didn't comment on or like the weekly post. She noted, "I realized I was cultivating almost a brand. Which came with its own kind of pressure! So now my Facebook feed is something that is personal but I almost think of it
differently—it's more of an advocacy tool." This interplay between user actions, the algorithm, and the impacts of social media networks can shift the ways that people consider their own responsibilities and usage. She went on to say, "I do specifically post political things to influence my Facebook network and get them to take political action. And I am consciously doing that on Facebook because it has that reach." Users aren't just at the mercy of nudges created by algorithms—they can mobilize those same systems to nudge their own networks to take actions, share concerns, or support efforts. Most studies focus on nudges as unidirectional and users as mostly passive consumers. However, this qualitative data suggests that is not the full picture.

But again, returning to Dr. Ananny's case, since Amazon is likely to be seen by the interviewee as a big and morally compromised high-tech firm, the sin of appearing to be intellectually boxed in by its algorithms is amplified. These concerns are at the immediate and perhaps visceral level. But what are the larger concerns that one has about responding to algorithmic nudges? Are there concerns that go beyond the pragmatic and impressionistic levels and reach deeper philosophical questions about the direction of society and one's roles and responsibilities within it? Once again, Michael Ananny has revelatory comments:

What worries me about being in that box is contributing to a social world that has been shaped by Amazon commercial imperatives. I know I'm probably being shown that book not through some completely transparent and well-designed algorithm that wants to show me new things. I'm subject to some logic of Amazon algorithms, and I don't know what those algorithms are. I'm highly aware that I'm subject to it, but I don't understand it. I also understand that what I'm subject to is different than what you are subject to.… I'm also worried that I will land in a "Mike specific" box, which is reproducing me. I also don't like that I'm feeding corporate America data patterns. But I recognize I'm probably losing that battle; I need to make peace with that. But also I like the aesthetic of being surprised by something new and different. [Michael Ananny]
User Evaluation of Algorithmic Nudging: A Structural Analysis

It is worth pointing out that, from our conversations, we can argue that, from an algorithm user's viewpoint, algorithms themselves can be seen as presenting nudges that are perceived by the user along a combined three-dimensional space. The first dimension or axis of perception of nudging
can be considered usefulness to the current or stimulated needs of the user. Here the nudge can range from disruptively irrelevant through neutral to highly useful. Excessive repetition of the nudge also falls into this category; that is, the repetition disturbs the routine or repose of the recipient, thus provoking irritation. A second axis can be considered novelty, ranging from the less desirable end, namely irritatingly obvious or repetitive, through to intriguing, pleasantly surprising, or amusing. The third axis by which a user perceives an algorithmic nudge is whether it represents an ego threat or reward. The threats can come in the form of disturbing familiarity with the user, suggesting a loss of privacy or reflecting knowledge about the user that is embarrassing. Other threats can loom if the algorithm's nudge seems to affiliate the user with an unwanted or negatively perceived grouping, organization, or cause. They can also include the nudge's urging of support for a negatively perceived group. On the other hand, flattering or ego-building nudges can be seen as welcome. Along this axis lies the interesting case of reminders for physical fitness activities or other habits: when the user is efficiently pursuing their own goals, these reminders can be welcome. The contrary case also obtains.

This evaluative schematic analysis serves as a springboard for future analysis. If justified through additional exploration, it could provide a useful tool for broadening the understanding of how people react to nudging choices in the media encounters that they have. It may also be seen as a useful juxtaposition to the five dimensions that arose through the research of Ytre-Arne and Moe (Ytre-Arne and Moe 2021).
Conclusion

In summary, we have had an opportunity to talk with a range of people, including some experts, concerning their perceptions of algorithmic nudging. We examined their perceptions and experiences across multiple domains and arrived at a potential analytical framework. Perhaps one of the more surprising findings is that while there is plenty to criticize about the use and potential abuse of algorithmic nudging, and we pointed to some of these issues in our chapter, these nudges are nonetheless something that many find extremely useful. Indeed, they can do much to amuse, inform, and delight just as much as they can frustrate or concern. The key question, then, is how to protect fundamental values and preserve meaningful human choice while also providing a service that can be of both quotidian and profound assistance to people.
To return to the points brought up by Ytre-Arne and Moe's work on Norwegian folk theories of algorithms, we find some similarities but also differences, particularly in terms of what we found to be a more dynamic and interactive relationship. Their study suggested that users had five primary categories into which they saw algorithms falling. These were: (1) algorithms are confining, (2) algorithms are practical, (3) algorithms are reductive, (4) algorithms are intangible, and (5) algorithms are exploitative (Ytre-Arne and Moe 2021). Many of our respondents clearly felt that algorithms were at times confining, reductive, and even exploitative. They expressed frustration that algorithms seemed to reflect only a part of themselves rather than their whole selves and thus provided incomplete or inaccurate feedback. They were also aware of the ways that algorithmic nudging can push people to consume content that is harmful or simply sucks them in with endless content that ultimately seems a fruitless waste of time. Of course, they also had positive views and saw cases of algorithms' utility. One example of a practical aspect was how algorithms can keep them on track for self-directed activities like exercise or suggest places to visit.

However, the discussions with users revealed that experiences with algorithms are often less unidirectional than the Norwegian categories suggest. None of the people we spoke with claimed to fully understand the details of how the algorithms worked, and they therefore recognized to an extent that their operation was intangible, certainly to the users. Yet they all saw algorithms as interactive tools that not only responded to their specific actions but also reciprocally influenced them. Users described a kind of dialectic dance between pushes and nudges, one which they felt they could at least partially direct, guide, and even manipulate. (Importantly, though, we hasten to add that this sense of agency was a perception rather than a demonstrated fact.) Perhaps the anthropomorphizing of algorithms in popular media and the friendly icons on apps aid the sense that everyday users are engaged in a bi-directional experience with algorithms. And perhaps the intangibility of algorithms works well with this attitude. After all, many of our interlocutors throughout the day harbor unknown aspects of intent, feelings, and goals. As with interactions with co-workers and roommates, when our dialectic dances are well attuned the result can be truly enjoyable and productive, while one full of miscommunication or crossed goals creates the opposite. This perspective also provides a framework for assessing the ways in which we can examine the role of algorithms and the self philosophically.
To return to the example of improving the self, perhaps rather than examining it as a unidirectional set of pushes from a single source, we should examine it as a waltz. While there is a lead, who directs the follow with body language and other subtle indicators, for the dance to work the follow must be in sync and communicate, too. When the communication works well the dance can feel like magic. But the follow also has the potential to back-lead, to push back, and to hijack the controls. Like dancers, our users have agency, and the dance they are creating is an act of constant improvisation. Interventions such as those presented in Congress may be important but should take this into account. In sum, we have sought to complement other analyses in this volume, especially those that address free will and autonomy, by discussing users' sense-making when they are confronted with algorithmic nudges.
References
"Americans Need a Bill of Rights for an AI-Powered World." Wired Magazine, October 8, 2021. https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/.
Buchanan, Kathryn, Lara B. Aknin, Shaaba Lotun, and Gillian M. Sandstrom. "Brief Exposure to Social Media during the COVID-19 Pandemic: Doom-Scrolling Has Negative Emotional Consequences, but Kindness-Scrolling Does Not." PLOS ONE 16, no. 10 (October 13, 2021): e0257728. https://doi.org/10.1371/journal.pone.0257728.
Di Rienzo, F., Debarnot, U., Daligault, S., Saruco, E., Delpuech, C., Doyon, J., et al. (2016). Online and offline performance gains following motor imagery practice: a comprehensive review of behavioral and neuroimaging studies. Front. Hum. Neurosci. 10:315. https://doi.org/10.3389/fnhum.2016.00315.
Gawronski, Quinn. "Racial Bias Found in Widely Used Health Care Algorithm." NBC News, November 6, 2019. https://www.nbcnews.com/news/nbcblk/racial-bias-found-widely-used-health-care-algorithm-n1076436.
Huang, Zheping. "China Plans Control of Tech Algorithms U.S. Can Only Dream Of." Bloomberg, August 20, 2021. https://www.bloomberg.com/news/articles/2021-08-27/china-plans-control-of-tech-algorithms-u-s-can-only-dream-of.
Jackson, Rose. Disrupting Dangerous Algorithms: Addressing the Harms of Persuasive Technology (2021). https://www.commerce.senate.gov/2021/12/commerce-committee-announces-algorithms-hearing-on-december-9-2021?et_rid=330008163&et_cid=4024840.
Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. 5(CSCW2). https://doi.org/10.1145/3476046.
Lee, Malcolm. Space Jam: A New Legacy. Comedy. Warner Bros Pictures, 2021.
Lekach, Sasha. "12 Unexpected Ways Algorithms Control Your Life." Mashable, September 3, 2020. https://mashable.com/article/how-algorithms-control-your-life.
Lomas, Natasha. "Europe Lays out Plan for Risk-Based AI Rules to Boost Trust and Uptake." TechCrunch, April 21, 2021. https://techcrunch.com/2021/04/21/europe-lays-out-plan-for-risk-based-ai-rules-to-boost-trust-and-uptake/.
Murphy Marcos, Coral. "Was Your Uber, Lyft Fare High Because of Algorithm Bias?" USA Today, July 20, 2022, sec. Tech. https://www.usatoday.com/story/tech/2020/07/22/uber-lyft-algorithms-discriminate-charge-more-non-white-areas/5481950002/.
Poulos, James. "Disrupting Dangerous Algorithms: Addressing the Harms of Persuasive Technology" (2021). https://www.commerce.senate.gov/2021/12/commerce-committee-announces-algorithms-hearing-on-december-9-2021?et_rid=330008163&et_cid=4024840.
Ribeiro, Manoel Horta, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira. 2020. "Auditing Radicalization Pathways on YouTube." In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131–141. FAT* '20. Barcelona, Spain: Association for Computing Machinery.
Ytre-Arne, Brita & Hallvard Moe (2021). Folk Theories of Algorithms: Understanding Digital Irritation. Media, Culture & Society 43(5), 813–814.
Building Compliance, Manufacturing Nudges: The Complicated Trade-offs of Advertising Professionals Facing the GDPR Thomas Beauvisage and Kevin Mellet
Introduction In the spring of 2018, new dialog boxes popped up all over the web, asking European web users, in various formats and terms, for permission to collect their personal data (mainly in the form of cookies, hence their labelling as 'cookie banners'). These interfaces offer choices: accepting or refusing cookies, or managing personal data collection and usage. But most of these interfaces are designed to secure a positive signal, a consent, on the part of the user. They are a typical instance of nudges,
T. Beauvisage Orange Labs, SENSE, Châtillon, France e-mail: [email protected] K. Mellet (*) Sciences Po, Center for the Sociology of Organizations, CNRS, Paris, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Katz et al. (eds.), Nudging Choices Through Media, https://doi.org/10.1007/978-3-031-26568-6_10
or even dark patterns, that is, intentionally misleading or unbalanced interfaces relying on low-level cognitive strategies intended to influence the supposedly free choice of Internet users. Many empirical studies emphasize the massive and systemic nature of these practices (Kampanos and Shahandashti 2021; Krisam et al. 2021; Mathur et al. 2021; Santos et al. 2019). These interfaces are the direct consequence of the General Data Protection Regulation (GDPR), ratified in April 2016 and in force since May 2018. The text is intended to cover all the issues related to the protection of personal data: its main ambition is to moralize the practices related to the collection and processing of personal data (Albrecht 2016) after long years of controversy and disputes. To this end, the regulation grants a central place to the consent of individuals prior to the collection and use of personal data. In the field of online advertising, the GDPR marks a break with established conventions and practices. Before the implementation of this framework in 2018, most personal data collection in this field took place with only the tacit consent of the user. However, while the GDPR clearly establishes the need to obtain "free, specific, informed and unambiguous" consent for advertising, it does not explain the concrete conditions for its implementation. Experienced as a threat by online advertising players, the obligation to obtain consent was the subject of a series of translations, interpretations, and transpositions during the first year of the GDPR. At the heart of these translations, consent interfaces have become the focal point and the place of crystallization of a series of normative, infrastructural, cognitive, and moral issues that shape the scope and effective practice of personal data protection. By studying the organizational processes and decisions leading to the development of consent interfaces, we have the opportunity to examine the generative side of nudging. In this specific case, consent interfaces incorporate arbitrations between legal, economic, and ergonomic rationalities: we show that nudges do not emanate from a malicious, unequivocal intention, but rather from tensions between contradictory rationalities and moralities. This is what we examine in this chapter. The first two sections of the chapter present our approach and give a brief reminder of the emergence of regulation by consent in the field of online advertising. In the third section, we describe the process of constructing conformity and the result of
this process. The fourth section questions the moral scope of the model of regulation by consent and more generally of the integration of moral concerns in the market.
Our Approach Between April and July 2019, we conducted 15 interviews with professionals involved in bringing their companies into compliance with the GDPR. They work in organizations that occupy three positions in the online advertising value chain. Publishers of websites and mobile applications (n = 6) are in direct contact with users; they are responsible for designing and managing consent collection interfaces, for themselves and for their partners (or "vendors"), who constitute the second category of actors (n = 4). The latter occupy a position of intermediaries in the online advertising market. We also interviewed four managers of Consent Management Platforms (CMPs), which market software solutions for collecting and storing consent for publishers; this new category of economic actor is a direct consequence of the GDPR. The diagram below situates publishers, advertising partners, and CMPs in the simplified advertising value chain (Fig. 1). The second empirical material on which this study is based is a corpus of banners. We designed a tool capable of automatically collecting an image copy of a web page (a minimal sketch of such a collection step is given below). We applied this tool to the home pages of the 500 most visited sites in France according to the ranking of the company Alexa, in March 2019. This dataset was then analyzed manually to measure the prevalence and visibility of consent interfaces and the choices offered on the first screen, a method that leaves aside the secondary interfaces (settings and detailed information, cookie and privacy policies).
Fig. 1 Simplified online advertising value chain: advertisers and agencies → advertising intermediaries (vendors) → publishers → Internet users, with Consent Management Platforms serving publishers
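As a rough illustration of the collection step mentioned above, the following minimal sketch shows how such a corpus of homepage screenshots could be gathered. It is our own reconstruction for illustration only: the chapter does not specify the authors' tooling, and the Playwright library, the file-naming scheme, and the timeout used here are assumptions, not theirs.

from pathlib import Path
from playwright.sync_api import sync_playwright

def capture_homepages(urls, out_dir="banner_corpus"):
    # Store one screenshot per homepage for later manual coding of banners.
    Path(out_dir).mkdir(exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        for url in urls:
            name = url.replace("https://", "").replace("http://", "").replace("/", "_")
            try:
                page.goto(url, timeout=30_000)       # give each site up to 30 seconds
                page.screenshot(path=f"{out_dir}/{name}.png")
            except Exception as err:                 # unreachable sites are simply skipped
                print(f"skipped {url}: {err}")
        browser.close()

capture_homepages(["https://www.example.org", "https://www.example.com"])

Each stored image can then be inspected by hand, as the authors did, to code the choice options and the visibility of the banner.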
Regulating Advertising Through Consent In Europe, legislation on personal data was put in place from the mid-1990s. The 1995 directive on the protection of personal data (95/46/EC) and article 8 of the Charter of Fundamental Rights of the European Union establish the protection of personal data as a fundamental right. The so-called ePrivacy Directive, adopted in 2002 and amended in 2009 (2009/136/EC), directly targets the online advertising sector, requiring that the use of personal data in this sector be based on consent—taking up a request formulated by associations for the defense of individual freedoms and privacy (Christou and Rashid 2021). In practice, the entry into force of this European directive, in 2011, led many websites to add a discreet banner informing users that continuing to browse constitutes consent to the collection of personal information in the form of cookies. As Jockum Hildén (2019) shows, this first attempt to regulate online advertising through consent was perceived as a failure, both by the legislator and by privacy associations: consent was defined too vaguely, and the penalties were not significant. In his investigation into the development of the GDPR, Hildén underlines the importance, in the eyes of the legislator, of correcting these shortcomings. The GDPR thus strives to define the criteria characterizing valid consent, first and foremost its "explicit" nature. In addition, it introduces significant penalties—up to 4% of global turnover—in the event of non-compliance with the regulations. Voted in April 2016 and entered into force on May 25, 2018, the GDPR offers a regulatory model that actively and directly involves individuals, which represents a novelty for advertising, a field more accustomed to dealing with consumer representatives than with consumers themselves. The regulation thus established is clearly part of a liberal perspective of market extension and individual accountability, while recognizing the fundamentally asymmetrical nature of market relations (to the detriment of consumers) and the need to put in place mechanisms to rebalance these relations by confirming or extending individual rights: consent, the right to information and access to data, rectification and erasure, and the right to object. Respect for these rights should compensate for the imperfections and excesses of market practices (predation, abusive exploitation, misleading and fallacious practices, etc.) and contribute to moralizing the market.
The Crafting of Consent Interfaces For online publishers and advertising vendors, GDPR compliance is built in two distinct arenas. The first compliance arena is managed by advertising vendors, within a professional association (the Interactive Advertising Bureau, or IAB), and has led to the publication of a set of standards and technical specifications: the Transparency and Consent Framework, or TCF. It interests us only indirectly here, because it does not concern the design of nudging interfaces but rather the technical and logistical conditions ensuring the proper circulation of consents in the complex and sophisticated infrastructure of online advertising (Mellet and Beauvisage 2020). In fact, the TCF is mainly focused on the circulation of consent and sets the issue of producing consent aside (it just ensures that a consent given to a publisher also applies to its countless vendors). Thus, it is up to the publishers of websites and mobile applications, in direct contact with Internet users, to bear the heavy and difficult task of designing consent collection interfaces, at the risk of non-compliance. Website publishers are responsible for the conditions for obtaining consent, under strong constraint: neither the regulations nor the Data Protection Authority (the CNIL in France) specify exactly how, in practice, to produce consent defined as "any freely given, specific, informed and unambiguous indication" (article 4), answering a question presented "in an intelligible and easily accessible form, using clear and plain language" (article 7). In addition, publishers are under strong pressure from advertising intermediaries, who are waiting for consent to feed their advertising value chain. Our survey shows that publishers only engaged in the development of these interfaces after May 2018. They did so in a dispersed, relatively isolated manner. Analyzing our landing page dataset provides a good quantitative insight into publisher banner practices. Based on the visual examination of the 450 sites actually accessible, we assessed two elements of the first interfaces: the choice options and the visibility of the banners. The first observation is that 32% of the sites examined do not include consent or cookie-related interfaces. These sites include e-merchants (Amazon, Ikea, etc.), pornographic sites, a few major web players (Whatsapp, Apple), and sites notably committed to user privacy (Qwant, Mozilla, Wikipedia). If, for the latter, we can assume that they simply do not collect
personal data, for the others it is likely that the sites are not in compliance with the law. The second result is that most consent banners make it much easier to agree than to refuse. In the most extreme case, 28% of banners leave no choice to the Internet user: only a cross to close the consent interface is offered, often accompanied by a text stipulating that continuing to browse implies consent; this format corresponds to the interface model that spread after the ePrivacy directive came into force in 2011. For the 72% of sites offering a choice, a great disparity in presentations and formulations ("OK", "I understand", "I accept", "Accept", etc.) complicates the apprehension of each of these interfaces by increasing their attentional cost. Above all, only 7 of the 306 sites (2%) offer a refusal button next to the acceptance button, and even in this case the refusal button is less visible than the other (typically, white on a white background for one, and a colored background for the other). The third result of the analysis of the corpus of home pages is the very great heterogeneity of the interfaces, which increases the cognitive cost for the Internet user. First of all, on the 306 sites displaying a consent banner, the visibility is very variable (Fig. 2).
Fig. 2 Banner visibility coding (highlighted in red): low visibility 23%, medium visibility 50%, high visibility 15%, full-page (blocking) interface 12%
The consent interfaces are not very visible
on a quarter of the sites, moderately visible on half of the corpus, and a last quarter of the sites have chosen to make them very visible, through a full page, blocking, banner. For publishers, the implementation of consent collection interfaces is not a straightforward operation, it is the result of arbitration, compromise, decisions. As the analyses within the field of Law and Society1 have shown, the activity of interpreting the law is not like an operation of “neutral” application, but an activity leading to the convergence and collision of various normative registers. This “legal pluralism” is particularly visible when the law must be articulated with other bodies of rules—technical norms, management standards, etc.—which is the case with online advertising. Consent banners are the stabilized result of a combination of exogenous regulatory imperatives, and endogenous logics of maximizing the collection of consents, and integration into the commercial infrastructure of online advertising. When configuring interfaces, publishers are confronted with three normative orders that they find difficult to reconcile: the regulatory interest, represented by legal advisers and legal intermediaries within the organizations themselves (corporate lawyers, and data protection officers, DPO, whose function is established by the GDPR); the commercial interest, generally represented by the sales and marketing teams; and the interest of the user, represented by the developers and graphic designers in charge of the ergonomics of the sites and the design of the user experience (or UX design). In this balance of power, our investigation shows the primacy of commercial interests in these arbitrations. The attention of publishers is mainly focused on a key metric, systematically mentioned during our interviews: the consent rate, i.e., the proportion of Internet users giving their consent. For publishers deriving the majority of their income from advertising, any drop in the consent rate translates into a drop in income: “There are even companies that largely prefer to pay 4% of turnover [the
1 The Law and Society perspective shows that the legal regulations in force are the product of a certain interpretation of the legal rules of reference, operated by the organization and the actors that compose it (Edelman and Suchman 1997). Lauren Edelman’s theory of endogenous law starts from the existence of “inevitable ambiguities” of any legal rule resulting, following the interpretation carried out to implement it, in forging compliance (Edelman 2007). It leads to looking at the activity of interpreting the law, not as a “neutral” application operation, but as an activity leading to the meeting of various normative registers (Bessy et al. 2011).
maximum amount of foreseen sanctions], rather than losing 80% of the consents. The calculation is done quite quickly”. (Vendor 1, Manager).
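To see why the interviewee's calculation is "done quite quickly", a toy comparison may help. The figures below are our own illustrative assumptions, not data from the study: they simply contrast the revenue that would be lost if a stricter banner cut consents sharply with the theoretical maximum sanction of 4% of turnover.

turnover = 100_000_000        # annual turnover in euros (assumed for illustration)
ad_share = 0.80               # share of turnover that depends on consented data (assumed)
consent_drop = 0.80           # share of consents lost with a stricter banner (the "80%" quoted above)
max_fine = 0.04 * turnover    # maximum GDPR sanction: 4% of turnover

revenue_at_risk = turnover * ad_share * consent_drop
print(f"revenue at risk: {revenue_at_risk:,.0f} EUR; maximum fine: {max_fine:,.0f} EUR")
# With these assumptions, 64,000,000 EUR of revenue is at risk against a
# maximum fine of 4,000,000 EUR, which is the asymmetry the vendor describes.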
The Moral Embeddedness of Nudges One might be tempted to interpret the supremacy of economic interests as being the strict reflection of a maximizing economic rationality in a context of insufficient enforcement of the law. This interpretation is correct, but needs to be supplemented, insofar as it is embedded in various moral justifications. The interviews thus reflect the embarrassment of professionals in search of a position of balance between contradictory interests: maximizing the consent rate while preserving the legal markers of conformity (uncertain, in the absence of clear recommendations from the regulatory authority), and considerations for the interest of users. First, compliance corresponds for them to this point of balance between consent that is not too vitiated, the legitimacy of economic interests, and the risk of sanction: In my humble opinion, today we are compliant, but we can do much better. That is to say that there are things to optimize […], there are things on which we can put more effort. For example, unchecking boxes. Today we are checked by default. That can be considered a limit. (Publisher 2, chief technical officer)
In a second register of discourse on consent, most interviewees counter with a justification of the commercial use of personal data, emphasizing the need for advertising revenue (and, according to them, for user data collection) in the digital economy, and more broadly presenting it as a form of equity: "I don't think we realize what an Internet would be like without ads, it would be much more closed, it would be limited to an elite who could pay for it" (Vendor 2, Product Manager). Several respondents thus highlighted other moral principles, pertaining to justice and equity, that compete with that of privacy: the importance of advertising as a means of financing content, or the virtuous nature of the "open" programmatic advertising market as opposed to the closed "walled gardens" of Google and Facebook in particular. The moral norm relating to the protection of privacy is thus hybridized with those of freedom of expression, fair competition between firms, or equality before the law.
Finally, professionals systematically raise a question: what if this regulation, materialized by consent banners, were experienced as an obstacle by Internet users themselves? As part of a liberal and contractualist tradition, the GDPR makes a double assumption. First, to regulate the data economy, the consumer must be brought into the game of the market—not as a participant in the market exchange but because the possibility of exchanges depends on her goodwill. Secondly, this means that, endowed with new rights, the Internet user would act as a "free and enlightened" citizen, as imagined by the regulations. But is that really the case? Two figures of the Internet user collide in the process of designing interfaces. The user-as-a-clicker, who does or does not authorize the collection and processing of data, is the main object of website publishers' attention, and it is essentially through the click-through rate that she comes into existence. The user-as-a-citizen, on the contrary, remains an enigmatic figure: does she understand what is done with her data? Is she interested in it? To what extent should we explain to her how online advertising works? Little information exists on the point of view of Internet users. Surveys typically portray Internet users as concerned with their privacy in general, among whom only a minority, between a quarter and a third according to the sources, declare that they take the trouble to bear the additional cost of the clicks necessary to refuse the advertising use of their personal data. Publishers, for their part, find themselves forced to interpret the effects of the variations made to designs in order to build their own representation of the interest and understanding of Internet users. Several respondents thus expressed doubts about the way in which Internet users understand and perceive these interfaces, which intervene in their browsing and create a screen between them and the pages they are trying to access. For one interviewed marketing manager, what some consider to be a misleading design actually results from a desire not to over-solicit the user, in other words to offer her "the best experience" by making choices for her. The horizon of simplicity and transparency for the user points to a major problem of personal data protection and privacy policies, described by Helen Nissenbaum as the "paradox of transparency", under which the more detailed the privacy policies of digital services are, the less they are read and understood by users. For activist Richard Stallman, "to restore the right to privacy, we must stop surveillance before it even comes to ask our consent".
Conclusion Regulating through consent is not restricted to a top-down enforcement of clear-cut rules by regulatory authorities: banners, by the discomfort they cause and by the background investments they require, materialize and delimit a new space of concerted action. They put at the heart of the advertising marketplace, and at the interface of professionals and consumers/citizens, the question of personal data protection. They are the last link in a long compliance process involving many players, and not just the regulatory authority and the lawyers of the concerned companies. This process of compliance is characterized by uncertainty, ambiguity, and the confrontation of contradictory norms and interests: the rules of law, business, and user experience. Thus, consent banners can be considered, at least in the field of personal data protection, as impossible designs seeking to integrate contradictory objectives and moralities. However, exhibiting these contradictions is part of their role. To regulate the use of personal data by the online advertising industry, public authorities operate as they did in the fields of nutrition (Frohlich 2017) or sustainable consumption (Dubuisson-Quellier 2017), by relying on the intersecting interests of consumers and firms, by nudging the stakeholders, and by creating the conditions for their alignment.
References
Albrecht J.P., 2016, "How the GDPR will change the world", European Data Protection Law Review, 2, p. 287.
Bessy C., Delpeuch T., Pélisse J., 2011, Droit et régulations des activités économiques : perspectives sociologiques et institutionnalistes, Paris, LGDJ.
Christou G., Rashid I., 2021, "Interest group lobbying in the European Union: privacy, data protection and the right to be forgotten", Comparative European Politics, 19, 3, pp. 380–400.
Dubuisson-Quellier S., 2017, "Capture as a lever for public intervention in the economy", Revue française de sociologie, 58, 3, pp. 475–499.
Edelman L., 2007, "Overlapping fields and constructed legalities: The endogeneity of law", World Scientific Book Chapters, pp. 55–90.
Edelman L., Suchman M., 1997, "The legal environments of organizations", Annual Review of Sociology, 23, 1, pp. 479–515.
Frohlich X., 2017, "The informational turn in food politics: The US FDA's nutrition label as information infrastructure", Social Studies of Science, 47, 2, pp. 145–171.
Hildén J., 2019, The Politics of Datafication: The influence of lobbyists on the EU's data protection reform and its consequences for the legitimacy of the General Data Protection Regulation, Doctoral Dissertation, University of Helsinki.
Kampanos G., Shahandashti S.F., 2021, "Accept All: The Landscape of Cookie Banners in Greece and the UK", arXiv:2104.05750 [cs].
Krisam C., Dietmann H., Volkamer M., Kulyk O., 2021, "Dark Patterns in the Wild: Review of Cookie Disclaimer Designs on Top 500 German Websites", European Symposium on Usable Security 2021, pp. 1–8.
Mathur A., Kshirsagar M., Mayer J., 2021, "What Makes a Dark Pattern… Dark?: Design Attributes, Normative Considerations, and Measurement Methods", Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–18.
Mellet K., Beauvisage T., 2020, "Cookie monsters. Anatomy of a digital market infrastructure", Consumption, Markets and Culture, 23, 2, pp. 110–129.
Santos C., Bielova N., Matte C., 2019, "Are cookie banners indeed compliant with the law? Deciphering EU legal requirements on consent and technical means to verify compliance of cookie banners", arXiv preprint arXiv:1912.07144.
The Emergence of the ‘Cy-Mind’ through Human-Computer Interaction Richard Harper
Introduction One of the reasons computer systems cause so much excitement and controversy is that they represent what they compute (Agre 2008). These representations say something about us, our society, and what we want (or expect) our technology to do (Amoore 2020; Frischmann 2018). It is no wonder, therefore, that the past few years have seen huge interest in the remarkable developments within machine learning and associated techniques that are enabling what has come to be called the New AI. The New AI is said not only to reflect but to supplement human reasoning, even to substitute for it in some contexts, with its powers amply demonstrated in the capacity of AI machines to beat humans at rule-based activities that can become incredibly complex, such as the game 'Go'. In this game, it is not the rules that are complex so much as the choices that they allow. It is AI that can now determine the best 'stratagems' to win—its powers are simply beyond a human's. In the longer term, AI will be at the heart of
R. Harper (*) Lancaster University, Lancaster, UK e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Katz et al. (eds.), Nudging Choices Through Media, https://doi.org/10.1007/978-3-031-26568-6_11
self-driving cars, human-less factories, and service industries 'populated' by artificial assistants. What we will be is bound to what AI will itself become. Many of these claims are hyperbole; some are simply overexcited. Delight in advances in the field of AI is justified, but this does not mean it is easy to define the wider implications of these advances.1 What makes AI so unsettling and different, in our mind, is not how the powers of AI alter what a computer (or computers) might do; it is how both parties, the human (the person who becomes a user) and the AI-enabled computer, can co-evolve. In this view, each is somehow constituted in that interaction. As a person 'acts' with AI, so the AI alters its data, and this alters what the person can do. Their resulting next acts create new data for the AI. And so on, and so forth. In short, both change through interaction. But this means that 'the user', the name applied to a fixed entity, a stable expression of an individuated human purpose, now looks dated. 'Their' purposes are now co-constituted with AI. The old-fashioned user, if one can put it that way, is being replaced by something new: a phenomenon that is jointly defined by human action and 'machinic intelligence' in ways that are dynamic and altering, and which recasts what both parties do in their interaction with each other—the cyborg combines with the human brain, making something new—a 'cy-mind', if you like, distributed between flesh and silicon. There are many implications that follow, some already mentioned: 'who' or 'what' should do the learning in an AI society?; 'who' or 'what' gets the awards that intellectual work leads to?; and so on. What we are concerned with is how, though the 'user' might be a joint product, the way that is experienced by the actual human in what one might call the 'dance with AI' might not adequately foreground this.2 This is because, in part, key 'solutions' that the discipline known as Human Computer Interaction or HCI provided for human-computer interaction years 1 One source of these complaints has to do with some basic notions, like what a concept might be. The AI community is often criticised for its understanding of such things as 'concepts', fundamental to understanding human affairs; see (Shanker 1998), especially 185–249. But we think it is better to say that the AI community treats concepts to do with human reason in ways that reflect the engineering concerns of AI, and not, say, philosophical concerns with the relationship between concepts and action. As a result, these debates (and the criticism they convey) often seem to entail sides talking past each other. Agre (2008), though old, is good on this. From within AI, see (Marcus and Davis 2019). 2 This is contra some of the views in HCI which argue that the human is disappearing altogether, to be replaced by the 'post-user' (Baumer and Brubaker 2017).
ago are still operative in current systems and interfaces, including AI ones. It is these that frame the dance between the human and the technology. This obscures the emergence of the cy-mind mentioned above, the joint product of both the human 'as user' and the AI. Indeed, the 'jointly created user', this cy-mind, is regularly instantiated through what can best be described as arcane interactional forms, many of which derive from Xerox PARC and hence are at least forty years old, and which will never allow this mind to show itself. Though many in HCI thought that these old modes of interaction would disappear with the passing of time and perhaps be replaced with ones more apposite for new technologies like AI (Rogers 2009), the opposite seems to have happened: they are more deeply entangled in the interaction between persons and machines than ever, though many of these entanglements have nothing to do with the 'old HCI' and more to do with 'data-driven' possibilities in today's technological landscape. The point is that, in the Age of AI, people are interacting with computers as if in the Age of the Beanbag. This is an allusion, of course, to one of the emblematic features of Xerox PARC, the beanbags spread around its shared spaces.
Looking Back to What Has Become Users and WIMP Machines To understand this we begin with some history—the history of HCI. This is rich and complex, an evolution of techniques from psychology and social science (including naturalistic approaches) that has allowed the exploration and shaping of the interactional processes we see today. What is sure is that HCI might have taken a different path than it has; indeed, it might have taken many, and not just one. What is definitely apparent is that the interactive computer systems we use today were not a given in the past; what was expected has not always happened.3 When the authors of Being Human: HCI in 2020 were writing (Harper et al. 2008), they were convinced, for example, that what is called the WIMP interface (and the premises that underscored this mode of interaction) would not apply a decade or more later. A different future beckoned. But this has not 3 As with AI, there is an immense literature and we certainly do not propose to review it all. If it was the Dartmouth Conference that started AI, one might say it was research at PARC that started HCI, and this was demarcated by the publication of (Card and Moran 1983).
materialized. Indeed, this type of interaction has persisted, and the motivations behind the HCI that came up with WIMP have thus lingered too. The acronym WIMP stands for Windows, Icons, Mouse and Pointer. The interaction techniques these enable were devised by Xerox PARC.4 The original goal of WIMP interaction was to provide tools for a new type of computer 'user', the office worker, who would be able to create, format and publish documents in ways that were hitherto unimaginable.5 This included tools for layout, for cutting and pasting, and for printing. These were to come with WYSIWYG solutions (What You See Is What You Get), which would ensure that what a user thought they were creating (and which appeared to them on a screen) was indeed what got produced by a printer on paper. The digital and the paper were meant to be married (Sellen and Harper 2002). Many people using computers today take this interaction for granted and may not even have heard the acronym WIMP (nor WYSIWYG). What is crucial with WIMP, though, is that it intentionally allowed—and indeed encouraged—human creativity with documents. Via WIMP systems, a 'user' could put whatever arguments or content they wished into the documents, while the computer, controlled through the interface (and related 'grammars of action'—write and read, cut and paste, edit, etc.), provided an enabling process for those aspirations, supporting all the mundane but necessary computational tasks constitutive of 'document publishing'. This included everything from layout through to storage and circulation. Whether this mode of interaction made for good documents when it was first invented, or makes for them now, all those years later, is something of a moot point. The upshot of the HCI research that led to WIMP interaction was systems that let people ('users') be creative with their machines. This is still the case. Putting things rather crudely, most people think the desktops and laptops they buy are for this—their creativity. They are not for science, nor data processing, nor administration and archiving, so much as for creative making—the making of stuff to send and keep, to wonder at and to mock, and to celebrate as an art of a kind. A person might use their computers for other things, but this is their essential appeal. These exaggerated terms convey the relationship between computing and the human as designed through the kind of HCI research 4 For a review of the first comprehensive WIMP system, the Xerox Star, see (Smith 1982). 5 This is to ignore the ambitions that lay behind Engelbart's famous demo of the mouse, pointer and keyboard, which he thought showed the way to augment human intellect. Xerox was more interested in office life. See (Sellen and Harper 2002).
that years ago sought to create a future. That 'imagined' future6 is the one in which we are now living; one that allows the individual, in the guise of a user, to create content. However, and again as we noted, this interface and its underscoring elements have been and continue to be altered and reshaped. These changes are not so much about the interface as about the status of content and the relationship between content and the persons who created it. It is data about these concerns that AI is building upon. It is to look at these in more detail that we now turn. The User in a Nest of Meanings There are two key dimensions to the changes that have led us to the current situation, if we can treat the reliance on the WIMP interface and associated premises as a given. The first can be illustrated with the effect search engines have had on content. One might note that search engines were unimagined at the time Xerox devised WIMP systems. Search engines were originally designed to make traversing and finding content over networked systems, rather than on the web, faster and easier. To do this, the engines needed ways of mapping search queries to instances of stored content. Developers of the technology persuaded their initial users to tag their content, using commonly agreed terms and labels, which could then be used as indexes. When a query was made, engines searched through these indexes (and not the content) to provide the lists of URLs in the search engine results page (SERP). Users were willing to tag their content for these indexes since they hoped that, as a result, their content would be found more often. When these users were inside firewalls, this willingness could be relied on. With the emergence of the web, a whole new set of motivational questions had to be solved, but they were, and what had been a technology for narrowly confined tasks came to be open to the public at large.7 As this happened, the emerging search engine companies supplemented their indexes with techniques that measured the traffic between instances of content (which were now 'web pages') to weigh the relevance of some content to a query. That is to say, the behaviors they initially enabled with their indexes provided new behaviors that 6 Imaginaries is an important concept in social studies of technology, labelling the target of invention. It is regularly used in relation to AI too. See for example (Yuet-Ming Wong 2020). 7 Again, this is not meant to be a literature review nor a history: as with our discussions of the WIMP interface, this is a sketch designed to see issues pertinent today.
themselves could be further resources for more effective 'mapping' of query and content. What they did became as important as what they tagged. This is the essence of PageRank, for example, which led Google to its dominance in the field (a toy sketch of this kind of link-based scoring is given at the end of this section).8 The specific mechanisms and algorithms aside, these developments resulted in the content itself (a web page, say) coming to have increasingly important external properties. At first, these were made up by the index terms, then by behaviors with that content, such as frequency of access. Further developments were then inevitable: if frequency could be used to define relevance, the quality of experience afforded once content was found could be measured by such things as lingering times—how long a user remained logged into some page. Following on from this, the 'quality of the experience' came to be further elaborated by the posting of comments on content. All this further added materials around content until the external data came to have an importance all of its own—and this has come to be called metadata. The importance of this data is not necessarily equal to the content, since metadata offers different dimensions and features. The value of these, just as with content itself, will vary depending on how they come to be used. If the first set of changes had to do with content, the second had to do with the individual, made incarnate as the user. It can be illustrated with the effect of social media platforms. This technology developed some time after search, and so long after WIMP systems, but it was nevertheless devised with the same premise: that individuals create content. But these platforms added a new type of metadata: the author. For social media, who creates content was (and still is) treated from the outset as being as important as the content. And this in turn allowed another innovation in metadata: connections could be made between different authors and different instances of their content, and this, in turn, could be used as a resource to characterize both the content and its author. Just as Google learned to use the frequency with which individuals travel between sites on the web as an indicator of content, so social media companies like Facebook learned to use movement between different personal pages and content on those pages (i.e., postings) as an indicator of the value of both the content 8 It is worth noting that when the algorithm was patented it was not labelled AI; it was simply described as a technique. The current fashion for AI has meant that today it is often renamed as AI; the parent company of Google, Alphabet, is rather fond of saying that all it does is 'AI'. For them, AI is ABC, so to speak.
and its author. Social media graphs calculate these values. On this basis, an author can be as important as content, and the movement between different authors and different content equal to that in turn. These graphs are, of course, the engine of value for social media companies, allowing the sale of screen real estate viewed by ‘users’. So, if it was the case that when WIMP was being devised, the goal for systems design was to allow individuals to create any content, now computer technologies are used to wrap that content in a web of meanings that is external to it. These can be at least equal to, if not more important than, the content itself. At the same time, these values may not all be visible to or well understood by the content creator, the individual. Indeed, some of the computational processes that those values can be subject to, such as aggregation and population level analysis, graph analyses (i.e., distances between ‘types’), can be beyond the power of the individual to comprehend. In any event, the individual is unlikely to have access to those data nor have the tools for their analysis if they did. But the important point is that though individuals might create content, metadata about it can end up being, as it were, out of their hands. Just as the relationship with content has altered (through having metadata about that content), so has the nature of the ‘author’ of that content altered, too. For while someone may create content, metadata about that content and about the person who produced it, is combined with data about others who have access to and read that content, as well as when these acts (viewing or not viewing), are undertaken. Even the duration of these acts is documented. All this creates a new phenomenon: the ‘user’ not as an individual but as a point on a map of individuals and their behaviors with myriad instances of content. Who an individual might be is no longer understood as a function of the commands they issue to some computer, as a single person commanding a machine, but in terms of where that ‘person-as-user’ can be said to be when located on a topography with hundreds, perhaps millions of other individuals all equally identified as ‘users’, each with their own relationship with multiple instances of content, all mapped to each other, all being treated as ontologically the same, points in virtual space. Individuals may have no access to the map, this social graph, nor have the tools to make their location in it a tool for their own endeavors. They are no longer ‘their own person’. They have become a ‘user’ not knowing that means something other than them; their creations are likewise
independent, articulating meanings beyond the powers of their own hand.9 They act, but do not easily see what those acts might come to mean. What they mean is something other than the individual; they represent something new, something beyond or separate from the human. As we say, an amalgam of human and AI acts. Insofar as these acts express intentions, one might call them the acts of a post-human, post-AI mind, one made of both digital and human materiality. This is the situation in which the emerging functionality of AI has arisen. AI is not a single type of technique, since it entails many different methods and algorithms, with deep learning becoming particularly relevant in recent years—hence the term New AI mentioned above. Nor is AI a unitary thing when a person interacts with applications on the web or on their local machines. AI tools are essentially discrete, optimized for one task, not all, even when that task is, for instance, aggregating user traces across websites and hence across distinct AI applications.10 There is some argument that AI will become generalized (Russell 2019), but at the moment this is not the case. 9 The emphasis we are wanting to make is different from that of the post-user theorists, like Baumer and Brubaker, who want to drop any notion of the human as distinct from other entities. As should be clear, despite people feeling as if they are disappearing as a category in these terms, their attitude nevertheless persists in asserting that, despite this, they are somehow still agential, somehow still the user they used to be, a 'self' that creates. Even if they are an individual newly acquainted with technology, the modus operandi of the primary tools they have encourages this sense of self—hence our claim about the lingering of old premises in current systems. 10 This is most often obscured by AI researchers, their view being that the bespoke nature of their current tools is slowly moving towards more generalised forms and Turing-theoretic generality. This is something we shall return to later. For a lively discussion of this very problem, see (Taylor, P. 2021).
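The link-based scoring gestured at above can be made concrete with a toy sketch. What follows is our own minimal illustration of the idea behind PageRank-style ranking, not Google's patented or production algorithm: the pages, the links, and the damping factor are invented for the example. The point is only that each page's score is computed from the structure of behaviour and connection around it, never from what the page itself says: exactly the kind of 'external property', or metadata, discussed in this section.

def pagerank(links, damping=0.85, iterations=50):
    # 'links' maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # a page with no links shares its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # otherwise it passes its score to the pages it links to
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "news": ["blog", "shop"],
    "blog": ["news"],
    "shop": ["news"],
    "diary": ["news"],        # linked to by no one, so it ends up with the lowest score
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page:>6}: {score:.3f}")

Social media graphs extend the same logic from pages to persons: the 'user' becomes another node whose value is computed from the traffic around it rather than from anything the person set out to say.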
The World of Today The Dance of Interaction in the Age of AI When individuals start engaging with contemporary computers and applications, they find themselves in what we have been calling a kind of dance. Their inputs to the computer in front of them (or in their hand) are combined with data about ‘users’ that are already stored somewhere in networked computer systems (most probably on the Web but certainly stored in cloud farms) and this results in them being offered certain moves or 9 The emphasis we are wanting to make is different from the post-user theorists, like Baumer and Brubaker, who want to drop any notion of the human as distinct from other entities. As should be clear, despite people feeling as if they are disappearing as a category in these terms, their attitude nevertheless persists in asserting that, despite this, they are somehow still agential, somehow still the user they used to be, a ‘self’ that creates. Even if they are an individual newly acquainted with technology, the modus operandi of use of the primary tools they have encourages this sense of self—hence our claim about the lingering of old premises in current systems. 10 This is most often obscured by AI researchers, their view being that the bespoke nature of their current tools is slowly moving towards more generalised forms and Turing theoretic generality. This is something we shall return to later. For a lively discussion of this very problem, see (Taylor, P. 2021)
opportunities—ideally to the benefit of both, that individual and the data-processing provider of the service in question. In the case of Facebook, say, the initial moves are towards a certain location in a social graph, with predefined others. Once there, subsequent moves build on that starting place, but each in turn is nevertheless a composite, there always being two partners in the dance: the individual and the computer systems with their data about 'users'. Hence our argument that the user as defined by the creative approach to HCI, the one that places the creative individual as governing the functions of a computer, is no longer in existence; it is a historical entity, no longer 'alive'. If, in the past, HCI was able to devise a manner of interaction between the human and the computer that allowed the human to focus on their own creativity, it is our view that today there is a need to devise the means whereby individuals can interact with AI to do new things through various kinds of joint endeavor (Grudin 2009; Ren 2016; Ma 2018; Shadbolt 2018). The difficulty here is what this means, of course. If a person seeks to be creative, for example, then what they understand by creativity, what acts of theirs help produce creative content, is partly shaped by the way AI interprets those phenomena, and thus jointly shapes what that creativity comes to be. Creativity in this situation is of a different kind than before, and the collaboration that leads to it is unlike anything before, too. This might be good or it might not, though our view is that it is exciting. Be that as it may, how this and other new joint endeavors might be enabled through the ways that 'the user' is made is now in the process of emerging, and while some of these are more defined than others, there is still much work to be done. The Language of Use These concerns show themselves in many ways—including in everyday language. People happily use the term 'post', for instance. They are familiar with the distinction between the created artifacts they use as a currency in the sociality of social media and those entities they think of as their own and which they store on their 'private' PCs. To post is to make something public, to move it away from the constraints of one's own digital store. This new usage does not mean, though, that the doings evoked by the usage are entirely clear. If something is 'posted', who or what is thereafter responsible for it? Let us confine ourselves to the 'who' for the moment and presuppose that it is a human. Can the original 'poster', the individual creator of the post, remove that file once posted? If another
person wants that post to remain, what say does that other have in that removal? Can they stop it? Besides, and leaving removal aside, can files once posted become shared? What does ‘shared’ mean other than rights to view (or to put it in computer parlance, rights to read)? Does it mean right to copy? What about rights to move to somewhere else? The individual poster and the individual viewer (though there may be many of the latter), do not always know how to solve these questions. These concerns are varied; different social media platforms have slightly different approaches to the matters in question, each with their own ‘grammar of action’ with posts. The way that AI interacts with these posts is another set of concerns. On some platforms, a post might be identified as relevant for sharing by AI, though most platforms do not allow the human creator of the post to know the scale of this distribution, or how AI solves questions of interleaving and sequencing of posts. Calculations about other posts and their competing demands for screen collateral and what is sometimes called ‘attention’ are also made by AI. Leaving aside particulars, what is clear is that people somehow navigate their way around these concerns as best they can. The values of posting and sharing, liking and not liking, the role that AI might or might not have in all this, are sufficient to make posting worthwhile—metaphorical headaches notwithstanding. Individuals are certainly aware of the number of likes a post gets, and indeed celebrate and announce high numbers (or rue them, in certain cases) and they know, too, that in some part, and on some social media platforms, AI has played a part in this.11 There is a further point one might note. Individuals get some sight of how AI is functioning when they note who has accessed their postings and find, at first glance, that these people seem ‘strangers’. Initially an individual might think this a mistake (the AI has got it wrong, they might say to themselves) though they can treat it as something they can investigate. They can navigate to these strangers’ digital spaces, their Facebook page, say, with a view to discovering if there are any reasons why these persons have been identified as ‘connected’. One imagines that they go there in good faith, by which we mean they assume that there is a reason and seek to uncover what it might be. They want to make sense, as the 11 We are focusing on external data, but with AI tools another way of creating such metadata is through examining the internal features of a file and noting closeness between files themselves. See (Richard et al. 2020).
ethnomethodologists would put it. ‘It was the school they went to’ or a ‘mutual friend’ and hence not a stranger.12 The upshot of such behaviors, presented here anecdotally of course, is that the power of human memory would in effect explain the reasoning of AI. Whether this alters how people act thereafter, seeking to post different content, say, or even becoming more pleased with the AI and hence the value of the social media platform that runs the AI, is something of a moot point, though some of the strongest advocates of AI think that any alteration in their actions after a ‘suggestion’ from AI is proof that the human mind and AI are the same (Russell 2019). In our view, this seems egregious; a better way of interpreting this is to recognize that content sharing practices are changing with AI, and that, in many respects, that change is ‘reflexively constituted’ by AI and human intelligence. When an individual is doing things with content (other than making that content), they are partly expressing themselves and partly interacting with AI that helps share that expression. The human sometimes uncovers what they think the AI is doing in this process, or at least uncovers reasons that they, the person, think the AI is doing something. Whether this is them chasing the AI or moving in tandem with AI is unclear. To go back to our metaphor, there is a dance, but the moves are not fully articulated either way. This is important and returns us to some of the general claims made about AI. Many advocates of AI suggest that explainability is requisite if people are to effectively engage with AI (Guidotti 2019; Selbst and Barocas 2018). The way this term is used does not quite fit the scenario we have just described. It evokes an asymmetric relationship, one where AI explains to the human, a kind of epistemic exchange.13 What we are seeing is that, in the context of content creation, some of the activities of AI do not need explaining. Or rather, we are seeing that individuals seek to explain to themselves how something appears to function (in this case, something that entails sharing), and given that, they then alter what they 12 For further explanation of this notion of making sense or a related concept, reasoning procedures, both derived from Garfinkel’s seminal 1967 text, Studies in Ethnomethodology, can be found in (Harper, Licoppe, & Watson, Skyping the Family, 2019). This explores how ‘users’ of computer communications tools, such as skype, strategically use and interpret the use of such technologies such that that use is in itself a resource for sense making. Here too in the case of the AI and social media, the user treats the experience afforded as a resource to understand. 13 Indeed, this is the purpose that many advocates of the explainable AI research agenda seek. See (Selbst and Barocas 2018). For more context, see (Harper R. 2020).
do in turn. We learn from how people follow the links on social networks that, though they might not have a notion of a graph relationship that has led some AI to foreground some link, the AI has done sufficient to provide a cue for the human to act in turn: a link is offered; this prompts a desire to find the ‘reasons behind’ that link. To repeat what we noted above—whether the reasons the humans identify are the same as those AI uses might not matter. For the important lesson is that, whether they are analogous or orthogonal or indeed have no ‘relation’ at all, in the dance, sufficient explanation has been found to justify a next act. This act may be more content, a new posting, or a new liking. And this results in new connections being highlighted by the AI and then new interpretations around that link by people. Either way, the dance is dynamic. If in the prior section we suggested that individuals might not fully understand how their individual actions through a user act are constituted as a point in a vector space, now we are saying that nevertheless, people do intuit ways of acting that depend upon some notion—good, bad, accurate or otherwise—of how AI is construing them. Their acts might be out of their hands, but somehow they grasp them nevertheless. The Pointer and the Individual So we are suggesting that people seek to devise ways of making sense of the ‘intelligence’ that allows them to act whether they correctly understand the AI or not. In the case of links, AI does not stop people acting by being too complex to understand; it gives them reasons to act. We have learned too that these acts (however justified) in turn become material for the AI. We might note that at times the applications they use appear to have no AI at all—file storage might not entail any, for example, at least from the content creator.14 Whatever these applications, most of their movement is through acts with a ‘pointer’. Actions with the pointer indicate and represent where the human in control of the pointer ‘is’ in the virtual world, and a history of their commands constitutes an important part of an individual’s digital footprint. One might say that, in this regard, an individual is not a human person with all the attributes that implies so much as a point in a virtual trajectory. We have been seeing that a person as thus construed is important to different instances of AI, indeed crucial, as this allows the AI in question to anchor what it does. The trajectories of There may well be various AI tools used in optimising back up.
points, along with the associated digital substances created, exchanged, posted and so on, are the 'materiality' of AI in the application in question.15 The vectors can be examined, and the resulting analyses deployed in various ways and mapped back to a particular point in space, an expression of some human controlling the pointer in question.16 Much of this feedback is straightforward, or at least the actions of the AI show themselves in ways that the human expects or recognizes, providing a kind of feedback loop where the human is conscious of their agency. A click on a feed on Facebook is prompted by AI, and though the person looking at the feed might not know fully why, they click on it nonetheless since the reason is likely to be enough to justify the resulting move in virtual space. A click takes them from their Facebook page to someone else's, say. But some uses of pointer clicks are more obscure and hide or even negate that agency. When an individual uses a pointer to click on, say, a 'contact form' on some website, they may have confidence that these clicks are expressions of their intentional acts. But they might not know that the total amount of time they spend on that site is an expression of their vector, and that this 'durée' can be used by some AI to generate a fee for the search engine that guided them to that website. The time a pointer lingers produces more than what is seen or understood by the one doing the pointing, or at least there is no feedback loop to the human that indicates this. The fee(s) that result can have consequences for where the individual in question finds themself when they next go on the web, whether it be to the same site or to some other site, since they might be prompted to do so by cues that they did not know their actions paid for. In broad outline, we do not want to say these interactions, these convoluted and even disconnected turns in a dance, are wrong or need doing away with, though, as we mentioned, some from the sociological point of view do

15 In recent years there has been a great deal of concern with materiality in HCI, though often these concerns point to actual objects at the expense of virtual objects in ways that are not helpful. There is no mention of these kinds of AI materialities in Wiberg, for example (Wiberg 2017).

16 This connection is also commonplace for everyday 'users', as it has been for HCI researchers. Members of the public, to use an old-fashioned moniker, might not express it in similar terms, but they understand how they are wed to a virtual space, the place where their pointer is, and they act in this knowledge when they use computer machines. Thus it is that they easily transition to using new forms of pointers: their fingertip, say; this they know is a proxy for the pointer. It is not that the pointer was (or is) a proxy for their fingers, of course; it is the other way around. Most people know that where they point (or touch) really is the point in the digital world; how they point doesn't matter.
seem to (Campolo and Crawford 2020; Frischmann 2018); but what is sure is that the hidden ties raise questions as to how the human and AI interact, and what the user in these situations might be. There are a number of subtleties here that need unpacking. One has to do with how the human in question is demarcated such that behavior in one pointer click can be linked to their behavior somewhere else. Above we have been noting that AI tools tend to be application-specific (though space precludes elaboration), but now we are proposing that traces across web pages or sites might be linked. This linking seems reasonable; after all, it is the same person doing the clicking. The issue is that the person doing the clicking might not know how much of themselves is carried over between the clicks. What they do know is that 'who they are' is often a crucial element shared in the digital, and hence with varieties of AI too, but what this 'who' includes or entails might not be entirely straightforward when they move their pointer from one application 'window' to another. Consider, for a moment, how the delicacies of identity management can be foregrounded in the digital. Contemporary browsers typically offer a private mode. What does it mean precisely? When selected, does a person represented through pointer clicks then become an anonymous person? Are 'they' nothing more than a click and hence hardly a person at all, merely a dynamic vector? Or does the vector link to other traces in the digital in such a fashion that their identity remains visible to the browser and to the applications the browser interacts with, such as search engines? Is this hidden from the individual doing the browsing? If the latter, then the meaning of private would be somewhat startling—the traces of action are only hidden from the one doing the actions! In fact, a version of this is what happens. One could say a lot about this, though perhaps we might only reflect on those reasons for this arrangement that have some semblance of being practical, rather than furtive. The Operating System (OS) that the browser is operating within will, for example, dialogue with the browser and treat the identity expressed in pointer acts as being shared. From the OS's point of view, this sharing is practical, ensuring continuity of action across the 'desktop'. When a pointer moves out of the browser to some other point on that local space (i.e., the desktop), the OS does not demand the 'human' controller of the pointer (re)identify themselves. For the human controller of the pointer, though, that movement across a single screen might imply distinct moments of identity, of their identity. At one point on their screen, in the web browser, they are a private person; in another, in their Word or Pages
documents, they are 'themselves', or at least not concerned about their visibility. After all, in one they are venturing out into the digital ether, in the other skulking in their private world of words. This is to put it poetically, but the point is that, as they experience the move of the mouse pointer, so what they 'are' at one moment in time might somehow be different a moment later, even on the same screen—but in another window, in another application. The phenomenological move of the pointer by one inch might be sufficient to recast what a person imagines are those aspects of them being presented. When the web was first being developed, questions to do with the management of identities across applications inside and outside of browsers were prominent. Password managers and similar were the product of this research, easing this movement, making it seamless. At that time, it was also expected that the modes of interaction with machines would diversify, and this might facilitate better distinctions in the way individuals were represented in the digital world. As mobile devices became increasingly important, so this imagined diversity seemed to increase in scale and potentiality. Indeed, the idea of exploring new user representations was a major theme of research at that time (Brown et al. 2000). Research into the user representations that game consoles afforded is a case in point. Their emergence, twenty or more years ago, was thought to point towards possibilities well beyond games themselves. It was recognized that consoles framed choices about identity as much as they did what a pointer looked like—in the sense that they made tangible what the human as represented in the digital was about to do. From the moment a console was picked up, one might say, the human came to know that what would be relevant is their game Self—the phenomenology of it, to put it that way, made it thus. Think of shooter games: here an individual comes to be a 'user' as they will do in any other virtual setting, but there are some particular (though not unusual) identity characteristics expressed in the 'user' they come to be. First, they are what the role of shooter affords; that's intrinsically part of their identity. Second, and given that, when an individual joins a game world, they most often identify themselves not as themselves, in the grand sense of all that implies (about which more shortly). On the contrary, they constrain their name to the place they are about to act in—that game world. It is with reference to this setting, taken as a given, that they call themselves by what is sometimes known as a 'handle', a pseudonym. This identifier is often unique to the platform in question and affords much opportunity for the jocular between the players; more importantly,
though, this practice has the advantage of allowing individuals to avoid irrelevant facts about themselves. It is not so much a question of protecting identity, of preserving their privacy let us say, as of avoiding bringing into the foreground things that do not matter in that virtual place. These things (or facts) might distract both them and their opponents from focusing on the shooting, and this might include such things as their everyday, legal name. What they are seeking, instead, is to privilege aspects of their identity that relate to the virtual, to the game, to the shooting, to the 'world' in question. This mode of phenomenal engagement does not lead to an entirely fictitious self, but to the self as relevant for games in the virtual world. At the end of the last century and for a few years thereafter, such a framing was thought to be extendable to other devices, beyond gaming. 'Pointing' in and through a mobile was, for a while, considered a route to 'tailored' identity, for example; one that only states 'location'; i.e., not who a person is but only where they are (i.e., real geography). Doing the same through a desktop machine was proposed, in contrast, as being a mode for more encompassing identity, where the person logged on was used to anchor digitally mediated activities. Here the point expressed more, one might say. In this research agenda, an individual might express their relevant identity through choice of device (Berg et al. 2003). This in turn opened up the possibility of articulating identity through the Internet of Things (IoT)—one is or could be whatever those things are. If one had a digital-wallet-type device, say, one's identity could be constrained to financial matters; if one has an ebook, then one is a reader, and so on (Harper 2003). It even led to explorations of how 'the home' could become smart, and thus afford a new form of human-computer relations—the person as a home dweller, as distinct from the person as an office worker (Harper 2003, 2011). These were serious possibilities for a while, but for reasons outside our concerns they did not come to fruition, and so did not unpack what engagement and interaction with computers came to mean. Instead, interaction became more standardized, with generalized, WIMP-based hardware becoming the typical modus operandi of being digital. This might be changing once again, with AI and new form factors (speech being an obvious example, and hence things like Amazon Alexa). At the turn of the century and for some years thereafter, though, it was the pointer, whether on a laptop, tablet or with a fingertip as its proxy on mobile, that was the opening moment of being digital. Indeed, the 'point in space' thus produced has remained dominant, the vectors of action it expresses integrated
across hardware, but within applications running across that hardware.17 As we say, this is part of the materiality of AI. Many of the applications that use AI today, and which have become deeply ingrained in everyday digital practices, developed around the WIMP interface and hence the pointer—search, for example.

Re-imagining the Pointer and the Individual

If above we have been looking at how the dance with computers is being altered and reshaped by AI, let us now look at whether the role of the pointer might be altered as a consequence. There is no reason to think this impossible, even if the pointer might seem too small a thing to explore. Here is a simple example: if one's actions are currently incarnate in the pointer on one's desktop, could those actions be represented differently? That is, as something other than a pointer? One might want to do so to express differences in intention or purpose. Though the point in vector space may still be that, a point in vector space, what is seen and hence what is understood by the human who is trying to express through that vector space might be different. For instance, when one navigates from one's own desktop to a friend's social media account via a browser, could the pointer be altered in that move to represent a hand, perhaps, so that one becomes, as it were, a waving user when that move occurs? This change would be happening as one moves between the relevant windows and relevant digital contexts—from the desktop to a social media page—maybe just a few inches on the screen, but miles in the virtual world. Such reforming of the 'point in digital space' from 'pointer to hand' is not without an analogy. After all, a 'like' is often represented this way in social media platforms such as

17 There may be a variety of reasons for this, including the relatively small value that bespoke modes of interaction and related hardware enable as against the larger benefits of 'platform'-based hardware. One might recall the debate about the turn to computer appliances as a reaction to the dominance of Microsoft and Intel at the end of the last century, and the failure of appliances to succeed in the marketplace at that time. Today there would seem to be more appliances, with for example smart speakers, but they are better thought of as shorthand ways of accessing cross-hardware services, such as search and e-commerce; they are gateways rather than single-use devices. See Harper (2011). This does not mean that devices might not once again become opportunities for reimagining how people interact with computing. Some in HCI believe they will, though this is often to do with how the notion of the human might change—as in the post-human and so forth.
Facebook—that is, a like is created with a pointer click. The reason why one might want to make such a change is because one's goal on that virtual location is somewhat like visiting a friend and waving when they arrive—a mode of greeting and acknowledgement. This transition might not just be a pairing between pointer and hand. One might want to do more than wave once one is on a friend's account, and at that moment the 'pointer'—or rather the point in digital space—can have its manifest form altered again. It might become an envelope or letter, say, to suggest that something is being given. Hence, depending on activities, a person expresses through a pointer, then a hand, then an envelope. Clearly, these are just suggestions, and effective designs need to consider how to articulate both sides here, as these last remarks about Facebook make clear. The design entails a concern for both parties, both recipients and givers (Housley et al. 2017). As the sociologist Goffman noted years ago (Goffman 1959), one controls who one is so that it fits the role at hand, and that is a mutual task as it involves the others who depend upon or shape that role. It often takes two to make one. Here one of the two is the application in question, which is variously enacted with AI.18 Design is not only about the parties involved; there will be questions of regulation too, which might affect the relevant powers and claims of the parties. Trust between the parties and others that depend on the output of their actions will be important too (Harper 2014; Frischmann 2018). The rub, though, is that the 'point in space' (or rather the vectors thus articulated) that expresses what computing (and hence AI) and people want to 'make' and 'be' could itself be an opportunity for articulating meaning. Making a point in space somehow symbolic or expressive of a type of user might ease current muddles, bringing clarity and joint purpose; it might even open up new possibilities.19 They might

18 This is often misunderstood as some kind of claim about human nature, rather than something about the social arrangements of information exchange intrinsic to the performance of identity. Goffman's use of metaphor, including most famously the theatrical one, was meant to help highlight this as a phenomenon and not to suggest that people 'act'.

19 Just what the digital platforms that dominate the digital world would think about this is another question, however. The current 'beings' of digital capitalism (Alphabet, Apple, Microsoft, etc.) might want to foreclose any such directions for the human in their interaction with AI. After all, their profits are such as to inhibit innovation. Why change when business is so good?
better understand that they are part of the cy-mind we evoked earlier on: an agency made by their own acts and those of AI, expressed as a vector point in digital space.

The Individual and the Crowd

We are saying that the pointer might be reimagined, and once this happens, new possibilities emerge—and these lead to possibilities beyond the pointer. Take search again. Here, in the 'ordinary mode', a 'user' is treated as a singular individual, and their search is analogous to a library visitor wanting to find a book or an article. Never mind that search engines can support millions of such 'users' accessing millions of stored items at the same time; this is essentially the grammar. Today, as we have noted, AI tools are able to interrogate these individual behaviors at a mass level and come up with new insights—search engines use volumes of 'user access' to triage, extending the effectiveness of PageRank-type algorithms to ensure that what one person gets access to reflects what most seem to want access to. In other words, what an individual seeks to find is converted into what most seem to enjoy. The end result is that the individual is steered to what is thought to be (by AI) the right destination. The individuals themselves, meanwhile, are given no indication that their actions are so processed, nor allowed opportunities to leverage that for their own creative judgement about the relationship their actions have to those of others. They might know something of the processes hidden in the systems but they have little control over them. They might want this control. For example, an individual might want to know which of the many sites that offer content 'of the kind' they are seeking is being viewed and accessed by most people—and hence the list of targets offered by a search engine should say that a site is the one that most are using, not simply list a site in such a fashion that volumes are obscure. An individual might want to be part of the population that makes up the 'most' that helps triage the selection of 'kinds' here. They might want to be 'typical'. This is worth highlighting so as to see the value of the opposite: an individual might want to find the least used, a kind that is not often sought. Currently an individual might be able to glean some insight into these kinds by looking at whereabouts a site is located in the Search Engine Results Page, the SERP. A position at the top of the SERP might suggest popularity, one lower down the opposite (this assumes that the listing is not muddled by adverts, which these days the most cynical search engine companies allow to populate what appears in SERPs). But this is a by-product
of the SERP interface designs that search engines use, not an intentional property that an individual can exploit. Were individuals to exploit it, they would do so not by separating themselves from the AI that they have been acting with, so much as by altering how they are using that AI, changing their turns in the dance they are having with it. It too is dancing with them. What we are saying is that individuals might well want to alter their turns in the dance, allowing the cy-mind that emerges in their interaction with AI to take them to different places. They might want to avoid the crowd or they might want to be part of it; they might want the kind that is defined by large use, or the kind that is defined by infrequent use. In either case, they are acting as part of a process that involves AI, but in different ways, in different moves, with different purposes and implied responses at each moment. Focusing on the case of avoiding the crowd, one might note that news sites do this sort of aggregation now but confine themselves to what has hitherto been thought of as shared-interest topics—news content. Various 'user-driven' web services also offer something analogous, such as Reddit—here people can find out what others have found appealing, and thus can follow the crowd, albeit one step behind. What we are suggesting is that an individual might want to do the opposite: find the news that no-one knows about, visit the sites that no-one else does. In other words, one might want to surf the web without others nearby. These possibilities could be foregrounded in search engine interfaces. Indeed, this doesn't need to be confined to these interfaces, either; it might slip into the space that browsers currently function within, as the gateway to the digital world. Why does this gateway not offer routes to where the action is (or where it is not), say? This also points towards what might be the Quiet Web—a place very different from the Dark Web, needless to say. Here, the role of the pointer is only part of the design space—if this is key to the 'user representations' that articulate how an individual wants their digital proxy to travel around the web, using AI to help shape their route, then what is 'found' might represent something about the criteria for its selection. This is not such a radical proposal. Search engines already constitute what appear to be web pages (but which are in fact only improvised) when a search produces an ambiguous target: these cards (Bryant et al. 2012), as they are sometimes called, select components of various web pages to make a new, temporary one, intended to speak to the search query, and
hence to the human looking at the screen.20 In a similar fashion, a search engine could tweak pages delivered through a SERP to account for or express the motivations that led to its selection. By motivations, of course, we mean that intersection of human desire and AI modelling. The two jointly produce the thing found, the page 'on the web'. It is the cy-mind that delivers this, or acts such that this results. It is not being argued that the relationship between the individual and their use of tools on the web (tools such as search engines and browsers) should be altered in these specific ways. It could be; that is our point. Our suggestions are pointing towards how the individual is treated when they engage with the web, when they participate in being a user and do so with AI. We are saying that, currently, how they get transformed by AI from being the single unit of action into a composite of actions for the purposes of, for example, search engines, is mostly obscured from them, and, by and large, they are not able to act in ways that alter what this leads to. Yet this process of leading to something other than themselves could be made a resource, not a hidden thing. If a person were able to see how their individual acts come to be part of crowd-like acts, as a case in point, then they might utilize this knowledge to do new things. One of them could be to alter their relationship to the crowd. To allow this, AI tools would need to indicate which places were popular. If the AI were to encourage a person towards those places, it might indicate that it does so because it is imputing that the individual would like those places. But once an individual is informed of this, that same individual may then decide to act otherwise—against the crowd, and hence against the initial imputation of the AI, too. But at that moment, the AI might recast what it draws from the situation, and what it can infer, as it sees the vector or movement pointing in a different way. This inference can lead the AI to make a new suggestion. This is not a conflict, then, but a new turn. It is a dance, and the meanings shared at one point (in virtual space) shape what the next point in the dance might be. When conceived thus, the user is better understood as a cyber-human thing, and given that it is all about intentions, then a new kind of mind, the 'cy-mind'.
20 The way web protocols work (viz. HTML5) allows the easy constitution of what appear to be real web pages by consolidating content components from different points on the web. Indeed, that this is possible led to much debate about what the concept of a 'page' was going to mean when HTML5 was introduced.
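To make this concrete, what follows is a minimal, purely illustrative sketch of how a results list could expose the popularity criterion to the person searching and let them invert it, rather than leaving the triage hidden. It is not a description of how any existing search engine works: the result data, the field names and the 'crowd' parameter are all hypothetical, invented only to show the shape of the idea.

    # Hypothetical sketch: make the popularity signal visible and let the person
    # choose to move with the crowd or against it. All names, fields and numbers
    # are invented for illustration; no real engine is being described.
    from dataclasses import dataclass

    @dataclass
    class Result:
        url: str
        relevance: float      # how well the page matches the query, 0..1
        weekly_visits: int    # aggregate traffic the engine is assumed to track

    def rank(results, crowd="with"):
        # crowd="with": prefer what most people visit (the usual, hidden default).
        # crowd="against": prefer the rarely visited, a step towards a 'Quiet Web'.
        sign = -1 if crowd == "with" else 1
        ordered = sorted(results, key=lambda r: (-r.relevance, sign * r.weekly_visits))
        # Surface the criterion, so the person can see why an item sits where it does.
        return [f"{r.url} (relevance {r.relevance:.2f}, ~{r.weekly_visits} visits/week)"
                for r in ordered]

    if __name__ == "__main__":
        sample = [
            Result("https://example.org/popular-take", 0.9, 120000),
            Result("https://example.org/quiet-essay", 0.9, 40),
            Result("https://example.org/middling-page", 0.7, 5000),
        ]
        print("\n".join(rank(sample, crowd="against")))

The point of such a sketch is not the ordering itself but that the aggregation becomes an affordance: the person can see the criterion, choose to be 'typical' or to avoid the crowd, and the AI can treat that choice as a new turn in the dance rather than as noise.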
Conclusion

We claimed at the outset that the 'user' in the age of AI is a combination of individual human acts and AI interpretations of those acts, which are themselves calculative of next acts (as well as summative of prior ones), and that these acts create new options, new turns in an endless iteration—first this, then this, then that. We now see that the interaction in question is not undertaken in such a fashion that each of the entities that contribute to it does so on continuously equal terms. In the dance, the moves express continuously changing parameters of understanding and interpretation. These are also confounded, at times, by interfaces that hide matters, and partly by a lack of realization that some opportunities which might arise end up being occluded, as both sides lack full awareness of the 'intentions' of the other as expressed or made visible in interfaces. One particular concern is that key modes of human-computer interaction emphasize the salience and autonomy of the human side through pointer actions, while the computational processing going on inside the computer systems transforms these solipsistic acts of pointing into mass corporate acts without that being made 'visible' to the human doing the pointing. A single search query is turned into an instance of millions. An individual points with a technology that suggests that the pointing is uniquely their own, but the pointing is aggregated immediately into something else, something that is social. The social, though, is delivered by AI, and the way it does this is obscured. Yet it does not need to be. It could be made an affordance, something that the human in the interactional context could work with. Insofar as this is possible, then AI might in turn act differently, too, being more alert, if one can put it that way, to the variation in human intentions made possible in newly created digital acts in and through interfaces. The result could be something beyond either AI or human action, a mix of both, a cy-mind in the digital ether. Unfortunately, though we see human-AI interaction widely, we should also see that in too many instances it lacks the grace one would hope this newly created mind might have. Grace is an aesthetic term and is chosen to evoke the dance metaphor. A key point of this paper is to note that, whatever the metaphorical register, the way people interact with AI is based in large part on assumptions about computers and their interactions with them that were devised many years ago. These do not allow the two, the individual and the computer, or rather we should say the AI-endowed computer, to effectively communicate their role
vis-à-vis each other, and this is the lack of grace we are alluding to. Their interaction can be ill-understood from both points of view—the AI's and the human's. In the case of the human, they might act thinking that it is their own articulations of identity that are being expressed, for example, but it might not be, the AI imputing values of its own. But at the same time, the AI might act on a notion of identity that is being thwarted by the acts of the person in question—the human might want to act privately, for example, while the AI wants to convert the human acts into forms of social presence, an instance of behaviour in a crowd. Each interacts with an invisible hand. To go forward, people and AI require different assumptions. Central to this is the notion that they jointly make the user, the agent in the digital world that we are labelling the cy-mind, though it turns out that what is perceived as the consequences of the intelligence on either side might be different. What a prediction engine 'thinks' a cy-mind might be is likely to be different from what a human, on the other side of the interaction, might imagine. Indeed, it might be that these very terms, to think, to imagine, to act, are all ill-suited to a situation where new words are required to label things that combine the human and the artificial. Lexical change notwithstanding, the point is that the interaction they both engage in will in turn shape them, or at least alter what 'they' do at any moment in time. What is sure is that, over time, the consequences of this might be interesting. Just as Churchill said of buildings that people make them in their image and they, in turn, shape those same people, so we are saying that the doings that AI and people create together might end up shaping them too. The human contributing to the 'user' of the future might offer everything from passive contribution to fully engaged involvement, from lurking to leading, from playful compliance to isolating and contrarian behaviors, but always moves in a dance where the next step might be different from the one before. And this is because their moves are reacted to by AI in all its varieties. Of importance here is what might be the 'intentions' of that AI in this dance. Are those intentions merely to be understood as the expression of surveillance capitalism or platform economics, two terms taken from the critical literature? Surely AI can 'express' these imperatives and much else beside. But what? Besides, won't it be shaped by the human? Won't the cy-mind be more than just AI and human acts, but something emergent? For the society that results to be healthy, and to ensure that this moving interaction is undertaken in desirable ways, what will be required, we believe, are new interfaces, new training for human 'users' and AI
tools (especially machine learning), as well as new concepts for what they might do together, and then, following on from that, new modes of governance and much else beside. We need, all of us, engineers and those who engage with digital tools, to develop a sense that the 'digital lives' these tools enable are the product of shared and mutually entwined intelligences—each nudging the other in an endless cycle of acts. As a consequence, the user of the future will be a very different thing from the user of the past. That user is dead. A different one is emerging, or could emerge. As it does, so a different AI might emerge too, one hopes without egregious notions of abstract or generalized power, and happier with the more modest notion that AI is something whose value can be demarcated in the dance with humans: a cy-mind, not a cyborg. And fundamental to this, whatever the measures or the values that end up mattering, is that it is through each other that the human and the AI end up doing things. In this regard, they are mutual tools. Tools maketh the human, but the human maketh the tools. Even in the Age of AI.
References

Agre, P. (2008). Computation and Human Experience. Cambridge: Cambridge University Press.
Amoore, L. (2020). Cloud Ethics: Algorithms and the attributes of ourselves and others. Durham: Duke University Press.
Banks, R., Gosset, P., Harper, R., Lindley, S., & Smyth, G. (2020). Breaching the PC Data Store: What do Graphs Tell us About Files? In A. Chamberlain, Research in the Wild: HCI and ethnomethodology. London: Springer.
Baumer, E., & Brubaker, J. (2017). Post-Userism. Computer Human Interaction. Denver: ACM.
Berg, S., Taylor, A., & Harper, R. (2003). Mobile Phones for the Next Generation. Computer Human Interaction (CHI) (pp. 433–440). Florida: ACM.
Brown, B., Green, N., & Harper, R. (2000). Wireless World: Interdisciplinary Perspectives on the mobile age. Godalming: Springer Verlag.
Bryant, E., Harper, R., & Gosset, P. (2012). Beyond Search: a technology probe investigation. In D. Lewandowski, Web Search Engine Research (pp. 227–250). Bingley: Emerald Library Sciences.
Campolo, A., & Crawford, K. (2020). Enchanted Determinism: Power without Responsibility in Artificial Intelligence. Engaging Science, Technology, and Society, 1–19.
Card, S., Moran, T., & Newell, A. (1983). The Psychology of Human Computer Interaction. Hillsdale, New Jersey: Lawrence Erlbaum.
Frischmann, B., & Selinger, E. (2018). Re-engineering Humanity. Cambridge: Cambridge University Press.
Goffman, E. (1959). The Presentation of Self in Everyday Life. Garden City, NY: Doubleday.
Grudin, J. (2009). AI and HCI: Two fields divided by a common focus. AI Magazine, 48–49.
Guidotti, R. M. (2019). A Survey of Black Box Methods. ACM Computing Surveys.
Harper, R. (2003). Inside the Smart Home. Godalming: Springer.
Harper, R. (2011). The Connected Home. Godalming: Springer.
Harper, R. (2014). Trust, Computing and Society. Cambridge: Cambridge University Press.
Harper, R. (2020). The Role of HCI in the Age of AI. International Journal of Human-Computer Interaction, 1331–1344.
Harper, R., Licoppe, C., & Watson, D. (2019). Skyping the Family. London: Benjamins.
Harper, R., Rodden, T., Rogers, Y., & Sellen, A. (2008). Being Human: HCI in 2020. Cambridge: MSR.
Housley, W., Webb, H., Edwards, A., Procter, R., & Jirotka, M. (2017). Digitizing Sacks? Approaching social media as data. Qualitative Research, 627–644.
Ma, X. (2018). Towards Human-Engaged AI. IJCAI, pp. 5682–5686.
Marcus, G., & Davis, E. (2019). Rebooting AI: building Artificial Intelligence we can Trust. London: Ballantine.
Ren, X. (2016). Rethinking the Relationship between Human and Computer. IEEE Computer 49, pp. 104–108.
Rogers, Y. (2009). The Changing Face of Human-Computer Interaction in the Age of Ubiquitous Computing. In A. Holzinger, & K. Miesenberger, Symposium of the Austrian HCI and Usability Engineering Group (pp. 1–19). Godalming: Springer.
Russell, S. (2019). Human Compatible: AI and the problem of control. London: Penguin.
Selbst, A., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 1085.
Sellen, A., & Harper, R. (2002). The Myth of the Paperless Office. Cambridge, MA: MIT Press.
Shadbolt, N., & Hampson, R. (2018). The Digital Ape. London: Scribe Publications.
Shanker, S. (1998). Wittgenstein's Remarks on the Foundations of AI. London: Taylor & Francis.
Smith, D. C. (1982). The Star Interface: an overview. AFIPS'82 (pp. 515–528). ACM Press.
Taylor, P. (2021, January 21). Insanely Complicated, Hopelessly Inadequate. London Review of Books.
Wiberg, M. (2017). The Materiality of Interaction: notes on the materials of interaction design. Cambridge, MA: MIT Press.
Yuet-Ming Wong, R. (2020). Values by Design Imaginaries. Berkeley: University of California.
Saying Things with Facts, Or—Sending Messages Through Regulation. The Indirect Power of Norms

Peppino Ortoleva
Introduction

There are many ways in which governments communicate with their citizens. Much has been written, and much is being researched (stimulated also by recent political developments), about the uses of media, from traditional mass communication and its presumed loss of influence, to social networks and their supposed irresistible ascent. Some attention has been dedicated to the communication implications of major political changes, such as the "realignment" effected by Ronald Reagan through his change in tax policies, or the advent of Donald Trump with his nationalist rhetoric and his Twitter democracy, even though what has been written about these subjects is too often conditioned by the partisan convictions of the writers. In this essay I will try to go beyond these recent (more or less partisan) debates, and offer a different, long-term perspective on another, often overlooked, aspect of political communication: the messages that are
implicit in norms, and how they can be used to influence public opinion in a way that is collateral to their explicit regulatory content. If my analysis is correct, it will show that the areas of political communication are broader than those directly centered on the use of media. The implicit messages conveyed by laws may be defined as a form of indirect communication that, reversing J. L. Austin's formula, I will call "saying things with facts." They may have a persuasive effect all the more powerful because it is implicit and indirect, and for the same reason they may generate ambiguities and misunderstandings that are as difficult to acknowledge as they are long-lasting. I will first try to show how norms communicate these tacit, additional meanings, and to discuss the complex relations between their implicit messages, their more explicit communication, and the normative action which is their role: between the ways in which they "do things with words" and the ways in which they "say things with facts". Finally, I will briefly explore specific examples, namely the norms on compulsory vaccination, and the representation of the State and of Science they imply: this type of norm has been much debated recently, but it has been the subject of recurring diatribes for two centuries. The conflicting positions on the subject, I will argue, have been stimulated at least as much by these implicit meanings as by the laws in themselves. Even though I am personally in favor of vaccination policies and critical in general of anti-vaccination movements, I will try to discuss without prejudice their implicit or indirect messages and the ways in which they influence public opinion.
Doing and Saying

In 1955 the British moral philosopher John Langshaw Austin was invited to Harvard to deliver a series of lectures that were later published under the title How to Do Things with Words (Austin 1962). They have since been very influential in the theory and philosophy of language, focusing the attention of theorists on the performative abilities and functions of language, and on the fact that words or "speech acts" (Searle 1969) may exert roles and produce consequences in ways that go beyond their strict communication functions. I will reverse Austin's title and approach, and concentrate on how some actions that were conceived to perform a concrete task (as in the case of the issuing of laws) may also assume an implicit communication function, may produce additional
meanings and convey additional messages which may exert their own influence on human behaviors. Before speaking of the implicit messages of norms, and in order to understand how they emerge, we should consider the explicit and more visible messages that laws always contain. Norms are in fact one of the primary ways in which "things are done with words", and these words are conceived and regulated in order to become effectual. What differentiates norms from other forms of verbal expression is indeed the fact that they are specifically designed and enunciated in order to have concrete consequences on the behavior of citizens, and that they do have concrete consequences. Norms are not mere words: their content is enforced by the apparatus of police, judges, and other branches of the State. We may distinguish at least two kinds of explicit messages generally contained in laws:
• the first is "this is a law". For a law to perform its effects, for its words to "do" things, it must be produced by a subject endowed with authority, and usually its production must follow a prescribed ritual. Consequently, when it is transmitted to those to whom it applies it also generally contains basic information indicating the source of its normative power, and demonstrating that the correct procedure has been followed;
• the second is the content of the norm itself: the subjects to whom it is addressed, the actions it dictates and forbids, and the consequences that obedience, or non-obedience, produces.
How "explicit" are these messages, and in particular the contents of the norms? This question is more complex, and more relevant for our theme, than it may seem. As is well known, liberal systems, in order to limit the arbitrary use of political or judiciary power, generally agree on the principle that "Everything which is not forbidden is allowed". In the words of Cesare Beccaria, one of the fathers of modern legal civilization, "every citizen must be conscious that he may pursue any action not forbidden by laws without consequences, except those deriving from the action itself" (Beccaria 1764, chapter XXV; my translation). But for this principle to be effective, it should be better articulated, by adding an adverb: "Everything which is not explicitly forbidden is allowed." For the citizens to be free from the arbitrary power of the norms, the arbitrary application of these norms must be reduced to a minimum: therefore, their content must be as clear as possible.
The Role of Ambiguity

But how clear? Ambiguities can never be totally suppressed in communications that use an inherently polysemic medium such as a human, so-called "natural", language. However much legislators may try to reduce ambiguities, they can never totally suppress them. Further, the more technical the language is, in order to make the meanings of norms as strict as possible, the less it is understandable to the majority of citizens. This way, then, the content of a norm may become less ambiguous, but only for a minority of professionals, and more obscure for those to whom the laws are addressed. One of the consequences of their unavoidable ambiguity is, of course, that norms are always open to exegesis: the role of those who apply them is also a role of interpreting them. But another consequence is that, because of their ambiguity, norms generate what we may call a halo of meanings, which goes beyond the specialized work of the interpreters, surrounds the norms themselves, and conditions the ways in which they are socially perceived. So, ambiguity and the halo of meanings it generates is one of the major sources of the "implicit" messages of norms. A classic, and much debated, example of an ambiguity that generates additional meanings is the Second Amendment to the American Constitution: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed." On the one hand, the right of the single citizen to keep and bear arms is presented as descending from the needs of the State, not from personal needs or choices; on the other hand, it is formulated in such terms as to be presented as a right of the individual. As is well known, the norm has been and is primarily interpreted as establishing the right of every citizen to arm himself/herself independently of any collective goals, while others offer a different meaning that strictly ties that right to "the security of a free State." But beyond the opposition between two different basic meanings of the norm, which may be a matter of interpretation, an important part of American public opinion attributes to it an additional meaning, according to which the right to keep and bear arms is a matter of personal freedom on the same level as free speech. And so, weapons may also be used against the orders of the State, if the citizen considers them unjustified or oppressive of his/her independence. This "additional message" has not been influential only on one side of the political spectrum. The idea of a right to keep and bear arms as a right of
every individual, along with, or in opposition to, the democratic institutions, is now generally associated with the right-wing currents of American public opinion, but in the past it has been famously invoked by movements of totally different orientation, such as the Black Panther Party in the 1960s.
The Power of Implicit Messages

The ambiguity of the laws, in any case, is only one of the sources of the implicit messages they convey. Laws live in history, and they may more or less unwittingly produce additional messages that are inferred not from their content and its unavoidable ambiguity, but from a more general context. One example: the laws that forbid police officers from drinking alcohol during working hours. They explicitly apply only to a specific category of people. But they contain some additional messages that are in fact addressed to the people at large:
• we are no longer a Prohibition society; the consumption of alcohol is permitted, but it is not a substance like the others; public authorities may and must control its use, not only based on age but also based on the categories and the activities of citizens;
• the city or the state surround you with police forces that are not only technically competent and respectful of laws, but also under moral control. You may, and should, feel in good hands.
A norm that was explicitly created to keep police officers at their most efficient while performing their duty thus becomes the criterion for a moral judgment. And although similar norms exist in many countries, their implicit messages in the United States, with its history of alcohol excesses but also of anti-alcohol crusades, may be perceived as at least somewhat different from those in many European countries. Are implicit messages of this kind really relevant? And how? First, history demonstrates that in many cases a law has been approved or rejected, by parliaments and by voters, more on the basis of its implicit meanings than of its concrete consequences. The case of vaccination laws we will discuss later will offer us a clear example. Those who oppose compulsory vaccination do so not only, and often not so much, on the basis of its real or presumed effects, but rather on the basis of the perception of these norms as asserting a total power of the State, and/or of Science, over
the body of citizens. The example of vaccination laws will also demonstrate that in many cases implicit messages may not only confer additional meanings on norms, but also distort the original ones. Second, the behavior of citizens is directly conditioned by the explicit content of laws, but it may also be influenced (and not necessarily in a lesser way) by their implicit meanings. In other words, these additional meanings are part of the variety of ways in which laws produce their effects. The influence of laws on the behavior of individuals and societies may in fact be read in terms of a stratification of effects, or of a superposition of onion skins, to use Gregory Bateson's metaphor (1956, 216):
• the most visible one, and the one that is most directly tied to their explicit contents, is obviously what we may call the obedience effect, that is, the ways in which their explicit provisions, and the systems of enforcement and sanctions which accompany them, directly condition people's conduct;
• equally important for society is what we may call the conformity effect, laws being one of the ways in which social conformity is created; in fact, a situation in which laws are less likely to be respected favors social disintegration, and the opposite is also true, because social conformity favors obedience;
• a third level has to do with implicit messages, and in particular with the ways in which they become part of value systems and value judgments.
A clear example of this third level is offered by the laws against desertion that were established in all belligerent nations in World War I. Besides the thousands who deserted, and were condemned, and the much larger number who decided not to desert (also) for fear of punishment, it is through their implicit messages that these laws touched whole armies and the civilian population. This message was: even in an anonymous war, where the individual soldier has an almost irrelevant power, his behavior will still be judged according to values historically associated with personal choice and with face-to-face combat, such as courage as opposed to cowardice. So, the criminal war codes had a crucial role in preserving a value system that might seem obsolete in the strategic and technological context of modern warfare. Beyond punishing external behavior, they contributed to conditioning how the soldiers judged each other, and also (even more important) judged themselves. Beyond directly punishing the acts of cowardice
they explicitly forbade in legal terms, they contributed to the preservation and also to a partial redefinition of the notion of cowardice in moral terms. The soldiers who decided their behavior in war on the basis of this judgment were probably far more numerous than those who were principally motivated by the fear of punishment. Another indirect message implicit in the anti-desertion laws was that in war the individual completely belonged to the state: that in the state of war, as opposed to the state of peace, almost every behavior previously associated with personal freedom may become a crime. Whatever is not explicitly permitted may become an act of insubordination.
A Final Example: The Debate on Vaccination Laws

Let me now move to a different type of legislation, one that has been widely debated in the last few years but that has also produced social controversies and opened conflicts for almost two centuries: the laws on compulsory vaccination. I want to show that the implicit meanings of these norms have been and are at least as relevant to these controversies as their explicit provisions. One preliminary statement: I believe that in general these norms are beneficial both to society as a whole and individually to each child (even though exceptions do exist). I expressly chose to discuss a case of norms that I think are good, because the study of implicit messages should not be considered primarily a polemical weapon against supposedly bad laws. All laws, however we judge them, may condition human behavior beyond their explicit content. What makes vaccination norms so interesting for our analysis?
a. The fact that they have historically been the subject of major controversies. These started in the early nineteenth century with the British anti-vaccination movement (Durbach 2000, 45–62; Porter and Porter 1988, 231–252), a movement which at the end of that century, in 1898, achieved its first success with the recognition of a "conscience clause", one of the earliest forms of conscientious objection in history. These controversies are still kept alive and made very visible by the recent "no-vax" or "free-vax" movements in many countries. More recently, the introduction of new vaccines designed to fight the Covid pandemic has revived these controversies, which have been in many countries the focus of vehement public debates, in the mass
media and also in street demonstrations. Unfortunately, much of the literature on the most recent controversies is conditioned by a pro-vaccine bias, and tends to defuse "no-vax" theses more than to understand how and why so many people, not necessarily irrational or culturally deprived, tend to believe them.
b. The fact that these controversies often revolve around issues of personal freedom and of the legitimate power (or illegitimate tyranny) of the state and the science it refers to, more than around the possible adverse effects on the bodies of citizens and children.
c. The fact that, since the beginning, many anti-vaccine movements have spread conspiracy theories about presumed projects of the public powers, or private interests (such as the so-called "Big Pharma" often cited in the recent controversies about anti-Covid vaccines), or invisible enemies, to take hold of the bodies and minds of citizens. So, in a classic study, the historian Richard Hofstadter (Hofstadter 1964) analyzed the spread in the American extreme right of the legend that the fluoridation of water in California to prevent dental disease was in fact promoted by "the Communists" in order to "brainwash" the minds of children. More recently, this kind of rumor has found wider and more rapid circulation on the Internet, particularly (but not only) on social media.
It is my opinion that, beyond their explicit provisions, vaccination laws in general carry at least two implicit messages:
• Public authorities have the right and the duty to act directly on the bodies of citizens and their children, for their own sake and in order to defend public health as a collective good;
• These laws are not the fruit of a political choice; they are dictated by the objective truth of Science.
The evolution of the anti-vaccination movement has been conditioned by these implicit messages at least as much as by the specific content of the norms. So, for instance, in Britain those who opposed vaccines in the late nineteenth century moved from the contention that they were positively harmful to the contention that they posed intolerable limits on personal freedom, and this is how these groups won their "conscience clause." The same line of thinking has recently been followed by the Italian anti-vaccination movement: some of the major websites referring to the
movement have changed their slogan from NoVax to FreeVax, and focus less and less on presumed demonstrable damages caused by vaccines, and more and more on the freedom of choice. The freedom, that is, from a presumed domination of the State over the bodies of citizens, and (implicitly) from the power of the State to impose upon parents what to do with the bodies of their children: in fact, one of the most consistent and ubiquitous slogans of the anti-vaccination movements for two centuries has been "Don't touch my children." More recently, however, many of the movements against anti-Covid vaccines have reverted to more extreme positions. From the Netherlands to Italy, rules such as temporary lockdowns have been compared to the norms of the Nazi State. Beyond such extreme rhetoric, one of the most powerful arguments of these movements is the fear that the emergency powers many States are now assuming are subverting the rules of democracy, and will not be relinquished later. As to the second implicit message, the original content of the laws, which is based on what Science has until now discovered on the basis of evidence and statistical presumption, has largely been translated (by both sides of the controversies) into a broader meaning: the authority of Science is absolute. So, on the one hand, the defenders of the laws tend to represent them as the result of truths that cannot be doubted except by obscurantists, forgetting that doubt is essential to Science. On the other hand, the opposers may take any single example of the harmful effects of a vaccination as evidence not only that we should never presume an absolute and infallible authority of Science (which should be obvious), but that all its official truths are unfounded and that they may (and should) be reversed: vaccination is a threat to health. The lasting success of conspiracy theories against vaccination, and their cyclical appearance in the course of centuries in ever similar and ever renewed forms, is due to the fact that they seem to respond in a consistent way to both these implicit messages. They "explain" epidemics not as a concrete reality, but as a deceitful construction that "the powers that be" are promoting with the help of scientists, manipulated in their turn by economic interests and/or by political power. And, as always happens with conspiracy theories and with the paranoid mentality analyzed by Hofstadter, there is no way of defusing these "explanations". Every piece of contrary evidence "demonstrates" how powerful the conspirators are in imposing their views, and how naive those who believe them are. The controversies about vaccination policies revolve around these additional messages at least as much as around vaccination in itself. Thus the
Thus the battle over the implicit meanings of vaccination laws is transforming a possible rational debate on the results and the limits of scientific research into an opposition of incompatible principles. And in such a conflict both sides may make mistakes, including the side that defends the rationality of Science, and both may defend their mistakes with extreme obstinacy. We can thus come to a couple of considerations:
• First, if the meaning of anything expressed in human languages has an unavoidable margin of ambiguity, the halo of additional meanings that may surround a message is bound to produce greater ambiguity, and also more possible misunderstandings, whose effects may be lasting and difficult to recognize and correct;
• Second, a rational discussion about the explicit meaning of a norm may be difficult, but in general a shared language is not impossible to find, in the tradition of jurisprudence and in the professional culture of legislators, lawyers, and judges; it is much harder to discuss the implicit meanings. The opposing sides may approach them with languages and conceptual frameworks that are totally incompatible, and this may make a fruitful dialogue simply impossible.
Concluding Remarks

In this essay I have tried to discuss one of the subtlest and most overlooked forms of undeclared influence on social perceptions and behaviors: the influence that derives from implicit messages and indirect pressure. The case of the implicit messages of norms is just one of a variety of possibilities. Human communication always creates not only meanings but also what I have here defined as "halos of meaning". These may be the subject of shared assumptions, taken quite literally as commonplaces, or else be a cause of undeclared dissension; they may generate voluntary or involuntary misunderstandings: all capable of funneling people's behavior in one direction or another, but in ways that are often below the line of conscious perception. Let us limit ourselves to the field discussed in this essay. While legislators are clearly responsible for the explicit meaning of the norms they enact, is anybody responsible for the implicit messages of those same norms, and for their consequences?
The individuals who are induced to a certain behavior by their interpretation of the implicit messages of laws may find themselves in a contradictory situation. On the one hand, these messages draw an indirect authority, even a form of binding force, from being the byproduct of one of the most powerful of human actions, legislating. On the other hand, this binding force cannot be ascribed, as laws in themselves always can be, to the conscious and responsible decision of a person or institution. Implicit messages remain, as it were, in the background: notions and rules that many take for granted and many share, but that nobody has explicitly stated. It is generally very difficult to ascertain whether the legislator has been more or less conscious of the implicit messages that a norm carries in itself, or whether these are, so to speak, unmeditated. And in any case it is not really relevant: these messages are always without an author. Who is responsible, then, for their consequences? Their influence is part of what Hannah Arendt, in a different context, called "the rule by Nobody" (Arendt 1969, p. 17). She spoke about the power of bureaucracy; we may borrow her words for the influence exerted by propositions, or "messages", that may in fact possess a power parallel and similar to that of real (explicit) norms, but that are not regulated, because they are, well, implicit. The main problem posed by the phenomena I have been discussing is not, as in many instances of "nudging", a cognitive one (seeing or not seeing, knowing or not knowing who is influencing one's behavior), but an ethical one: who is responsible for the implicit, and often unmeditated, messages that influence the behavior of people? The individuals who act on the basis of something they take for granted may be surrounded by innumerable others who share their messages and interpretations, but in the last instance they may be left alone with their responsibility for "obeying" or "not obeying" laws that nobody has ever written.
References

Arendt, Hannah. 1969. "Reflections on Violence." A Special Supplement to the New York Review of Books, February 27.
Austin, John Langshaw. 1962. How to Do Things with Words. Oxford: Clarendon Press.
Bateson, Gregory. 1956. "This Is Play." In Group Processes, edited by B. Schaffner, 216. New York: Josiah Macy Jr. Foundation.
Beccaria, Cesare. 1764. Dei delitti e delle pene.
Durbach, Nadja. 2000. "'They Might As Well Brand Us': Working-Class Resistance to Compulsory Vaccination in Victorian England." Social History of Medicine 13: 45–62.
Hofstadter, Richard. 1964. "The Paranoid Style in American Politics." Harper's Magazine, November.
Porter, Dorothy and Roy Porter. 1988. "The Politics of Prevention: Anti-Vaccination and Public Health in 19th Century England." Medical History 32: 231–252.
Searle, John R. 1969. Speech Acts. Cambridge: Cambridge University Press.
Conclusion: The Troubling Future of Nudging Choices Through Media for Humanity

James Katz, Katie Schiepers, and Juliet Floyd
Cross-cutting Themes

The growing array of gadgets, "smarter" smartphones, intelligent networks, and sensing devices allows greater individual freedom, convenience, economic opportunity, and new venues of self-expression and entertainment, all of which affect every level of our experience. We have more choices of what to do with our media, and of what we can have our media do to us. We wear devices on our bodies and surround ourselves with yet more, all in an effort to improve and fine-tune our lives just to our liking. This is now daily experience, and consumers engage with these algorithms with varying levels of enthusiasm and concern.
But our interest here is to probe more deeply into the ethical and philosophical aspects of these nudging systems. And we want to consider, as have our chapter authors, their implications for healthy, fulfilling, and autonomous lives in the kind of society to which these activities might give rise. Thus, in this context, a fair-minded critique of algorithmic nudging, which can yield much good, also requires that the other side of the proverbial coin be examined. That is why so many critical comments were presented in this volume's chapters even as the pragmatic value and useful aspects of the nudging process were described. So, to begin our summative assessment of the situation humans face with algorithmic nudging, we will highlight some cross-cutting themes that connect the diverse chapters in this volume.

A theme prominent in several chapters is the importance of self-effectiveness when engaging with nudges. Although expressed from multiple viewpoints, an organizing frame for these arguments might well be self-determination theory (Ryan and Deci 2000). Ryan and Deci find that, from a psychological and social-contextual perspective, over-supervision—exactly what we could expect from full-blown algorithmic nudging—"undermine(s) intrinsic motivation, self-regulation, and well-being." They argue that inherent human needs—specifically feelings of competence, autonomy, and relatedness—are important for "enhanced self-motivation and mental health and when thwarted lead to diminished motivation and well-being" (Ryan and Deci 2000: p. 68). Thus, under this broader canopy, chapter authors raise danger signals concerning the subtle and widely distributed human consequences of the deep penetration of algorithmic nudging. To make an oversimplifying play on words: where there's no will, there's no way. This point is profoundly important in terms of our concerns about ethics and philosophy as they relate to human society.

Several chapters address the mental images through which users interpret and respond to a world increasingly dominated by digital activities. This is complemented by the importance of stories or narrative understandings of what the algorithms are trying to achieve, at least as understood from the viewpoint of the user. Jos de Mul argues that the metaphors people use, even to the extent they are aware of them, determine how they understand the experience. The conceptual metaphors of users co-mingle the mechanical and software dimensions of reality with their experiences, which are mostly derived from media. In particular, science fiction movies and popular literature do much to create their understanding, and, as Laugier emphasizes, television series do the same.
From a less deterministic viewpoint, Katz and Crocker also highlight the ways in which behaviors towards algorithms are embedded in self-perceptions and a complex network of social relationships. Harper also finds much analytical material in the poor mental models that humans use when interacting with AI algorithms.

Another theme is the way nudges erode our sense of personhood and freedom, and even limit the freedom of information to which we are exposed. This was addressed by Kipper's essay and, even more directly in terms of media usage in popular culture, by Laugier.

Some authors explored the underlying sources of nudging, touching on the question of why nudging should be an appropriate form of service-provider interaction with clients and customers. Here Beauvisage and Mallet's viewpoint enters; their incisive analyses underscore the dilemmas facing those who pursue the liberal enterprise. Less directly, Harper criticizes the perceived structural disequilibrium, now historically outdated, that interferes with effective human-machine interaction and higher orders of technological progress. Katz echoes, on theoretical grounds, a point made empirically by Hall and Madsen (2022): nudges can be counterproductive.

Bias and unfair treatment inherent in or arising from poorly designed algorithmic nudges remain a concern, as acknowledged by several chapter authors. It is worth pointing out that, to some extent, this problem arises from limits both in the resources available and in the self-awareness of researchers. These aspects of the problem are therefore amenable to resolution, should the motives exist to address them. Without doubt, the recent wave of public concern about inequitable treatment of various groups in society has underscored the importance of addressing this area. Yet it also needs to be said that intentionality on the part of system designers, that is, that they set out to achieve biased results, is highly implausible. Rather, when uncovered, such outcomes are highly embarrassing and greeted with dismay and consternation by those involved in developing such algorithmic systems. Interestingly, though, to the extent that bias occurs in the application of nudging systems, the algorithmic technology itself can be part of the solution: with better analytical tools that can inspect themselves, the problem of bias can be successfully addressed (Belenguer 2022).

A persistent theme across several chapters is the way in which nudges emanating from political spheres can cajole, limit, and even penalize activities across many areas of human endeavor. This happens most particularly via communication technology and mediated interaction with dynamic computer systems, including those presented via apps.
These powers of governmental agencies, as well as, to a lesser extent, of private and other organizations, when connected to network or artificial intelligence technologies, can manipulate choice architectures in ways so detailed and so far-reaching that, until the past few years, they were for most people truly unimaginable. Sometimes these controls are directly exercised, but more frequently they operate at the level of subtle influence and ongoing modification of perceived choices, sometimes working invisibly by limiting choices that users never knew existed. Just in terms of our mobiles, they prompt us to wake up, go to sleep, get exercise, and even tell us when to ignore the devices themselves. But they also eavesdrop, and discreetly or even secretly record our movements, working with distant forces to affect the service offerings and entertainment choices we receive, and in some cases foreclosing entertainment options "for our own good." They will tattle if we violate travel and proximity rules of dubious legitimacy and efficacy. And the next frontier of mediated nudging? Our body odors might be sniffed out by our phones. Smartphones, it seems, may soon have the capability to detect molecules shed by our bodies, revealing to the sensors what we have eaten, inhaled, or drunk. We will even inadvertently reveal diseases budding within our bodies (Hernandez 2022), a topic of great interest to healthcare providers, insurers, and potential employers. We may even see this interest extended yet more broadly, for instance to potential romantic partners and social relationships. From there, one can readily imagine various dating apps getting involved as well, with all the accoutrements that would entail.
The Larger Environment: Societal Control via Nudging in a Nationwide Experiment

These changes in what our personal media and the larger mediated environment can offer to us, and do to us, are of immense importance on the human, social, and environmental levels. As such, they have justifiably attracted substantial attention in the public sphere. Academics and policymakers alike have labored long and hard to discern the potential positive and negative aspects of nudging. Essayists and public intellectuals have also chimed in on the topic. What we aim to do in this collection is to take what we believe to be a novel approach, focusing on the media nexus of the nudging process and thereby adding something distinctive to this literature.
At this point, we turn our attention to the future of nudging via communication technology. We predict that governments will seek to expand their knowledge about, and control over, the people within their power, and that commercial and social-service organizations will also seek to expand their influence over, and gather information about, those with whom they interact. Why this is so is beyond the scope of this volume, but we can see these trends manifesting themselves across modern societies as the power, popularity, and effectiveness of communication technology expands.

These trends are playing out most palpably in the People's Republic of China. China has continually pioneered what is possible in terms of social control over populations through communication and technology. Mao Zedong applied the concept of unauthorized micro-broadcasts (that is, gossiping and complaining) as a way to intervene in the smallest of interactions. He declared that these unauthorized micro-broadcasts were to be severely repressed through extreme public punishment, as a way to build social solidarity in the general population (Chang and Halliday 2005). Today, the authorities in China have implemented (albeit imperfectly) a generally more benign but also much more efficient and thorough automatic system of behavioral monitoring to control the population, including social media censorship and an army of fake online commentators (King et al. 2017).

Beyond information control, which includes mobile platforms, the Chinese government, guided by the Chinese Communist Party, is creating a system that generates a life-long social credit score for its population. The score depends on data collected across a range of activities, including taking care of one's elderly family members, paying bills on time, avoiding fracases with neighbors, proper online behavior, and honesty in dealings with others. Sensors located in public places supplement data collection; these often rely on facial recognition to identify wrongdoers, including jaywalkers. Also significant is online social media behavior and commentary, including expressions of support for the government and not repeating gossip. Both citizens and businesses are included in the process of social credit scoring. High social credit scores yield various financial and travel benefits, while low scores can mean the reverse, including being banned from high-speed trains and passenger aircraft. To date, millions have been banned from such transportation modes (Lee 2020). The question of an individual's identity—necessary to properly allocate rewards and punishments—is being solved by life-long tracking using tools such as AI aging of facial recognition templates and DNA collection. This way, no one can escape, even if they remain off the grid for decades.
The above generic social credit system is applied to ordinary Chinese citizens. Members of certain ethnic minority groups are singled out for particularly detailed monitoring and subjected to extreme sanctions, including detention and torture. Despite the strong evidence supporting these allegations, they are denied by the Chinese government (BBC 2022).

For certain Chinese Communist Party members there is another form of algorithmic nudging and checking: members of higher rank face a different form of behavioral monitoring. I have been told that there are daily lessons delivered via smartphone to those in the higher party ranks, and that these lessons need to be studied and then responded to with thoughtful commentary. Initially, it was not unheard of for these ranking party officials to turn the task over to their secretaries, who would read and respond to the lessons. To counter this practice, facial recognition technology has now been implemented so that the app can authenticate that the official in question is the one reading the material and personally inputting the response to the lessons. Further, AI can monitor iris dilation and gaze direction to gauge the viewer's level of engagement and attentiveness to the presented material. Much like the grading of college admission essays in the West, AI technology can carry out the verification process and also assess the quality of the written response; this topic of automatic AI-based judgments of human creative products is worthy of its own essay, but lies beyond the scope of this chapter.

The Communist Party has also sought ways to generate broad public enthusiasm for President Xi Jinping. Internet giant Tencent created an app that seeks to stimulate virtual applause for his party congress speeches: presented in a game format, it challenges users to tap as many times as possible to indicate applause for the President's speech (Lumb 2017). The above examples are but a few of the many ways in which smartphone applications are being harnessed to mobilize public support for Communist Party and Chinese governmental objectives. This, of course, is complemented by efforts to root out unapproved information and foreign news sources, completing the circle of communication control via communication channels. The convergence of all these forces means that nudging communication media play an important role in restricting human freedom for the roughly one out of six people on the planet who live in China. Together with the ethical dimension, this demographic statistic alone underscores the sheer numerical importance of deepening our understanding of how mobile communication media shape people's lives, in order to develop sound policies that protect human rights.
It Can’t Happen Here, Until it Does Yet it would be terribly misleading, as many of the essays in this collection demonstrate, to believe the problem of media being used to nudge and even directly control individuals is an issue that only Chinese people face. Evidence shows that there are already growing nudging systems in the West that are designed to influence and control domestic populations, even if they are not necessarily as overt as they are in the case of China. Further, there is a tendency for approaches devised in one area of endeavor to be used in others. Bad scores for customers of Airbnb and Uber can get customers banned from using those services. While this is extreme, less ambitious modes of algorithmic nudging are easy to implement and hard to notice. To take the example of Uber rideshare algorithms, this might include offering one class of vehicle over another, re-calculating fares given customer choices, or suggesting walking to a location as opposed to precise-point pickup to maximize driver efficiency. And all this would be taking place behind the curtain of customer ignorance. Aggregated social media scores are used by companies to identify promising leads and avoid risky ones. These ratings in turn can be drawn on by governmental authorities in making decisions about which citizens to investigate for tax evasion (Houser and Sanders 2020). And likewise it seems that, as Jeremy Bentham and Michele Foucault predicted, people tend to respond timorously in anticipation of judgements rendered by remote authorities and unseen evaluators (Garzik and Stern 2022; Ng 2022). This process in the West readily echoes the social credit scores of China. There is a convergence of both explicit and implicit forces of data collection and nudging operating—and growing—beneath the waterline of public visibility. The usage of mobile technology to control movement came to the fore during lockdowns to dampen the COVID pandemic. The formerly nightmarish scenario of being tracked and called to account over one’s activities and contacts (in this case with suspected carriers of the virus) became part of our waking routines (Skelding 2022). Many of these were nudging based, relying on prompts to influence the public (Krawiec et al. 2021). Tracking hardware is already commonplace or under consideration for undocumented immigrants, certain Alzheimer patients and those under house arrest (Aguilera 2022; Francis 2022). Such technology is even being used to strike a balance between freeing potentially dangerous people (namely violent sexual predators who have been conditionally released as having been successfully treated) and maintaining public accountability. This is happening in California, which is requiring certain offenders to
This is happening in California, which is requiring certain offenders to wear GPS trackers (East County Today 2022). Critics maintain this is a slippery slope that will lead to greater tracking for lesser offenses, creating a subpopulation of monitored second-class citizens.

Looking to the future, it is well known that the mobile phone's capability to monitor ambient conditions means that it can collect and relay a treasure trove of data about the user, as well as provide a rich source of environmental data. In terms of health and well-being, these data include possible symptoms of depression, joint degeneration, balance problems, poor sleep quality, mood changes, and social isolation (Majumder and Deen 2019). Mobility and exercise levels are also of particular interest. Much as car insurance companies offer reduced rates to drivers who install an activity monitor in their vehicles, it seems inevitable that employers and insurers will offer similar incentives to users who agree to intensive monitoring of diet and exercise via their mobile phones. Ever more innovative and invisible ways of collecting data that can be used for nudging are emerging.

Commercial and criminal exploitation via mobile communication devices and apps represents another vector of risks in need of attention by mobile communication scholars and, of course, policymakers. Collecting data on users has been a powerful revenue stream for many private companies. This apparently was the case for the Canadian coffee shop chain Tim Hortons, which created an app that allowed it to spy on its users. "People who downloaded the Tim Hortons app had their movements tracked and recorded every few minutes of every day, even when their app was not open," declared a Canadian report on the affair (Office of the Privacy Commissioner of Canada 2022). Such data can be used in an enormous variety of ways to nudge app users and, if resold or shared, can expand in ways that are both readily foreseeable and difficult to predict (as suggested by the smell-molecule detection potential of smartphones (Hernandez 2022), mentioned above). Thus the utility of such data extends beyond market research and resale value. A case can be made in terms of the various vulnerabilities to privacy invasion, identity theft, blackmail, and even theft of national security secrets. In the latter instance, the alleged culprit is the enormously popular Chinese app TikTok (Thayer 2022). This app has already been subjected to vociferous criticism for algorithms that nudge people in certain directions, including ever deeper into self-destructive content pathways. This algorithmic nudging via a popular app is all the worse because young people and others from vulnerable categories may be swept into dark and damaging conduct and moods (Barry et al. 2021).
The company can of course defend itself by saying that while the app nudges in certain directions, the user is always free to go in a different direction.

The ramifications of manipulated nudging environments are far-reaching. A foretaste of the possibilities was captured in a 2023 incident related to TikTok, the popular visual social media platform, which uses algorithms to guide users to specific content. The Wall Street Journal reported that the company had been tracking those of its users who watched gay or LGBT sexual content. TikTok employees in China allegedly had access to these data and controlled who had permission to view them. In a similar vein, chatbots, that is, software programs that mimic human interaction with users, can aggregate yet more detailed user profiles. The privacy implications of such practices are severe, since they open the door to blackmail, embarrassment, or other abuses of users. But these data-collection practices also go to the heart of potential algorithmic manipulation, even if their aims may be benign. Using machine learning and AI, vast amounts of individual-level data can be collected on virtually anyone who uses digital media. Intimate, psychologically fraught individual data can also potentially be collected by popular large language models (LLMs) such as ChatGPT. Machine learning techniques can discover heretofore unknown relationships and then algorithmically apply them to subtly manage an individual through a maze of nudges.

There is another, more subtle risk to the algorithmically informed choices that users make, especially in conjunction with collateral data collected by the app. Once again in the case of TikTok, the app's privacy policy explicitly states that it collects users' location and movement data as well as biometric information. Recent revelations indicate that the app also has the potential to steal fingerprint and facial recognition images, tap into password management via the clipboard facility, and access the smartphone's microphone to listen to conversations even when the app is closed. These data are not visible only to the private Chinese company ByteDance, TikTok's owner; according to Chinese law, the Chinese government can access ByteDance's data under a wide variety of circumstances. And beyond formal access by the Chinese government, there remains the possibility of unauthorized access to the company's data. Indeed, surreptitious access and data theft are a constant problem at all levels of computer security. The combined weight of all these risks represents an immense threat to privacy, personal safety, and security, as well as, indirectly, to corporate and governmental data integrity. And the panoply of risks can be both more profound and harder to discern.
The algorithms that guide TikTok users through the branching possibilities, nudges, and recommendations to various categories of content can have massive cumulative consequences. One may not agree with his domestic policy positions, but there is merit in US Sen. Ted Cruz's Facebook comment addressing the implications of the vast powers of governmentally supported app designers, in this case the algorithms of TikTok: "#TikTok is a Trojan horse the Chinese Communist Party can use to influence what Americans see, hear, and ultimately think" (Cruz 2020). The example of TikTok may be one of the most prominent illustrations of why the future of nudging via communication media is troubling. But TikTok also serves as a bellwether for the types of concerns in need of regulatory and policy interventions; clearly these extend throughout society, at the individual, group, and national levels.
Coda

In this brief sketch, we have pointed to ways in which nudging through media can be applied, through social engineering policy, to open as well as foreclose specific behavioral activities of users. Many of these applications of algorithmic prompts are clear to the user. Yet many other nudges and prompts take place below the waterline of individual awareness. These often occur in important domains of human activity and may have a cumulative and potentially pernicious effect. The distress aroused by Facebook's manipulation of news stories to see how their content influences users' moods is a case in point (Hill 2014). No news stories were foreclosed from the reader, but some were made more prominent than others, an obvious nudge in one direction or another. Moving beyond the level of individual behavior and reactions within algorithmically created environments, it is necessary to pay attention to the way corporate and especially governmental officials may manipulate this environment. These interventions can affect users in ways that rival, and could even surpass, the nightmarishly exhaustive measures described in George Orwell's dystopian novel 1984 (Orwell 1950). Yet we now stand at the precipice of heretofore unimaginable threats to our privacy and autonomy. Many of these threats are borne of mobile media and communication. Let the spirit of Orwell inspire a new generation of mobile communication scholars to consider and analyze these uses. In this way, the fruits of their work can aid in the battle to protect privacy and individual autonomy against the interests of governmental manipulation and commercial exploitation.
References

Aguilera, Jasmine. (2022). U.S. officials deploy technology to track more than 200,000 immigrants, triggering a new privacy lawsuit. Time, April 18. https://time.com/6167467/immigrant-tracking-ice-technology-data/.
Barry, Rob, Georgia Wells, John West, Joanna Stern and Jason French. (2021). TikTok serves up Sex and Drugs to minors. Wall Street Journal, Sept. 8. https://www.wsj.com/articles/tiktok-algorithm-sex-drugs-minors-11631052944.
BBC. (2022). Who are the Uyghurs and why is China being accused of genocide? BBC News, 24 May. https://www.bbc.com/news/world-asia-china-22278037.
Belenguer, Lorenzo. (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics, 10:1–17. https://doi.org/10.1007/s43681-022-00138-8.
Chang, Jung and Jon Halliday. (2005). Mao: The Unknown Story. New York: Knopf.
Cruz, Ted. (2020). Senator Ted Cruz (Facebook). https://www.facebook.com/315496455229328/posts/tiktok-is-a-trojan-horse-the-chinese-communist-party-can-use-to-influence-to-wha/3094838890628390/.
East County Today. (2022). Bill to allow police to track some violent sex offenders with GPS signed by Newsom. East County Today, July 22. https://eastcountytoday.net/bill-to-allow-police-to-track-some-violent-sexoffenders-with-gps-signed-by-newsom/.
Francis, Ellen. (2022). Britain will electronically tag some asylum seekers with GPS devices. Washington Post, June 18. https://www.washingtonpost.com/world/2022/06/18/britain-electronic-tagging-migrants-asylum-seekers/.
Garzik, Jeff and Jeremy Stern. (2022). The Borg of the Gargoyles: How government, tech, finance, and law enforcement converged into an all-knowing criminalization complex—and how to resist it. Tablet, July 18. https://www.tabletmag.com/sections/news/articles/the-borg-of-the-gargoyles.
Hall, Jonathan and Joshua Madsen. (2022). Can behavioral interventions be too salient? Evidence from traffic safety messages. Science, 376(6591). https://www.science.org/doi/10.1126/science.abm3427.
Hernandez, Daniela. (2022). High-tech smell sensors aim to sniff out disease, explosives—and even moods. Wall Street Journal, July 16. https://www.wsj.com/articles/high-tech-smell-sensors-scientists-develop-11657914274.
Hill, Kashmir. (2014). Facebook manipulated 689,003 users' emotions for Science. Forbes, June 28. https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook-manipulated-689003-users-emotions-for-science/?sh=37fdca59197c.
Houser, Kimberly and Debra Sanders. (2020). The use of Big Data analytics by the IRS: Efficient solutions or the end of privacy as we know it? Vanderbilt Journal of Entertainment and Technology Law, 19, 817. https://scholarship.law.vanderbilt.edu/jetlaw/vol19/iss4/2.
King, Gary, Jennifer Pan, and Margaret E. Roberts. (2017). How the Chinese government fabricates social media posts for strategic distraction, not engaged argument. American Political Science Review, 111(3), 484–501.
Krawiec, J.M., Piaskowska, O.M., Piesiewicz, P.F. et al. (2021). Tools for public health policy: Nudges and boosts as active support of the law in special situations such as the COVID-19 pandemic. Globalization and Health, 17, 132. https://doi.org/10.1186/s12992-021-00782-5.
Lee, Amanda. (2020). What is China's social credit system and why is it controversial? South China Morning Post, August 9. https://www.scmp.com/economy/china-economy/article/3096090/what-chinas-social-credit-system-and-why-it-controversial.
Lumb, David. (2017). Clap for China's president anywhere, anytime with this app. Engadget, October 20. https://www.engadget.com/2017-10-20-clap-for-chinas-president-anywhere-anytime-with-this-app.html.
Majumder, Sumit and Jamal Deen. (2019). Smartphone sensors for health monitoring and diagnosis. Sensors (Basel), 19(9): 2164. https://doi.org/10.3390/s19092164.
Ng, Alfred. (2022). Homeland Security records show "shocking" use of phone data, ACLU says. Politico, July 18. https://www.politico.com/news/2022/07/18/dhs-location-data-aclu-00046208.
Office of the Privacy Commissioner of Canada. (2022). Tim Hortons app violated privacy laws in collection of 'vast amounts' of sensitive location data. June 1. Gatineau, QC. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2022/nr-c_220601/.
Orwell, George. (1950). 1984. New York: Signet Classic.
Ryan, Richard M. and Edward L. Deci. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68.
Skelding, Conor. (2022). Canada secretly tracked 33 million phones during COVID-19 lockdown: report. New York Post, July 23. https://nypost.com/2021/12/25/canada-secretly-tracked-33-million-phones-during-lockdown/.
Thayer, Joel. (2022). On TikTok, it's all fun and games until China wants your info. Wall Street Journal, July 21, p. 1 ff.
Index

Note: Page numbers followed by 'n' refer to notes.
A Alexa, 89, 138, 222 Algorithms, 3–5, 9, 10, 12, 19, 23, 34–52, 77–79, 84, 88, 90, 91, 107, 122, 173–181, 183–191, 246, 247, 251–254 Amazon, 41, 42, 42n11, 148, 186–189, 199, 222 Apple, 138, 199 Artificial intelligence (AI), 7, 9, 10, 14, 42, 46, 48, 49, 50n15, 51, 75–77, 79, 85–92, 95, 97, 101, 101n82, 102, 105–107, 107n89, 117–123, 126, 127, 129–134, 207–209, 208n1, 209n3, 211, 211n6, 212n8, 214–220, 214n10, 216n11, 217n12, 217n13, 218n14, 219n15, 222–230, 224n19, 247–250, 253 Augmented-reality, 118 Austin, John Langshaw, 234
B Behavior, 1, 4, 7, 8, 10, 11, 79, 82, 84–86, 85n27, 88–91, 94n59, 98, 100, 100n79, 137–156, 174, 175, 177, 182, 184, 247, 249, 254 Bias, 4, 7, 9, 45, 70, 89, 91, 94, 94n59, 97, 140, 147, 149, 154, 174, 179, 180, 247 Bidirectional communication model, 139 Biometrics, 138 Boston Dynamics, 120, 122 Butterfly effect, 22 C Cambridge Analytica, 44, 87 Cavell, S., 8, 61, 62 CCTV, 138 Censorship, 162–164, 249
ChatGPT, 253 China, 4, 170, 176, 249–251 Choice architecture, 1–6, 21, 26, 35n1, 66, 67, 77, 149, 248 Clickbait, 28 Common good/collective good/ greater good, 15, 63, 64, 87, 141, 240 Computational irreducibility, 124–127, 134 Computational systems/computer, 3, 4, 7, 8, 13, 14, 38–41, 39n8, 43, 46, 50–52, 52n17, 77–80, 78n6, 82–87, 93n57, 100, 102, 103, 106, 107, 124, 129, 130, 132, 207–230 Conceptual metaphor/material metaphor, 35, 35n2, 35n3, 37, 38, 38n6, 39n8, 39n9, 40, 46, 47, 49, 52, 246 Conformity effect, 238 Confucius/Confucianism, 161, 162 Consent, 13, 62, 68–71, 179, 195–204 Control, 6–8, 11, 12, 20–30, 20n1, 119, 120, 173, 174, 178, 180, 184, 187, 192, 237, 248–251 Cookies, 195, 197, 199, 200 COVID, pandemic, 14, 63, 78n10, 101, 105, 181, 239, 251 Cybernetics, 139, 155 D Database(s)/data/big data/metadata/ user data, 3, 13, 42–44, 47, 59, 70, 83, 87, 89, 92, 101, 133, 140–142, 149, 163, 165, 169, 174, 189, 195, 196, 198, 200–204, 210–215, 216n11, 249, 251–253 Democracy, democratic, 10, 59–73, 87, 91, 92, 128, 129, 131, 160, 166–168, 174, 175
Descartes, Rene, 34, 37, 81, 81n18 Determinism, 24, 25, 246 Dewey, John, 69–71 E Emerson, R.W., 61, 62 External regulation of behavior, 150 Extrinsic reinforcement, 144–147 F Facebook, 39, 41, 44, 45, 87, 146, 185, 187–189, 202, 212, 215, 216, 219, 224, 254 Facial recognition, 100n78, 138, 249, 250, 253 Fifty-cent army, 160 Fitbit, 138, 139 G Gamification, 11, 141–147, 152–154 GDPR, 195–204 Google, 41, 44, 81, 90, 120, 130, 185, 188, 202, 212, 212n8 Government/government policy/ies, 4, 8, 9, 11, 12, 62, 64–69, 129, 131, 132, 160–170, 233, 249, 250, 253 H Harmony, 162 Health, 11, 15, 64, 68, 180, 182, 246, 252 Human/human nature/humanness/ humanity, 3, 5, 8, 10, 117–135, 139, 140, 150, 152, 155–156, 173–192
I Identified regulation of behavior, 151 Instagram, 41, 146 Integrated regulation of behavior, 151 Introjected regulation of behavior, 150 J Junk food, 161 L Laws, legislation, 14, 37, 39, 40, 46, 48, 49, 201, 201n1, 202, 204, 234–243, 253 LGBT, 253 Liberty/freedom/autonomy, 6–8, 11, 12, 19–31, 59, 60, 65, 106, 124, 144, 148, 150–152, 154, 161, 168, 169, 175, 185, 192, 198, 202, 246, 247, 250, 254 M Machine, 253 Masks/mask mandates, 63 Message framing, 11 Moral philosophy/morality/morals, 8, 13, 59–62, 64–67, 71–73, 93, 102, 106, 174, 182, 183, 196, 197, 202–204, 234, 237, 239 Motivational design/motivation, 11, 140–141, 143–147, 149–151, 246 N Negative energy, 160, 162, 164 Negative feedback loops, 138, 139 Nest, 138 Netflix, 42, 106, 107 New media, 8, 149–154 Nietzsche, Friedrich, 7, 36, 37
Nike Fuel, 138 Nissan Leaf, 140–141 Nudging/nudges/large-scale nudges/ digital nudges, 1–4, 6–8, 10, 12, 13, 15, 19–23, 26–31, 34–52, 59–73, 77, 90, 91, 131, 144, 147–151, 153–155, 159–170, 173–192, 195–204, 246–254 O Obedience effect, 238 OPower, 140 P Parship, 39, 42 Paternalism, 6, 21, 29, 59–73 Positive energy, 12, 159–170 Power House, 142–143, 145, 146, 148 Privacy, 9, 31, 90, 99, 99n77, 102, 179, 185, 190, 197–199, 202, 203, 252–254 Psychology, 2, 15, 83, 118, 155, 246 Purchasing behavior, 137 S Schramm, W., 139 Search engine, 14, 211, 219, 220, 225–227 Sensing technology, sensors, 137–143, 145, 149, 154, 155, 248, 249 Skinner Box, 146 Snapshot, 138, 139 Social media/social networks, 5, 11, 90, 97, 133, 137, 146–148, 151, 159n1, 163–168, 164n3, 170, 174–176, 178–181, 187, 189, 212, 213, 215–218, 217n12, 223, 233, 240, 249, 251 Spotify, 42, 184
Storytelling/story, 37, 37n5, 38, 40, 46, 153, 154 Streaming, 137, 138, 186 Surveillance, 3, 9, 11, 90, 137, 138, 154, 203 T Technology, 4, 5, 7, 9–11, 34, 38–40, 39n7, 43, 44, 46, 49, 50n13, 52, 59, 75, 76, 78n10, 85, 87–90, 92, 95, 98, 100n78, 105, 107, 118, 121–127, 131, 137–138, 142, 149, 153, 154, 173, 176, 184, 247–251 TikTok, 39, 41, 106, 146, 252–254 Tinder, 39, 42 Tracking technology, 251, 252 Turing, Alan, 9, 39, 39n8, 50n15, 75–107 TV/television, 9, 63, 71–73, 107, 246 Twitter, 41, 97, 97n65, 179, 180, 185 U User/user experience, 3, 5, 7, 10–14, 85, 93, 106, 137, 139–142, 144, 146–154, 156, 173–192,
195–204, 208–215, 214n9, 217n12, 218, 219n16, 220, 221, 223–230, 246, 248, 252–254 V Vaccination/vaccine/vaccinated, 14, 63, 188, 234, 237–242 W WeChat, 12, 159, 159n1, 163–166, 164n3 Weibo, 165, 166 Wellbeing, 8, 11, 29, 30 Wittgenstein, Ludwig, 8, 9, 61, 76–78, 76n2, 80–82, 82n19, 86–88, 88n34, 91, 92, 94, 94n59, 95, 97, 98, 98n71, 99n77, 100–103, 100n78, 101n81, 106, 107, 130 X Xerox, 209–211, 210n5