International Political Economy Series
Series Editor Timothy M. Shaw, University of Massachusetts Boston, Boston, USA; Emeritus Professor, University of London, London, UK
The global political economy is in flux as a series of cumulative crises impacts its organization and governance. The IPE series has tracked its development in both analysis and structure over the last three decades. It has always had a concentration on the global South. Now the South increasingly challenges the North as the centre of development, also reflected in a growing number of submissions and publications on indebted Eurozone economies in Southern Europe. An indispensable resource for scholars and researchers, the series examines a variety of capitalisms and connections by focusing on emerging economies, companies and sectors, debates and policies. It informs diverse policy communities as the established trans-Atlantic North declines and ‘the rest’, especially the BRICS, rise. NOW INDEXED ON SCOPUS!
More information about this series at http://www.palgrave.com/gp/series/13996
Tugrul Keskin · Ryan David Kiggins Editors
Towards an International Political Economy of Artificial Intelligence
Editors
Tugrul Keskin, Center for Global Governance, Shanghai University, Shanghai, China
Ryan David Kiggins, Department of Political Science, University of Central Oklahoma, Edmond, OK, USA
ISSN 2662-2483 ISSN 2662-2491 (electronic)
International Political Economy Series
ISBN 978-3-030-74419-9 ISBN 978-3-030-74420-5 (eBook)
https://doi.org/10.1007/978-3-030-74420-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Foreword
There is a great deal of excitement around artificial intelligence (AI) at the start of the 2020s. Gone are the days when the subject was just the preserve of computer scientists and psychologists. Suddenly everyone is interested, including artists, authors, businesspeople, economists, engineers, politicians, scientists, and social scientists. This blossoming in the popularity of AI is welcome, but we should be wary not to believe all that we hear or read about it. A common misconception is that AI is new. Yet Alan Turing first published the notion of computers that can think in 1950. In the same article, he proposed the Imitation Game, now widely referred to as the Turing Test, to evaluate whether a computer system displays intelligence. If, in a natural-language typed conversation, you cannot tell whether you are chatting with a computer or a person, then the computer has passed the test. Six years later, the Dartmouth Conference on AI was held at Dartmouth College, NH. The proposal for the conference, issued in 1955, was the first published use of the phrase "artificial intelligence". The conference itself was a loose gathering of experts who exchanged ideas during the summer of 1956. During the decades that followed, many practical and useful applications of AI were developed. They were mostly focused on explicit knowledge-based representations of AI. This family of AI techniques included so-called expert systems that captured expert knowledge as a form of advisory assistant in specialist domains like medical diagnosis,
spectrometry, mineral prospecting, and computer design. Their popularity peaked in the 1980s, but they are still an important technique today. At the same time, a separate family of AI research had been focused on models inspired by the neurons and interconnections of the brain. These data-driven models of AI had been of only academic interest until 1985, when a practical combination of an artificial neural network structure with an effective learning algorithm was published. (Specifically, the back-error propagation algorithm was shown to train a multilayered perceptron.) That breakthrough led to another wave of excitement around these structures that could effectively learn to classify data, guided by experience from training examples. So, what has happened to cause such excitement 35 years after that breakthrough and 70 years after Turing’s article? I can think of five main reasons. First, the two broad families of AI models have matured and improved through iterative development. Second, there have been some more recent developments, such as deep-learning algorithms, which are a newer and more powerful type of neural network for machine learning. Third, the rise of the Internet has enabled any online system to access vast amounts of information and to distribute intelligent behaviors among networked devices. Fourth, huge quantities of data are now available to train an AI system. Finally, and perhaps most importantly, the power of computer hardware has improved to the extent that many computationally demanding AI concepts are now practical on affordable desktop computers and mobile devices. Much of the current interest in AI is focused on machine learning. The principle is simple: show a large artificial neural network thousands of examples of images or other forms of data, and it will learn to associate those examples with their correct classification. Crucially, the network learns to generalize so that, when presented with an image or data pattern that it has not seen before, it can reliably classify it provided that similar examples existed in the training set. This is a powerful technique that enables a driverless car, for example, to recognize a Stop sign in the street. However, it is important to remember that the algorithm will not understand what that classification means. To go beyond a simple classification label requires knowledge-based AI. That’s the same AI that has its roots in the expert systems that started to evolve in the decades following the Dartmouth conference. So, any practical AI system today needs to use a mixture of techniques from the AI toolbox. AI systems can now perform amazing feats faster and
more reliably than a human. Nevertheless, we should not get carried away. Any current AI is always confined to a narrow task and has little or no conceptualization of what it does. So, despite some very real and exciting AI applications, we are still a long way from building anything that can mimic the broad range of human intelligence convincingly. Furthermore, our current models of AI are not leading in that direction. A new and unexpected model could take us by surprise at any time, but I don't expect to see it in my lifetime. Even within its current limitations, AI is starting to transform the workplace. It is already assisting professionals in pushing the boundaries of their specialism. Examples range from cancer care to improved business and economic management. AI also has the potential to remove dull and repetitive jobs. There is the tantalizing possibility of a new world order in which we work less and enjoy more leisure and education. Such a possibility creates its own challenges though, as it will require us to re-structure our societies accordingly. The COVID-19 pandemic of 2020–21 has shown that societies are capable of sweeping changes, as livelihoods have changed and homeworking has become the norm for many. AI has shown its value by supporting scientists in COVID-19 applications that include repurposing existing drugs, diagnostic interpretation of lung images, recognizing asymptomatic infected people, identifying vaccine side-effects, and predicting infection outbreaks. While the human-like qualities in AI are still rather shallow, now is the right time to grapple with the bigger ethical questions that will arise as its capabilities grow. Even now, there are questions over the degree of autonomy and responsibility to grant to an AI. Further, if it goes wrong, who will be responsible? If we ever create an AI that is truly humanlike, will it have its own rights and responsibilities? While we are a long way from that scenario, now is the time to debate, discuss, educate, and legislate for a future world in which AI is a dominant player.

∗ ∗ ∗

Portsmouth, UK
Adrian A. Hopgood
Adrian A. Hopgood is Professor of Intelligent Systems at the University of Portsmouth, UK.
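The training-and-generalization process the foreword describes can be made concrete with a short sketch. Everything in it is an illustrative assumption rather than material from this book: the scikit-learn library, its bundled handwritten-digits dataset, and the single-hidden-layer network size were chosen only for brevity.

```python
# A minimal sketch of the learning process described in the foreword: a small
# multilayer perceptron is shown labeled example images, then asked to
# classify images it has never seen. Library and dataset are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# One hidden layer, trained by back-propagation, the family of learning
# algorithms the foreword traces to the mid-1980s breakthrough.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Generalization: the network classifies images absent from the training set.
print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```

As the foreword cautions, a model trained this way attaches classification labels reliably yet has no understanding of what those labels mean.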
Contents
Part I Political Economy
1. Social Production and Artificial Intelligence (Ryan David Kiggins)
2. The Role of Women in Contemporary Technology and the Feminization of Artificial Intelligence and Its Devices (Mirjam Gruber and Roland Benedikter)
3. Rise of the Centaurs: The Internet of Things Intelligence Augmentation (Leslie Paul Thiele)
4. AI in Public Education: Humble Beginnings and Revolutionary Potential (Kenneth Rogerson and Justin Sherman)
5. Chinese and U.S. AI and Cloud Multinational Corporations in Latin America (Maximiliano Facundo Vila Seoane)
6. AI Application in Surveillance for Public Safety: Adverse Risks for Contemporary Societies (David Perez-Des Rosiers)

Part II Global Security
7. Artificial Intelligence for Peace: An Early Warning System for Mass Violence (Michael Yankoski, William Theisen, Ernesto Verdeja, and Walter J. Scheirer)
8. Between Scylla and Charybdis: The Threat of Democratized Artificial Intelligence (Ori Swed and Kerry Chávez)
9. Comparison of National Artificial Intelligence (AI): Strategic Policies and Priorities (Sdenka Zobeida Salas-Pilco)
10. Militarization of Artificial Intelligence: Progress and Implications (Shaza Arif)
11. Artificial Intelligence and International Security (Nitin Agarwala and Rana Divyank Chaudhary)

Index
Contributors
Nitin Agarwala, National Maritime Foundation, New Delhi, India
Shaza Arif, Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan
Roland Benedikter, Eurac Research Center for Advanced Studies, Bolzano-Bozen, Italy
Rana Divyank Chaudhary, National Maritime Foundation, New Delhi, India
Kerry Chávez, Department of Political Science, Texas Tech University, Lubbock, TX, USA
Mirjam Gruber, Eurac Research Center for Advanced Studies, Bolzano-Bozen, Italy
Ryan David Kiggins, Department of Political Science, University of Central Oklahoma, Edmond, OK, USA
David Perez-Des Rosiers, Institute for Global Studies, Shanghai University, Shanghai, China
Kenneth Rogerson, Sanford School of Public Policy, Duke University, Durham, NC, USA
Sdenka Zobeida Salas-Pilco, Central China Normal University, Wuhan, China
Walter J. Scheirer, Department of Computer Science and Engineering, University of Notre Dame, South Bend, IN, USA
Justin Sherman, Georgetown University, Washington, DC, USA
Ori Swed, Department of Sociology, Texas Tech University, Lubbock, TX, USA
William Theisen, Department of Computer Science and Engineering, University of Notre Dame, South Bend, IN, USA
Leslie Paul Thiele, University of Florida, Gainesville, FL, USA
Ernesto Verdeja, Kroc Institute for International Peace Studies and Department of Political Science, University of Notre Dame, South Bend, IN, USA
Maximiliano Facundo Vila Seoane, National University of San Martín (UNSAM), Buenos Aires, Argentina
Michael Yankoski, Department of Computer Science and Engineering, University of Notre Dame, South Bend, IN, USA
Acronyms
AAV: Autonomous Aerial Vehicle
ADHD: Attention Deficit Hyperactivity Disorder
AFRL: Air Force Research Laboratory
AGI: Artificial General Intelligence
AGV: Autonomous Ground Vehicle
AI: Artificial Intelligence
AIDP: New Generation Artificial Intelligence Development Plan
AMII: Alberta Machine Intelligence Institute
ANI: Artificial Narrow Intelligence
API: Application Program Interface
ASI: Artificial Super Intelligence
AVs: Autonomous Vehicles
AWS: Amazon Web Services
CAD: Canadian Dollars
CAIR: Centre for Artificial Intelligence and Robotics
CCAI: Canada CIFAR Artificial Intelligence
CEO: Chief Executive Officer
CIFAR: Canadian Institute for Advanced Research
DARPA: Defense Advanced Research Projects Agency
DoD: Department of Defense
DRDO: Defence Research and Development Organisation
EU: European Union
FAQ: Frequently Asked Questions
FY: Fiscal Year
GDP: Gross Domestic Product
GPS: Global Positioning System
HART: Harm Assessment Risk Tool
IA: Intelligence Augmentation
IBM: International Business Machines, Inc.
ICCPR: International Covenant on Civil and Political Rights
ICESCR: International Covenant on Economic, Social and Cultural Rights
ICT: Information and Communication Technologies
IM: Intelligent Machines
IoT: Internet of Things
IoTIA: Internet of Things Intelligence Augmentation
IVAs: Intelligent Virtual Assistants
JAIC: Joint Artificial Intelligence Centre
JEDI: Joint Enterprise Defense Infrastructure
KRW: South Korean Won
LAWS: Lethal Autonomous Weapons Systems
M2M: Machine-to-Machine
MARF: Multi Agent Robotic Framework
MediFor: Media Forensics
MILA: Montreal Institute for Learning Algorithms
MITT: Ministry of Industry and Information Technology
ML: Machine Learning
NGIOA: Networks and Cyberspace of Government, Industries, Organizations, and Academia
NGOs: Non-Governmental Organizations
OECD: Organization for Economic Cooperation and Development
POC: Personalized Point of Care
R&D: Research and Development
RBC: Royal Bank of Canada
SAR: Synthetic-Aperture Radar
SASC: Senate Armed Services Committee
STEM: Science, Technology, Engineering, and Math
STI: Science, Technology and Innovation
SWAT: Special Weapons and Tactics
TNCs: Transnational Corporations
U&CD: Uneven and Combined Development
UAV: Unmanned Aerial Vehicle
UDHR: Universal Declaration of Human Rights
UI: User Interface
UK: United Kingdom
UNESCO: United Nations Educational, Scientific, and Cultural Organization
US: United States
USA: United States of America
USAID: United States Agency for International Development
USD: United States Dollars
UUV: Unmanned Underwater Vehicle
VNSAs: Violent Non-State Actors
WMDs: Weapons of Mass Destruction
List of Figures
Fig. 5.1: Poster advertising Microsoft's AI in downtown Buenos Aires (Source: photo by the author, taken in March 2019)
Fig. 7.1: A selection of political memes from the past decade, all exemplifying cultural remixing. Left: a meme that is critical of the Syrian regime's use of poison gas, in the style of the iconic Obama "Hope" poster. Center: a meme associated with the UC Davis Pepper Spray Incident during the Occupy Wall Street protests. Right: a Black Lives Matter meme where the raised fist is composed of the names of police victims; this meme also includes the movement's signature hashtag
Fig. 7.2: A selection of political memes with disturbing messaging. Left: Brazilian President Jair Bolsonaro depicted as an action hero, ready to take on Brazil's drug traffickers. Center: a misogynistic meme featuring Indian Prime Minister Narendra Modi. Right: hammer and sickle superimposed on the prayer mats of Islamic worshipers in Indonesia
Fig. 7.3: The processing pipeline of the proposed early warning system for large-scale violence, composed of three basic required technology components
Fig. 7.4: The output of the provenance filtering process to find related images in a large collection for three different meme genres from Indonesia. Each row depicts the best matches to a query image (the left-most image in these examples) in sorted order, where images ideally share some aspect of visual appearance. Scores reflect the quality of matches between individual objects in images. At the very end of the sorted list, we expect the weakest match, and the very low scores reflect that. These ranks form the input to the clustering step, which presents a better arrangement for human analysis
Fig. 7.5: Selections from three different content genres from the total pool of 7,691 discovered by the prototype system
Fig. 7.6: Screenshot of the web-based UI of the prototype system
Fig. 9.1: Japan's AI development phases (Japan. Artificial Intelligence Technology Strategy Council 2017, 5) (Source: reproduced by permission of the New Energy and Industrial Technology Development Organization [NEDO], Artificial Intelligence Technology Strategy)
List of Tables
Table 5.1: Features of the main Chinese and U.S. AI-Cloud-MNCs operating in Latin America
Table 9.1: French government's resource allocation for AI development
Table 9.2: Summary of countries' AI policies, strategies, priorities, and budgets
Table 9.3: Comparison of the national AI strategic policies and priorities according to general categories and subcategories
PART I
Political Economy
CHAPTER 1
Social Production and Artificial Intelligence
Ryan David Kiggins
Thomas Edison (1847–1931) patented the incandescent lightbulb in 1879. Within three years, the first large-scale electric power generation plant was constructed on Pearl Street in New York City. Yet, electrical technology did not achieve net adoption (widespread use) until 1929, nearly fifty years after its invention (Jovanovic and Rousseau 2005). Electricity is an example of a General-Purpose Technology (GPT) that is "characterized by the potential for pervasive use in a wide range of sectors and by technological dynamism" (Bresnahan and Trajtenberg 1995, 84). Decades may pass before a GPT reaches net adoption, delivering on promised economic productivity gains (David 1990). Focusing on economic productivity gains from GPT is certainly useful but, unfortunately, may elide the inescapable social disruption generated by GPT that challenges long-held and trusted socio-political norms, practices, and institutions. Such disruption is described as a Fourth Industrial Revolution in which the biological, material, and digital are (re)combined into a single
form (Schwab 2017; also see Tegmark 2017; King et al. 2016; Brynjolfsson and McAfee 2014). Scrutinizing aspects of social disruption induced by the latest GPT—Artificial Intelligence—is the primary purpose of this collection. Employing the framework of international political economy enables this collection to grapple with the scale and scope of GPT induced disruption to social relations. By social relations we mean to include social, economic, and political interactions among humans across time, space, and culture. Where Kiggins (2018) provided a collection that investigated how material manifestations of information technology—robots and artificial intelligence—intervene in economic production, this collection pushes further into investigating how humans and machines interface, interact, and integrate to produce distinct change in social relations. Eschewing a focus on economically wealthy, politically powerful, industrialized and technologically advanced nation-states, several contributions in this volume focus attention on anticipated AI disruption in the Global South, including the "BRICs"—Brazil, Russia, India, and China. Others examine discrete aspects of AI induced disruption to broader processes of social production throughout the international political economy. This project and its focus on the Global South would not have been possible without the generous support of the Center for Global Governance at Shanghai University, China, which convened a conference to scrutinize the varieties of social disruption induced by artificial intelligence. At that gathering, dubbed the Modern Technology and International Relations Conference, scholars from across Asia, the Americas, Europe, and Oceania met during 12–13 April 2019,1 to present and discuss scholarship that spanned the myriad ways artificial intelligence currently influences, and will likely influence, structures of social relations. This collection represents a broad cross section of work presented and discussed during that conference, reflecting the interdisciplinarity of conference participants. The conference was most concerned with how artificial intelligence would alter extant social structures, practices, and relations across cultures, time, and space. In short, how the advent and adoption of artificial intelligence (AI) and AI linked or directed technologies would influence social production. Within political economy, there is a long tradition that examines the interaction of material production and social production. While most closely associated with the Marxist tradition, this approach to political economy is not original to Marx or his acolytes. On the contrary, the Marxist tradition is essentially one of several approaches
within the broader economic interpretation of history tradition, a tradition on which IPE is partly built. At the dawn of American political science, Edward R. A. Seligman (1861–1939), a professor of political science within the first department of political science established at Columbia University, New York, published a three-part article in Political Science Quarterly, between 1901 and 1902, on what he dubbed the economic interpretation of history tradition. The purpose of the three-part article was to introduce American scholars to the Marxist approach, situate the Marxist approach relative to other schools of economic interpretation, and advocate for broad acceptance of economic interpretations of history within the "American" social sciences. At this point in his career, Seligman was widely recognized as the expert in the United States on Marx and, more broadly, the economic interpretation of society tradition (Barrow 2018; Nore 1983). His students and theirs would go on to produce seminal works in the fields of history, political science, and sociology during the twentieth century. Inquiry utilizing the economic interpretation of history, therefore, need not be confined to interpreting historical phenomena. The strength of economic interpretation is precisely its focus on illuminating how economics influences much of what one observes in society, noting particularly how social production is closely associated with economic production and technological innovation. The three-part article by Seligman (1901, 1902a, b) is important to our current purposes for three reasons: (1) Twentieth-century politics and academic polemics conspired to reconceptualize the economic interpretation of history as solely Marxian; (2) Seligman notes that an economic interpretation of history should account for processes of social production; and (3) according to Seligman, an economic interpretation of social production processes need not be Marxian. Seligman argues that the core premise of an economic interpretation of social phenomena is shared across different schools within the tradition—including Marxism. Seligman describes that core premise as follows:

The existence of man depends upon his ability to sustain himself; the economic life is therefore the fundamental condition of all life. Since human life, however, is the life of man in society, individual existence moves within a framework of the social structure and is modified by it. What the conditions of maintenance are to the individual, the similar relations of production and consumption are to the community. To economic
causes, therefore, must be traced in the last instance those transformations in the structure of society which themselves condition the relations of social classes and the various manifestations of social life. (1901, 613)
The key point is that any economic interpretation of history will note economic influences on the contours of social relations, social classes, and social structures. Explicit in this approach to explicating social phenomena is technological innovation and use. A bit further along in part one of the three-part article, Seligman includes an insight from Marx that, "Technology discloses man's mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them" (quoted in Seligman 1901, 635). Humans use technology to extract from nature what is necessary to sustain themselves and, through the process of performing this activity, produce their social identities, societal structures, social classes, and social relations. The economic interpretation of history is a powerful analytical tool. It influenced American political scientists, sociologists, and historians, including Beard's ([1913] 2012) reinterpretation of the American founding as an effort to preserve the economic holdings of a landed elite, and Williams' ([1959] 1988) demonstration that the purpose of American foreign policy is to promote commercial and political expansion overseas, risking American imperialism, an interpretation of American foreign policy that came to be known as the Open Door. Building off the Open Door, Bacevich (2002) argues that American foreign policy does ultimately lead to American empire and imperialism to the peril of the country, as does the critique by Layne (2006) of American grand strategy in the post-Cold War period on the premise that pursuing overseas commercial and political expansion is detrimental to American national security. Kiggins (2015) showed, consistent with the Open Door, that the purpose of US internet governance policy is to leverage the technology to expand American products and political ideals overseas in order to ensure domestic economic growth and political tranquility, a policy that nonetheless renders the Internet a tool of American imperialism. Each noted contribution underscores, consistent with the economic interpretation tradition, the extent to which economics is of primary influence on society beyond a rigid focus on the economics of production, consumption, and distribution. Indeed, within international political economy (IPE),
LeBaron et al. (2020) show that IPE scholars investigate a wide range of topics through the lens of economic interpretation, including institutionalized racism, misogyny, and climate change, in addition to common topics such as international development, globalization, and transnational production and corporations. International political economy as a framework casts a wide analytical net, offering sufficient depth and breadth for situating this collection within IPE. An IPE orientation to AI induced disruption includes the notion that technology directly influences the composition of social relations through altering patterns of activity that constitute international political economy. There is a direct and interactive connection between technology use, economic production, and social production. This view of the interaction between technology, humans, and social relations is consistent with contemporary approaches in Science and Technology Studies (STS) that scrutinize the social construction of society and technology (Bijker et al. 2012; Bucher 2018; Fulk 1993; Klein and Kleinman 2002; Pinch and Bijker 1984; Plantin and Punathambekar 2019). The field of IPE would do well to incorporate insights from STS into efforts to account for how AI disruption will alter extant patterns of cooperation and competition within the international political economy. This volume constitutes a preliminary step toward that possibility. Before an overview of volume contributions, an introduction to AI is offered.
A Working Definition of AI

Simonite (2017) reported that AI disruption is of such magnitude that nuclear weapons will be rendered obsolete. Nuclear weapons. Obsolete. The ultima ratio in world politics since the end of World War II will cease to matter. Vladimir Putin, current President of Russia, stated that "Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world" (Simonite 2017, n.p.; also see Lee 2018; UK Parliament 2017; Vincent 2017; White House 2016). Putin is simply expressing the certainty that harnessing GPTs is crucial to sustaining military capability and economic wealth, two of three pillars of national power (see Carr 2001). The third pillar of national power, according to Carr (2001), is influence over public opinion. Through rapid net adoption of AI, competition among all actors in international political economy during the twenty-first century is shifting from the material sphere to the cognitive
sphere. No longer will military formations be the primary target adversaries plan and game to destroy. Public opinion influence is the ultimate weapon, shifting the emphasis from hard power to soft power capability through adoption, development, and exploitation of AI. For IPE scholars, understanding what AI is, is paramount. Yet, AI is difficult to define; for how does one define intelligence? Monett and Lewis note that, "Theories of intelligence…have been the source of much confusion" (2017, 212). This confusion in large measure arises from several disparate disciplines contributing to the scientific development of AI (Wang 2019). All disciplines are constituted of normative, empirical, and methodological assumptions that influence how, why, when, and who researches what questions (Kuhn 2012). The presence of incommensurability—lack of a common measure for comparison—within AI research disciplines significantly retards development of AI technology. The presence of incommensurability necessitates reliance on working definitions of Artificial Intelligence. The term "artificial" refers to non-biological. The meaning of intelligence, however, is difficult to pin down. Wang suggests the following working definition: "Intelligence is the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources" (2019, 17). For the purposes of this collection, and following Wang (2019), a working definition of Artificial Intelligence (AI) is a non-biological information-processing system capable of adapting to its environment under conditions of scarce knowledge and resources. Functionally, AI is a set of techniques for calculating the probability that an outcome will occur. As such, AI is essentially a technology that makes probabilistic predictions. "Prediction is the process of filling in missing information. Prediction takes information you have, often called 'data', and uses it to generate information you don't have" (Agrawal et al. 2018, 13; also see Mayer-Schonberger and Ramge 2018). (A minimal code sketch of this prediction framing appears at the end of this section.) The key economic cost of prediction is the price of information. With the advent of the Internet and social media technologies, the cost of information is approaching zero, and thus, the price for prediction accordingly decreases. Cheap AI prediction is driving widespread adoption of the technology (Zuboff 2019). Boundaries to AI adoption are rapidly falling in international political economy. "AI is everywhere. It's in our phones, cars, shopping experiences, romantic matchmaking, hospitals, banks, and all over the media" (Agrawal et al. 2018, 1). Lee (2018) argues and Russian President
Vladimir Putin reportedly agrees,2 that the nation-state able to develop the best AI technology will dominate the twenty-first century. On this basis alone, treating AI as the latest iteration of GPT undervalues the ubiquitous disruption to IPE that AI presents and underscores the acute urgency for developing the insight into AI disruption that this collection offers.
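The "prediction machine" framing quoted above (Agrawal et al. 2018) can be illustrated with a minimal sketch. The sketch is a toy under stated assumptions, not the method of any chapter in this volume: the data are synthetic, and the logistic-regression model and the NumPy/scikit-learn libraries are stand-ins chosen for brevity.

```python
# A minimal sketch of AI as a prediction machine: a model turns information
# you have ("data") into a probability for an outcome you don't yet know.
# The data here are synthetic placeholders generated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # observed information about 1,000 cases
true_weights = np.array([1.5, -2.0, 0.5])
y = (X @ true_weights + rng.normal(size=1000) > 0).astype(int)  # known outcomes

model = LogisticRegression().fit(X, y)  # learn the data-to-outcome pattern

# Prediction fills in missing information: the probability that the outcome
# occurs for a new, not-yet-labeled observation.
new_case = np.array([[0.2, -0.4, 1.0]])
print(f"P(outcome) = {model.predict_proba(new_case)[0, 1]:.2f}")
```

As the cost of gathering the observed information falls, each such probability estimate becomes cheaper to produce, which is the economic mechanism behind the rapid net adoption described above.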
AI, Social Production, and IPE

In the chapters that follow, AI disruption is explored and analyzed, and remedies are suggested for mitigating, even marshaling, AI disruption for widespread societal benefit. A useful framework for situating the contributions is to conceptualize IPE as composed of a clash between three sets of logic: market, state, and society (Gilpin 2001). The logic of the market is to maximize profits, contrasted with that of the state, which is to seize and harness some measure of market profit to support growth in wealth and well-being and to ensure security for a society (ibid.). The logic of society is to preserve institutions against radical change.3 Gilpin (2001) asserted that the clash of logics was central to the study of IPE, and this may be demonstrated as we utilize the logics to situate contributions to this collection. Gruber and Benedikter, in the inaugural contribution, provide a superb analysis of how AI technology confirms extant structures of patriarchy within IPE. This contribution sounds a warning about unchecked patriarchy permeating AI technology through the embodiment of female features within AI technologies without female consent or design input. The perpetuation of submissive gender stereotypes through AI technologies reflects the logic of society to the extent that patriarchy is supported through net adoption of AI, lending support to existing societal structures of power. In addition, AI female embodiment also reflects the logic of the market as it relates to valuation and commodification of female labor roles and commercialization of "the female" through AI technology. The upshot is that far from disrupting the logic of the market or the logic of society, AI instantiates contradictions and inefficiencies inherent in an IPE ordered by liberal free-market institutions. The combination of AI technology and human labor is demonstrating significant productivity gains compared to strictly AI or human effort. In Chapter 3, Thiele explores repercussions that emerge from coupling AI and human labor. Of concern is the potential loss of cognitive skill and creativity by humans as dependency grows on AI to direct human labor.
Cognitive deskilling has the potential to disrupt the logic of the market through decreasing human productivity, as creative skill and practical judgment are lost to intelligence augmentation by collaborating with AI in economic production and exchange. Such a development raises the specter of reduced productivity growth that could negatively influence future profits. Recognizing that productivity gains from human-AI teams are too scintillating for market participants to forgo, Thiele suggests that humans need to begin developing emotional and cognitive capabilities that ensure humans do not lose the race against AI. Rather, such an approach assists humans to win with AI. Thiele's contribution sets the stage for an overview of national strategies for adopting AI in public education. Through a comparative analysis of how China, India, and the US are adopting AI in their respective public education systems, Rogerson and Sherman offer a balanced analysis that explains benefits and costs to educational outcomes. Education is that rare area that is deeply interconnected with all three logics of an IPE of AI. An educated workforce is consistently linked to increased productivity gains and value creation sufficient to support efficient allocation, profit growth, and wealth accumulation. The prevailing model of societal governance is the liberal state, in part conceived on the principle that educated humans are capable of self-governance, which may include use of state regulatory power to redress inequalities that may arise through wealth distribution. Public education is crucial to the inculcation of societal norms and obeisance to extant societal social structures by social agents, thereby safeguarding societal social structures from disruptive change. Incorporating AI within public education systems has the potential to develop pedagogical models and curricula that are flexible and adaptable to student aptitudes, demographics, and socio-economic disparities. It would be possible, for example, to include curriculum designed to develop and support the cognitive skill and creativity at risk through coupling humans and AI in production teams. Yet, as Rogerson and Sherman note, it remains to be observed which public education AI adoption method will prove its worth. In Latin America, net adoption of AI, more broadly, is being driven by market competition among Chinese and American firms. According to the logic of the market, competition between firms contributes to efficient allocation of scarce resources through incentivizing firms to produce at the lowest cost to maximize profits. Vila-Seoane, in his contribution, reminds us that such logic may clash with the logic of
the state, tasked with marshaling the economic potential of a nation in order to support and legitimate societal social structures. The embeddedness of market, state, and society is on full display as Vila-Seoane, relying on a neo-Gramscian approach, demonstrates that market competition between Chinese and US transnational corporations to provide AI and cloud computing services contributes to and perpetuates uneven societal development throughout Latin America. The distribution of AI ownership among Chinese and US transnational corporations may be viewed as reflective of prevailing liberal technique in global society. Alexander scrutinizes AI from the perspective of technique, suggesting that AI qualifies as a method or set of methods rationally derived to maximize efficiency, with efficiency here taken to refer to the extent to which technique influences human agency to comport with prevailing social norms and structures in society. Any disruption to the logic of society will occur as agents, influenced by technique, adapt their behavior to emerging social structures consistent with AI. That disruption may be experienced as subtle rather than radical change. Thus, AI constitutes a technique that will refashion societal structures and agent behavior in accordance with AI algorithms. Viewing AI through the lens of technique prepares the ground for the contribution by Perez-Des Rosiers that examines how AI operates in society as surveillance technology in support of public security. AI supports the logic of society most directly through its application to surveillance of human agents. Collecting millions of human choice observations, AI analyzes that data to ascertain a pattern of behavior on which it relies to predict future human choices. The power to predict human agency offers the state and corporations the capacity to directly influence humans to exercise agency consistent with the logic of the state and the logic of the market, in line with public security objectives. Examination of AI-enabled surveillance of human agency is continued in the next chapter. A vexing problem in conflict studies is identifying precursors to instances of political violence perpetrated by human agents. Yankoski et al. set out to develop a model that may be employed to solve that problem. Focusing on hate propaganda distributed over social media networks, Yankoski et al. demonstrate the use of a computational forensic analysis that identifies trends and threats that give rise to political violence. With this information, state authorities are able to act before political violence occurs, thereby preserving peace in society. Violent non-state
actors (VNSAs), according to Swed and Chávez, in the following chapter, may be able to leverage AI technologies to outwit state authorities' efforts to prevent political violence. As the price of AI decreases and access to AI increases over time, VNSAs will be in a position to utilize AI in furtherance of their objectives, challenging the logic of the state through increasing the costs of addressing VNSAs and requiring the state to extract additional revenue from the market to fund those costs. The state effectively must exert more control over the market to fulfill its obligations to safeguard societal structures consistent with the logic of society. Both chapters that examine the role of AI in surveillance highlight distinct actors and uses for the technology, gesturing to the notion that AI is what humans make of it. Similarly, the contribution of Salas-Pilco, a survey of national AI strategies and priorities, provides perspective on the differing, and in some cases matching, approaches to net adoption of AI taken by nation-states. This detailed overview of national AI strategies and priorities is a useful primer for assessing which nation-state may achieve AI dominance vis-à-vis all others and become the undisputed preponderant AI power in IPE. As one engages with the contribution by Salas-Pilco, the logic of the market, the logic of the state, and the logic of society may be observed just below the surface of various national AI strategies and priorities. For example, Salas-Pilco notes that Canada prioritizes economic wealth development through its national AI strategy, consistent with the logic of the market. In contrast, China emphasizes economic wealth accumulation and control, consistent with both the logic of the market and the logic of the state. Indeed, China, the US, and Russia, it is noted, are most ambitious, identifying a multitude of applications for AI technologies. The militarization of information technologies is one such application much discussed and debated (see Austin 2018; Lee 2018; Wittes and Blum 2015; Harris 2015; Singer and Friedman 2014; Rid 2013). Arif's contribution weighs in on this ongoing discourse through a comparative examination of Russian, Indian, Chinese, and US military programs to leverage AI in support of military missions. Most concerning to Arif is the development of an AI arms race among these four countries, which may lead to a heightened risk of war as the political risk of war decreases with the shift of warfighting from humans to AI-enabled war machines. In the end, Arif recommends that a global dialogue about the militarization of AI begin, with the objective being a global AI arms control treaty to manage AI arms races. Echoing Arif's concerns, the final contribution by
Agarwala and Chaudhary provides a look at the extent to which militarized AI will disrupt international security. Like Arif, Agarwala and Chaudhary express great concern that AI harbors such impressive disruptive potential that a global treaty is necessary to manage AI arms races and protect against the destructive potential of a first AI war.
Conclusion

Each contribution casts light on the clash among the logics of IPE, showing how net adoption of the latest GPT—AI—causes disruption to patterns of cooperation, competition, and economic exchange. These patterns form the basis of our social relations, the products of social production processes that technology directly shapes and influences. Adequately accounting for disruption to social relations arising from changes to patterns of cooperation, competition, and economic exchange is best accomplished through an IPE framework. The breadth, depth, and flexibility of an IPE framework enable interdisciplinary inquiry into AI induced disruption, necessary for understanding how that disruption to social production will change us. Developing understanding of how AI disruption changes us is the basis for developing and enacting the policy, regulation, and governance of AI necessary for ensuring we reap the full benefit of AI technology at minimal cost.
Notes

1. http://internationalstudiesandsociology.blogspot.com/2019/04/international-conference-modern.html. Last accessed 4 November 2020.
2. See https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html. Last accessed 21 July 2020.
3. I thank my colleague and friend, Dr. Andrew Magnussen, for his willingness to listen to my travails developing the logic of society, and for offering helpful guidance as we biked through heat, humidity, and over undulating terrain.
Works Cited

Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Press, 2018.
Austin, Gregory. "Robots Writing Chinese and Fighting Underwater." In The Political Economy of Robots, pp. 271–290. Palgrave Macmillan, Cham, 2018.
Bacevich, Andrew. American Empire: The Realities and Consequences of US Diplomacy. Harvard University Press, 2002.
Barrow, Clyde. More than a Historian: The Political and Economic Thought of Charles A. Beard. Routledge, 2018.
Beard, Charles A. An Economic Interpretation of the Constitution of the United States. Simon and Schuster, [1913] 2012.
Bijker, W. E., T. P. Hughes, and T. Pinch (Eds.). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. MIT Press, 2012.
Bresnahan, Timothy F., and Manuel Trajtenberg. "General Purpose Technologies 'Engines of Growth'?" Journal of Econometrics 65, no. 1 (1995): 83–108.
Brynjolfsson, Erik, and Andrew McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company, 2014.
Bucher, Taina. If… Then: Algorithmic Power and Politics. Oxford University Press, 2018.
Carr, Edward H. The Twenty Years' Crisis: With a New Introduction by Michael Cox. Palgrave, 2001.
David, Paul A. "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox." The American Economic Review 80, no. 2 (1990): 355–361.
Fulk, Janet. "Social Construction of Communication Technology." Academy of Management Journal (1993).
Gilpin, Robert. Global Political Economy: Understanding the International Economic Order. Princeton University Press, 2001.
Harris, Shane. @War: The Rise of Cyber Warfare. Headline Publishing Group, 2015.
Jovanovic, Boyan, and Peter L. Rousseau. "General Purpose Technologies." In Handbook of Economic Growth, vol. 1, pp. 1181–1224. Elsevier, 2005.
Kiggins, Ryan David. "Big Data, Artificial Intelligence, and Autonomous Policy Decision-Making: A Crisis in International Relations Theory?" In The Political Economy of Robots, pp. 211–234. Palgrave Macmillan, 2018.
Kiggins, Ryan David. "Open for Expansion: US Policy and the Purpose for the Internet in the Post-Cold War Era." International Studies Perspectives 16, no. 1 (2015): 86–105.
King, Brett, Andy Lark, Alex Lightman, and J. P. Rangaswami. Augmented: Life in the Smart Lane. Marshall Cavendish International Asia Pte Ltd, 2016.
Klein, Hans K., and Daniel Lee Kleinman. "The Social Construction of Technology: Structural Considerations." Science, Technology, & Human Values 27, no. 1 (2002): 28–52.
Kuhn, Thomas S. The Structure of Scientific Revolutions. University of Chicago Press, 2012.
Layne, Christopher. The Peace of Illusions: American Grand Strategy from 1940 to the Present. Cornell University Press, 2007.
LeBaron, Genevieve, Daniel Mügge, Jacqueline Best, and Colin Hay. "Blind Spots in IPE: Marginalized Perspectives and Neglected Trends in Contemporary Capitalism." Review of International Political Economy (2020): 1–12.
Lee, Kai-Fu. AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt, 2018.
Mayer-Schönberger, Viktor, and Thomas Ramge. Reinventing Capitalism in the Age of Big Data. Basic Books, 2018.
Monett, Dagmar, and Colin W. P. Lewis. "Getting Clarity by Defining Artificial Intelligence—A Survey." In 3rd Conference on Philosophy and Theory of Artificial Intelligence, pp. 212–214. Springer, Cham, 2017.
Nore, Ellen. Charles A. Beard: An Intellectual Biography. Southern Illinois University Press, 1983.
Pinch, Trevor J., and Wiebe E. Bijker. "The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other." Social Studies of Science 14, no. 3 (1984): 399–441.
Plantin, Jean-Christophe, and Aswin Punathambekar. "Digital Media Infrastructures: Pipes, Platforms, and Politics." Media, Culture & Society 41, no. 2 (2019): 163–174.
Rid, Thomas. Cyber War Will Not Take Place. Oxford University Press, 2013.
Schwab, Klaus. The Fourth Industrial Revolution. Currency, 2017.
Seligman, Edwin R. A. "The Economic Interpretation of History. I." Political Science Quarterly (1901): 612–640.
Seligman, Edwin R. A. "The Economic Interpretation of History. II." Political Science Quarterly (1902a): 71–98.
Seligman, Edwin R. A. "The Economic Interpretation of History. III." Political Science Quarterly (1902b): 284–312.
Simonite, Tom. "For Superpowers, Artificial Intelligence Fuels New Global Arms Race." Wired, September 8, 2017. https://www.wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race/. Last accessed 27 October 2020.
Singer, Peter W., and Allan Friedman. Cybersecurity and Cyberwar: What Everyone Needs to Know. Oxford University Press, 2014.
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017.
Thiele, Leslie Paul. "Against Our Better Judgment: Practical Wisdom in an Age of Smart(er) Machines." In The Political Economy of Robots, pp. 183–209. Palgrave Macmillan, 2018.
UK Parliament. Science and Technology Select Committee Report on Robots and Artificial Intelligence. Tech. rep., 2017.
Vincent, James. "Putin Says the Nation that Leads in AI 'Will Be the Ruler of the World'." The Verge, 4 September 2017.
Wang, Pei. "On Defining Artificial Intelligence." Journal of Artificial General Intelligence 10, no. 2 (2019): 1–37.
White House, OSTP. Preparing for the Future of Artificial Intelligence. Tech. rep., Executive Office of the President, National Science and Technology Council Committee on Technology, 2016. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
Williams, William Appleman. The Tragedy of American Diplomacy. W. W. Norton & Company, [1959] 1988.
Wittes, Benjamin, and Gabriella Blum. The Future of Violence: Robots and Germs, Hackers and Drones—Confronting a New Age of Threat. Basic Books, 2015.
Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Hachette Book Group, 2019.
CHAPTER 2
The Role of Women in Contemporary Technology and the Feminization of Artificial Intelligence and Its Devices
Mirjam Gruber and Roland Benedikter
Introduction: The Contemporary Link Between Economy, Technology and Women

Technology and its development has been an essential factor of globalization. Innovations in information and communication technologies seem to reduce the distances between people, countries, continents and even beyond. Technology is therefore a fundamental driver of the economy, and new products in this field are linked essentially to global and international economic growth (Mowery and Rosenberg 1991). Technology, for its part, is developing at high speed, partly outpacing ethical attempts to cope with new options and feasibilities. Yet during the past century
another factor became an important part of evolving economies and has remained so, with increasing impact on today's globalized world: Women (Joekes 1990). The reason was and is simple. To stay competitive, nations have needed not only men in the labor market but have had to include the other half of the population. Currently, many studies on international development, employment and entrepreneurship show how essential women have become in the labor market and for the general performance of economies around the globe (International Monetary Fund 2018; PwC, Women in Work Index 2018; McKinsey & Company 2018; Cuberes and Teignier 2016; Ferrant and Kolev 2016; Bliss and Garratt 2001). The mobilization of women to join the labor force in post-Fordist conditions has been debated quite controversially, fostered by studies which suggest that in the 1970s and 1980s, i.e., before the latest wave of women entering the workforce, an average U.S. household with just one member of the family working, usually the male, and the other staying at home with the children, had the same or a similar buying power as a current family in which, as has been common since the 1990s, both parents work (Forste and Fox 2012). While such studies are often confined to (neo-)conservative debates and thus only partly included in national and international foresights, there is a debate about how and to what extent the mobilization of women may have contributed to the sinking global average buying power and the rising global inequality rates—which is another paradox inbuilt in the question of the "inclusion" of women in neoliberal economic systems. While women's economic empowerment remains essential, critics argue that the propaganda of inclusion led to the exploitation of women as much as to their liberation from gender-restricted roles and domestic submission (Golla et al. 2011; Fox 2002). And while most analysts agree that gender equality "boosts economic growth" (European Institute for Gender Equality 2017) and advancing women's equality could add USD 12 trillion to global growth by 2025 (Woetzel et al. 2015), critics assert these numbers just mean national economic growth via the growth of enterprises and firms and thus national productivity, not individual growth as related to household and personal buying power. However, there is also a similarly disputed—and perhaps equally essential—link between technology and women that has shaped the social development of the past decades and promises to grow even more in importance over the coming years. In this chapter, we show why it is important to monitor the growing interconnection between what is denoted as
"female" and technology, especially with regard to the recent developments in Artificial Intelligence (AI). We note that new technology devices equipped with AI systems often have female features, and that this seems to point to a broader trend toward commodification and commercialization of "the female" in and through the realm of advanced technology. While most early AI-equipped tech devices had a very rudimentary sound from the point of view of pronunciation, timbre, sentence melody, etc.—they sounded like what we expected from robots—with the help of AI technologies, machines now sound quite similar to humans. Remarkably, most of the actual techno-voices sound female. For instance, the default voices of most up-to-date navigation systems are female. Smart language assistants are also equipped by default with a female voice. Other examples of commonly known AI systems abundantly present in everyday life are Siri, Alexa, Cortana and Google Home. Most recently, it seems that the feminization of technological systems is going yet another step further: toward the feminization of the appearance and design of tech devices. Roboticists are already creating or developing AI systems in humanoid bodies, and those bodies are in many cases female. However, most systems equipped with AI are not artificial intelligence in the strict sense but rather semi-intelligent machines with a very specific intelligence based on human-fed machine learning. Even though the most eminent technological progress takes place in the military industry (see Shaza Arif in this volume, Chapter 10)—and AI systems and robots used there usually do not show female characteristics—gender biases are firmly in place and apparently not changing. Nevertheless, the female robots (also known as gynoids or fembots) that exist in the technologized economy tend to be created to work as service assistants or even as sex workers, thus confirming submissive gender stereotypes (Benedikter and Gruber 2019; Telegraph 2018; NBC San Diego 2017; McCurry 2015; Chen et al. 2017; Richardson 2016; Rogers 2015). Many, such as Erica, Asuna, Nadine, Harmonie and Roxxxy, are already in the public eye. Looking at these developments, we have to ask: How are technology and the role of women related, and what roles do new, allegedly AI-based technological applications play in this respect? In our view, the focus of investigation of the female-technology intersection must lie with AI because, on the one hand, the technology seems to have a lot of (ascribed) potential for the future and can therefore be positive; and on the other
hand, because AI seems to be most affected by its embodiment and representation through female bodies and voices. To answer these questions, we draw on several feminist theories with different approaches to technology, aiming to get to the origins of this emerging relationship. In the following sections, we address and discuss these perspectives and try to answer the research questions.
Literature Review: Technology and Feminism

Feminist literature and theory deal with the subject of technology differently, depending on the current, strand, or approach of feminist theory applied. Those currents and strands cannot be put into a simple chronological order; they overlap and often arise and develop in parallel. We demonstrate how different feminist scholars have integrated the topic of technology and have developed and evaluated theories and concepts regarding the historical and cultural relations between gender and technology. Feminists' questions about technology and gender have shifted from how women can gain access to technology to the processes and developments within technology itself: how technology can be used for gender emancipation, and how it is developed and used in general (Wajcman 2009). In the following, we explain some of the approaches, starting with second-wave feminism.

Second-Wave Feminism

Second-wave feminism began in the 1960s in the USA and included a broad range of topics and issues such as sexuality, family, work, reproductive rights and, later, technology. First-wave feminism, by contrast, had focused mainly on suffrage and overturning legal obstacles to gender equality.

Liberal Feminism

Due to the historical exclusion of women from the fields of science and technology, liberal feminism regarded technology as a power resource that was almost exclusively in the hands of men (Bailey and Telford 2007, 247; Wajcman 2004). Early liberal feminists focused on this exclusion (Wajcman 1991, 2004). In their view, the problem is the male control of (neutral) technologies (Wajcman 2009). When enthusiasm for technology boomed in the 1980s, exemplified by Silicon Valley, many feminists insisted that the patriarchy of the nineteenth and twentieth centuries
manifested itself in the machine world conceived and invented by men. In this view, the achievements of technology were, and would always remain, an extended arm of men. Liberal feminists promoted equal access for women to education and employment in these areas. Indeed, the fundamental idea of bourgeois/liberal feminism is the equality of both sexes: reforms should resolve inequalities within the existing social system.

Radical Feminism

Radical feminism emphasizes the difference between men and women. In this view, men systematically dominated and controlled women's power, culture and pleasure (Wajcman 2009, 146). According to thinkers who emerged from the student New Left and the Civil Rights Movement (notably Catharine MacKinnon and Mary Daly), separation is the only way out of the male domination of women. Radical feminists such as Mary Daly (1978), Susan Griffin (1978) and ecofeminist Maria Mies (1985) "warn that technology has historically and culturally been constructed as male and used for patriarchal objectives such as war and capitalist exploitation. They claim technology is gendered from the ground up as the entire process of design and development is controlled by men with men's interest in mind" (Bailey and Telford 2007, 247). Technologies of human biological reproduction in particular are seen as dangerous and as a further patriarchal exploitation of women's bodies. Thus, radical feminists strongly oppose new reproductive technologies such as in vitro fertilization (Wajcman 2009), since in their view technology is a patriarchal tool used to control women's bodies and sexuality and to replace their labor; it is the enemy of feminism and women's equality because it reinforces gendered and classed hierarchies (Bailey and Telford 2007, 247; Daly 1978). Women were often portrayed as victims of patriarchal technoscience (Wajcman 2009). In this interpretation, women are subordinated to men, which even permits men to use women's bodies as objects (sexual appropriation) (Bailey and Telford 2007, 250; MacKinnon 1982). Of course, this approach has not been without criticism (e.g., Denfeld 1995; Roiphe 1994).

Socialist or Marxist Feminism

Socialist feminism focuses on the relationship between technology and women's work. Socialist feminists understand the oppression of women as part of capitalism and patriarchy, based on exploitation and class differences. According to Wajcman (2009), this literature argued
that technology "is far from being an autonomous force, technology itself is crucially affected by the antagonistic class relations of production" (p. 147). The male domination of skilled trades that developed during the Industrial Revolution resulted in the exclusion of women from technology (Bradley 1989; Cockburn 1983; Milkman 1987). Wajcman (2009, 146) also notes that socialist and radical feminists analyzed "the gendered nature of technical expertise, and put the spotlight on artefacts themselves." While liberal feminists regarded technological artifacts as neutral, socialist feminists saw male dominance embedded in the machine world and regarded technology as a key source of male power, socially shaped by men (Wajcman 2009, 147; Wajcman 1991; Cockburn 1985; McNeil 1987; Webster 1989). On this account, women were excluded, and only revolution could change women's role in society.

Third-Wave Feminism

Third-wave feminism is comparatively more positive regarding technology. In information and communication technologies (ICTs) in particular, it sees the possibility of empowering women and transforming gender relations within technology (Kemp and Squires 1998).

Post-feminism

Feminists of the third wave also deal with the social construction of gender, changing identities and cultural categories. For instance, recent feminists such as Wajcman (1991, 2004) lean on the work of the American philosopher and gender theorist Judith Butler and other postmodern feminists, in whose view gender is socially constructed. Therefore, the "gendered relations and interests that shape technology" (Bailey and Telford 2007, 248) are also transformable. Butler focuses on the body, which can be related to technology, especially since technology devices are now embodied in humanoid bodies and voices. Even though the relationship between Butler's concept of the body and technology is not self-evident, we introduce it briefly here and expand on it below. In Butler's view, "'the body' is itself a construction, as are the myriad 'bodies' that constitute the domain of gendered subjects. Bodies cannot be said to have a significant existence prior to the mark of their gender; the question then emerges: To what extent does the body come into being in and through the mark(s) of gender?" (Butler 1990, 12, italics
in the original). According to Butler, the construal of sex is a "cultural norm which governs the materialization of bodies" (Butler 2011, 2–3). In the Western tradition the mind is supreme; Butler radicalizes this supremacy, and according to Kuhlmann and Babitsch (2002, 434), for Butler "the 'mind' no longer dominates the body, but instead the body is reduced to cultural practices." Elizabeth Grosz, Professor of Women's Studies and Literature at Duke University, also places the body at the center of her analysis. In contrast to several feminist theories, she sees the body as much more than what society and culture allow, as something that could even lead women to independence and autonomy (Grosz 1994, xiii). However, both Grosz and Butler assume that the "categorization of bodies according to gender is not unequivocal and requires permanent confirmation and repetition" (Kuhlmann and Babitsch 2002, 136; Angerer 2002). Donna Haraway, postmodern feminist,1 author of A Cyborg Manifesto and a prominent scholar in the field of science and technology studies, assumes that the body is "made" but is also an agent and not a resource (1991). In the Cyborg Manifesto, she relies on post-colonial and critical race theories and introduces the cyborg "to argue in favor of coalitions based on affinities of interest rather than on categorical identities such as gender, class, and race" (Bailey and Telford 2007, 255). She questions the distinction between machine and organism. Her work is directed against several feminist approaches, discussed above, that plead for a clear rejection of technology and posit a one-sided attachment of women to nature. In Haraway's view, this attachment extends to both nature and technology. She argues (1997) that overcoming boundaries can help to dismantle gender biases and thus overcome hierarchies (Kuhlmann and Babitsch 2002, 436). Haraway "is the most radical proponent of the dissolution of the boundaries between nature and technology, based on the figure of the cyborg, a creature with a hybrid body which cannot be described using the usual polarized categories (Haraway 1997)" (Kuhlmann and Babitsch 2002, 436). She describes a cyborg as follows:

A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction. […] The cyborg is a creature in a post-gender world; it has no truck with bisexuality, pre-oedipal symbiosis, unalienated labour, or other seductions to organic
wholeness through a final appropriation of all the powers of the parts into a higher unity. (Haraway 1991, 149–150)
Haraway sees the Western world as characterized by dualisms, one side being ruler and the other servant: "Chief among these troubling dualisms are self/other, mind/body, culture/nature, male/female, civilized/primitive, reality/appearance, whole/part, agent/resource, maker/made, active/passive, right/wrong, truth/illusion, total/partial, God/man" (Haraway 1995, 177). Haraway's cyborg helps dissolve problematic dualisms such as self/other, which legitimize domination over women, black people, nature, workers and animals, since domination over people who are considered different is taken for granted. Without ignoring the criticism that the manifesto received, it offers an exceptional perspective on a possibly transformative relationship between technology and feminism (Bailey and Telford 2007, 258).

Cyberfeminism

Haraway is optimistic about the opportunities opened up by new technologies and plays an important role in cyberfeminism; her cyborg became the cyberfeminists' ideal citizen of a post-patriarchal society. Sadie Plant's "Zeros and Ones: Digital Women and the New Technoculture" is considered another foundational text of cyberfeminism. Cyberfeminists are an unofficial group of female thinkers, coders and artists who focused on the new information and communication technologies of the late twentieth century. They ask: Can we use technology to hack the codes of patriarchy? Can we escape gender online? Cyberspace and the internet are seen as possibilities for the liberation of women, and women are seen as especially suited for life in the digital age, whether despite or because of the fact that early computer technology was largely a male domain. According to Wajcman (2009, 147), "[c]yberfeminists […] see digital technologies as blurring of boundaries between humans and machines, and between male and female, enabling their users to choose their disguises and assume alternative identities." She adds (2009, 144) that recent cyberfeminist writings "see digital and biomedical technologies as offering possibilities for destabilizing conventional gender differences."
Technofeminism

Technofeminists dislodge the assumption that technological artifacts and objects are separate from society. They regard those objects not as merely technical or social but as part of a social fabric that holds society together (Wajcman 2009, 149). According to several scholars, this constructivist approach "treats technology as a sociotechnical product – a seamless web or network combining artefacts, people, organizations, cultural meanings and knowledge" (Wajcman 2009, 149; Hackett et al. 2008; Law and Hassard 1999; MacKenzie and Wajcman 1999; Bijker et al. 1987). Technology and society thus mutually shape technological change and development. However, the marginalization of women in technological professions and their communities influences the design, use and technical content of artifacts (see empirical research on the microwave oven, e.g., Cockburn and Ormrod 1993; on the telephone, e.g., Martin 1991; and on robotics, e.g., Suchman 2008). According to Wajcman (2009, 149), "if 'technology is society made durable' (Latour 1991, 103), then gender power relations will influence the process of technological change, which in turn configures gender relations."

Xenofeminism

The objective of xenofeminists is, similar to that of cyberfeminists, to use technology for progressive gender-political purposes. In their view, technology can be used to change the world; the risks of imbalance, abuse and exploitation of the weak must be taken into account, but accepted. Xenofeminists want to abolish gender by letting hundreds of sexes flourish, which should then create a society in which the properties currently gathered under the heading of gender no longer serve as a grid for the asymmetric functioning of power. Xenofeminists such as Helen Hester (2018) have a positive attitude toward bodyhacking, a special form of biohacking.2 According to the Wirtschaftslexikon Gabler (2020), "In bodyhacking, one intervenes invasively or non-invasively in the animal or human body, often in the sense of animal or human enhancement and sometimes with the ideology of transhumanism. It is about physical and psychological transformation, and it can result in the animal or human cyborg" (own translation from German). In this respect xenofeminists resemble the second-wave feminists, who do not equate biology with destiny: biohacking technologies can demonstrate that biology is modifiable, without denying biological facts.
Technology Development Is a Source and a Consequence of the Development of the Role of Women

Here we answer the research questions of how technology and the role of women are related, and what role AI plays, by considering various feminist approaches and secondary data on gender emancipation and AI.

Second-Wave Feminism and the Proportion of Women in Technology Jobs

Even though women in many industrial countries have many more rights than 50 years ago, the tech industry is male-controlled and women are still largely excluded from technology jobs. Women's access to education has improved worldwide and the number of women who graduate from universities has risen, but only approximately 12–18% of engineers are women (World Bank Data 2018; National Science Board 2018; Kaspura 2017; Computer Science 2012). Particularly in industrialized countries, the tech industry is clearly dominated by men. Statistics from major tech companies located in Silicon Valley confirm this explicitly. Women account for 28% of the total workforce at Microsoft, 32% at Google, 33% at Apple and 37% at Facebook. In none of these companies do women hold more than 23% of tech jobs, and on average only about 28% of management positions are held by women (Richter 2020). Female perspectives, opinions and judgments are therefore largely lacking. The absence of women from big technology companies could be an indication of the relationship between today's technology and women. Moreover, this absence reveals that gender is still an influential factor in the design, development, use and consumption of technology devices. Technology reflects the people who make it. The use of female voices and bodies in new technology devices indicates that they are created especially for men, in the way men want and with men's purposes in mind. As both liberal feminists and radical feminists claim, technology is designed, developed and ultimately controlled by men. Voice assistants such as Alexa, Siri and Cortana, as well as robots with female bodies such as Erica, Nadine or Sophia, are examples that could confirm these assumptions and theories. According to Plant (1995, 59), "Software […] has a user-friendly face it turns to man, and for it, as for woman, this is only its camouflage."
In sum, the absence of women in tech jobs is reflected in the technology itself, in the form of female voices and bodies. Recent literature confirms these assumptions. In her book Invisible Women: Exposing Data Bias in a World Designed for Men, Caroline Criado Perez uses several examples to show how technology discriminates against women. She analyzes a huge amount of data and exposes many disadvantages women face in today's world. Especially in technological innovation, the biological differences between men and women are ignored and technologies are adapted to male characteristics. Examples range from mobile phones that are on average too big for women's hands to airbags designed to protect the male body, putting women at a clear disadvantage in traffic accidents (Perez 2019). The situation is no different with our previous example of voice assistants. They are not only female by design but also use speech recognition systems that are trained to recognize men's voices in particular. For instance, "Google's version is 70% more likely to understand men" (Glaser 2019), so women's instructions are less likely to be understood correctly by their tech devices and therefore less likely to be executed correctly. In other words, female voice assistants are created first of all for men. AI technologies have already been implemented in medicine. In fact, more and more diagnostic robots and systems work with machine learning technologies, the basis of AI. However, many new technologies in the medical field are not yet part of the basic equipment of national health systems, so it is hard to find relevant research about them. As an example, the angiogram, a (non-AI) method for diagnosing heart attacks, is based only on male data, i.e., the symptoms of a man suffering a heart attack. In women, a heart attack often manifests itself through a different set of symptoms and therefore often goes undetected, with fatal consequences (Regitz-Zagrosek and Schmid-Altringer 2020; Perez 2019). The fear that new AI-equipped technologies will mainly be tested on male subjects and then implemented is therefore not unfounded. After all, in medical research, too, both in the human and animal fields, the male is preferred as a test subject (Regitz-Zagrosek and Schmid-Altringer 2020; Perez 2019). We cannot say exactly how the new diagnostic systems are programmed, but machine learning techniques certainly present an opportunity to include a broader and more heterogeneous range of data. Thus, they could even reverse this discrimination against women by including female data equally.
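What "including female data equally" would require in practice can be made concrete with a simple subgroup audit. The following sketch is purely illustrative: the records and the model's predictions are invented placeholders, not data from any real diagnostic system, and the helper function is our own hypothetical construction. It shows the basic check of comparing a model's false-negative rate (missed diagnoses) for female and male patients:

```python
# Hypothetical audit of a diagnostic model's error rates by sex.
# All records (sex, true_label, model_prediction) are invented.

def false_negative_rate(y_true, y_pred):
    """Share of actual positives (e.g., heart attacks) the model misses."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for t, p in positives if p == 0) / len(positives)

records = [
    ("f", 1, 0), ("f", 1, 0), ("f", 1, 1), ("f", 0, 0),
    ("m", 1, 1), ("m", 1, 1), ("m", 1, 0), ("m", 0, 0),
]

for sex in ("f", "m"):
    subset = [(t, p) for s, t, p in records if s == sex]
    y_true = [t for t, _ in subset]
    y_pred = [p for _, p in subset]
    print(sex, "false-negative rate:", round(false_negative_rate(y_true, y_pred), 2))
```

In this toy data the model misses two of three heart attacks among women but only one of three among men (rates of 0.67 versus 0.33); it is exactly this kind of measurable gap that training on more heterogeneous, sex-balanced data would aim to close.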
Furthermore, in this view, technology reflects society and thus reproduces the domination of men over women. Jennifer Robertson, the author of Robo Sapiens Japanicus, analyzed the development of robots in Japan, where intelligent machines are already being used, especially in caring professions (IFR 2018). She argues that female robots, so-called fembots, are created in a way that encourages the preservation of traditional family structures and politics. Fembots thus recall traditional roles in which women did men's groundwork. Indeed, the majority of the world is still characterized by a traditional or discriminatory division of sex roles: concretely, women (and LGBT people) have fewer legal, political and social rights than heterosexual men. This background seems to be reflected in technology. Although total equality does not exist in many societies, feminists have achieved substantial gains for women in the majority of industrialized countries. From the perspective of radical feminists, the feminization of technological devices, i.e., female robots and female AI voices, could now be interpreted as a renewed attempt to control the female body. One reason for this development could therefore be to sustain the patriarchal system, because the feminization of tech devices seems to lend a certain legitimacy to a hierarchy that gives social, political and economic power to men. The link between technology and women can also be found in the increasing supply of and demand for sex robots, which are mainly female. According to Ritchie and Roser (2019), it is estimated that there are currently about 130 million "missing" women in the world as a result of selective abortion and excess female deaths. Particularly in Asian countries, the ratio of women to men is very unequal. India, for example, has 109 men per 100 women (Chao et al. 2019). It seems that female sex robots with AI are sold as a solution for this imbalance. Such robots differ from simple sex dolls because they offer an alternative relationship for their users; at least according to their creators, who see a possibility of extending merely physical "relationships" with lifeless dolls into psychological ones, i.e., of transferring love for a person to a machine (Kleeman 2017; Benedikter and Gruber 2019; for more on human-machine relationships, see Chapter 3, "Rise of the Centaurs: The Internet of Things Intelligence Augmentation"). The debate about sex robots is divided. On the one hand, experts consider sex robots a positive approach with the potential to reduce prostitution, rape and abuse of women (Levy 2009; Eneman et al.
2009). On the other hand, critics are convinced that sex robots spread a stereotypical and misogynistic image of women. The scholar and activist Kathleen Richardson argues that the human-robot sexual relationship has been imported from prostitute-client sex work, and prostitution "is no ordinary activity and relies on the ability to use a person as a thing" (Richardson 2016, 290). Furthermore, she claims there is no evidence for the argument that sex robots will reduce prostitution or other forms of abuse of women. Since the development of sex robots with AI is still in its infancy, it is difficult for scientists to make data-based and representative analyses. But there is a similar example of where technology and the exploitation of women converge: the internet. The internet is not just a technology that drives the simple reproduction of stereotypical images of women; there is a complex connection between the internet and the sex industry. On one side stands the technology of the internet, one of the most important drivers of globalization and an important factor for the economy. On the other side stands the sex industry, which has grown extremely fast in recent decades and has reached a new dimension (revenue estimates for the USA vary from 6 billion dollars a year to 97 billion dollars a year; Kelly 2017; Kari 2016; NBC News 2015; Davenporte 2018). The internet and the sex industry have pushed and stimulated each other enormously. On the one hand, internet pornography has driven the technological progress of the internet; the transition to HTML5, for example, was decisively influenced by the porn industry (Caulfield 2010). On the other hand, with the internet the sex industry expanded its distribution and sales and became a mega-business (Kelly 2017; Kari 2016; NBC News 2015; Davenporte 2018). In sum, the example of the internet and the sex industry underlines the relationship between technology and the role of women. Unfortunately, these examples show that this relationship is often detrimental to women, because pornography and the sex industry in general objectify women and very often spread stereotypical and misogynous images of them. Furthermore, this business is strongly linked to human trafficking and sexual servitude, and children are increasingly involved. The internet has thus contributed to a new form of exploitation of women and even children (especially girls), and has reached new technological levels through this exploitation (Hughes 2001).
Third-Wave Feminism and the Female Body

The feminization of the latest generation of technology devices seems to demote the female body from the role of subject, i.e., an actor in creating and developing interaction, to the role of object. The feminization of technology devices thus tends to undo the de-objectification of the female body. In the view of socialist feminists, the artifacts, in this case the devices, voices and machines of the newest generation, are at the center of attention and clearly illustrate the power of men. As already mentioned, women are still largely excluded from the creation and development of technology. The dominance of men in the capitalist system of our world is thereby demonstrated in a newly offensive way. Where do female cyborgs find a place in the cyberfeminist community, and how could technology help us escape gender stereotypes? These could be current questions for the cyberfeminist movement. Wajcman (2009, 148) states that "[t]o move forward we need to understand that technology as such is neither inherently patriarchal nor unambiguously liberating." Providing AI or robots with female bodies and voices only reinforces existing social problems. In our view, the bodies and voices of the newest generation of technology devices do not make gender less important or less relevant, but rather the other way around. Technology devices with female-like bodies and female voices do not seem to dissolve the problematic dualism of self/other, which legitimizes domination over women or any other group. Post-feminists acknowledge a historical meaning of the categories "sex" and "gender," which are "part of our cultural and political discourses," but these categories are also "always unfinished and capable of change" (Bailey and Telford 2007, 251). Does the current feminization of highly advanced technology devices, including AI, bear out the assumption of Butler, who claims that gender is socially constructed in order "to denaturalize and re-signify bodily categories" (Butler 1990, x)? When technology leaders give AI and robots distinct female features, they not only reinforce the confirmation and repetition of those categories but also expand them into the technological dimension. In the mainstream media of various countries, the feminization of robots and AI is not yet a salient topic. But at second glance, there is growing interest. Attention is drawn above all to the question of why language assistants are almost exclusively female. The "imitation" of women in robot design, for example, is justified by the assumption that women appear
more trustworthy, sympathetic and attractive than men (Lobe 2021; Stern 2017; Science ORF.at 2012; Eyssel and Hegel 2012; Alba 2017). The traditional understanding of the role of the helpful or even selflessly caring woman is thus transferred to machines. The successful cross-cultural implementation of systems such as Alexa, Cortana and Google Home in transnational European everyday life supports this assumption (Stern 2017). The manufacturers deny that their AI systems are intentionally female; yet the systems appear female in a seemingly "natural" way because they take on tasks that, historically speaking, were primarily performed by women (Schmalzried 2018).
Conclusion

We have observed that AI technology devices of the newest generation are becoming female or are built with female features, which raised our questions of how technology and the role of women are related and, in this respect, what role AI plays. To investigate these issues, we reviewed several concepts from feminist theories that focus especially on technology and gender. This review has shown that technology is of central interest for feminist theories. Especially since the second wave of feminism, technology has been a relevant factor for questions regarding gender. Moreover, the current direction of technology development, in particular AI, can be explained from the various points of view of feminist theories. The distinct approaches offer different explanations, which can classify and categorize various aspects of current developments. Despite the differences between theories, the exclusion of women from technology jobs, and subsequently from the design, creation, development, consumption and usage of tech devices, is a fundamental factor in the present issue. We cannot say how many women were involved in the creation of the newest tech devices, but we know that their creators and controllers are mainly men; female perspectives, interests and viewpoints are thus largely lacking in the creation of AI. The relationship between the role of women and technology is therefore imbalanced, and we have shown that women are often discriminated against by technological devices. The example of the relationship between the sex industry and the internet revealed that technological progress is sometimes even driven by a particular exploitation of women. Furthermore, the recent feminization of tech devices seems to reinforce the return of a traditional role model for women.
Women continue to be considered the weaker sex, and the feminization of advanced technology devices may thus be seen as an attempt to dominate women. This chapter has shown that feminist approaches to the relationship between women, technology and the female body are not obsolete: even though women have already achieved a great deal in terms of gender equality, the fight is still not over, and technology plays a crucial role in it. In conclusion, in this chapter we introduced some general issues concerning technology and gender, especially in relation to women, and proposed some possible explanations of current developments. We are convinced that technology and AI devices have great potential to advance gender emancipation and equity, but as yet, current technology devices remain an obstacle to holistic gender equity.
Notes

1. Marxist feminists Rosemary Hennessy and Chrys Ingraham criticize postmodern theories and explicitly rank Donna Haraway among "cultural feminists" (http://fuchs.uti.at/wp-content/uploads/infogestechn/haraway.html).
2. According to Merriam-Webster, biohacking is defined as "biological experimentation (as by gene editing or the use of drugs or implants) done to improve the qualities or capabilities of living organisms especially by individuals and groups working outside a traditional medical or scientific research environment." Retrieved from https://www.merriam-webster.com/dictionary/biohacking.
Bibliography

Alba, Alejandro. 2017. "Where Are All the Male AI Voice Assistants?" Vocativ, February 24. https://www.vocativ.com/404806/male-ai-voice-assistants-apple-siri/index.html.
Angerer, Marie-Luise. 2002. "The Body of Gender: Oder the Body of What? Zur Leere des Geschlechts und seiner Fassade" [The Emptiness of Sex and Its Facade]. In Konfiguration des Menschen, edited by Ellen Kuhlmann and Regine Kollek, 169–179. Wiesbaden: VS Verlag für Sozialwissenschaften.
Bailey, Jane, and Adrienne Telford. 2007. "What's So Cyber About It? Reflections on Cyberfeminism's Contribution to Legal Studies." Canadian Journal of Women and the Law 19: 243.
Benedikter, Roland, and Mirjam Gruber. 2019. "The Technological Retro-Revolution of Gender: In a Rising Post-Human and Post-Western World, It Is Time to Rediscuss the Politics of the Female Body." In Feminist Philosophy of Technology, edited by Janina Loh and Marc Coeckelbergh, 187–205. Stuttgart: J.B. Metzler.
Bijker, Wiebe E., Thomas P. Hughes, and Trevor J. Pinch (eds.). 1987. The Social Construction of Technological Systems. Cambridge, MA: MIT Press.
Bliss, Richard T., and Nichole L. Garratt. 2001. "Supporting Women Entrepreneurs in Transitioning Economies." Journal of Small Business Management 39, no. 4: 336–344. https://doi.org/10.1111/0447-2778.00030.
Bradley, Harriet. 1989. Men's Work, Women's Work. Cambridge: Polity Press.
Butler, Judith. 1990. Gender Trouble. New York: Routledge.
Butler, Judith. 2011. Bodies That Matter: On the Discursive Limits of Sex. Oxon: Routledge.
Caulfield, Brian. 2010. "Porn Maven Says He'll Dump Flash for HTML 5." Forbes, June 29. Accessed August 3, 2020. https://www.forbes.com/sites/velocity/2010/06/29/porn-maven-says-hell-dump-flash-for-html-5/.
Chao, Fengqing, Patrick Gerland, Alex R. Cook, and Leontine Alkema. 2019. "Systematic Assessment of the Sex Ratio at Birth for All Countries and Estimation of National Imbalances and Regional Reference Levels." Proceedings of the National Academy of Sciences 116, no. 19: 9303–9311.
Chen, Yingfeng, Feng Wu, Wei Shuai, and Xiaoping Chen. 2017. "Robots Serve Humans in Public Places—KeJia Robot as a Shopping Assistant." International Journal of Advanced Robotic Systems 14, no. 3: 1729881417703569. https://doi.org/10.1177/1729881417703569.
Cockburn, Cynthia. 1983. Brothers: Male Dominance and Technological Change. London: Pluto Press.
Cockburn, Cynthia. 1985. Machinery of Dominance: Women, Men and Technical Know-How. London: Pluto Press.
Cockburn, Cynthia, and Susan Ormrod. 1993. Gender and Technology in the Making. London: Sage.
Computer Science. 2012. "Women in Computer Science." Last modified October 12, 2018. https://www.computerscience.org/resources/women-in-computer-science/.
Cuberes, David, and Marc Teignier. 2016. "Aggregate Effects of Gender Gaps in the Labor Market: A Quantitative Estimate." Journal of Human Capital 10, no. 1: 1–32. https://doi.org/10.1086/683847.
Daly, Mary. 1978. Gyn/Ecology: The Metaethics of Radical Feminism. Boston: Beacon Press.
Denfeld, Rene. 1995. The New Victorians: A Young Woman's Challenge to the Old Feminist Order. New York: Warner Books.
Davenporte, Barbie. 2018. "Porn Is Not a $4 Billion Industry. Think Before You Swallow." LA Weekly, October 18. Accessed August 2, 2020. https://www.laweekly.com/porn-is-not-a-4-billion-industry-think-before-you-swallow/.
Eneman, Marie, Alisdair A. Gillespie, and Bernd Carsten Stahl. 2009. "Criminalising Fantasies: The Regulation of Virtual Child Pornography." In Proceedings of the 17th European Conference on Information Systems, edited by P. Tavel, 8–10.
European Institute for Gender Equality. 2017. "Gender Equality Boosts Economic Growth." March 2017. Accessed May 22, 2018. https://eige.europa.eu/news/gender-equality-boosts-economic-growth.
Eyssel, Friederike, and Frank Hegel. 2012. "(S)he's Got the Look: Gender Stereotyping of Robots." Journal of Applied Social Psychology 42, no. 9: 2213–2230.
Ferrant, Gaëlle, and Alexandre Kolev. 2016. "Does Gender Discrimination in Social Institutions Matter for Long-Term Growth? Cross-Country Evidence." OECD Development Centre Working Papers, No. 330. Paris: OECD Publishing. https://doi.org/10.1787/5jm2hz8dgls6-en.
Forste, Renata, and Kiira Fox. 2012. "Household Labor, Gender Roles, and Family Satisfaction: A Cross-National Comparison." Journal of Comparative Family Studies 43, no. 5: 613–631.
Fox, Julia. 2002. "Women's Work and Resistance in the Global Economy." In Labor and Capital in the Age of Globalization, edited by Berch Berberoglu. Lanham, MD: Rowman & Littlefield.
Glaser, Eliane. 2019. "Invisible Women by Caroline Criado Perez—A World Designed for Men." The Guardian, February 26. Accessed August 3, 2020. https://www.theguardian.com/books/2019/feb/28/invisible-women-by-caroline-criado-perez-review.
Golla, Anne Marie, Anju Malhotra, Priya Nanda, and Rekha Mehra. 2011. "Understanding and Measuring Women's Economic Empowerment: Definition, Frameworks and Indicators." Accessed August 3, 2020. https://www.icrw.org/wp-content/uploads/2016/10/Understanding-measuring-womens-economic-empowerment.pdf.
Grosz, Elizabeth. 1994. Volatile Bodies: Toward a Corporeal Feminism. Bloomington: Indiana University Press.
Hackett, Edward J., Olga Amsterdamska, Michael Lynch, and Judy Wajcman (eds.). 2008. The Handbook of Science and Technology Studies. Cambridge, MA: MIT Press.
Haraway, Donna. 1991. Simians, Cyborgs, and Women: The Reinvention of Nature. New York and London: Routledge.
Haraway, Donna. 1995. Ciencia, cyborgs y mujeres: la reinvención de la naturaleza. Valencia: Universitat de València.
Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. New York: Routledge.
Hughes, Donna M. 2001. "Globalization, Information Technology, and Sexual Exploitation of Women and Children." Rain and Thunder: A Radical Feminist Journal of Discussion and Activism 13: 1–3.
IFR. 2018. "Robot Density Rises Globally." Accessed May 29, 2019. https://ifr.org/ifr-press-releases/news/robot-density-rises-globally.
International Monetary Fund. 2018. "Pursuing Women's Economic Empowerment." Accessed March 9, 2019. https://www.imf.org/en/Publications/Policy-Papers/Issues/2018/05/31/pp053118pursuing-womens-economic-empowerment.
Joekes, Susan P. (ed.). 1990. Women in the World Economy: An INSTRAW Study. Oxford: Oxford University Press.
Kaspura, Andre. 2017. The Engineering Profession: A Statistical Overview, Thirteenth Edition.
Kari, Paul. 2016. "This Ballot Measure Could Upend the $10 Billion Porn Industry." MarketWatch, November 4. Accessed August 3, 2020. https://www.marketwatch.com/story/the-vote-that-could-turn-the-adult-film-industry-upside-down-2016-11-02.
Kelly, Guy. 2017. "The Scary Effects of Pornography: How the 21st Century's Acute Addiction Is Rewiring Our Brains." The Telegraph, September 11. Accessed August 3, 2020. https://www.telegraph.co.uk/men/thinking-man/scary-effects-pornography-21st-centurys-accute-addiction-rewiring/.
Kemp, Sandra, and Judith Squires (eds.). 1998. Feminisms: An Oxford Reader. Oxford: Oxford University Press.
Kleeman, Jenny. 2017. "The Race to Build the World's First Sex Robot." The Guardian, April 27. https://www.theguardian.com/technology/2017/apr/27/race-to-build-world-first-sex-robot.
Kuhlmann, Ellen, and Birgit Babitsch. 2002. "Bodies, Health, Gender—Bridging Feminist Theories and Women's Health." Women's Studies International Forum 25, no. 4: 433–442.
Latour, Bruno. 1991. "Technology Is Society Made Durable." In A Sociology of Monsters: Essays on Power, Technology and Domination, edited by John Law, 103–131. London: Routledge.
Law, John, and John Hassard (eds.). 1999. Actor-Network Theory and After. Oxford: Blackwell.
Levy, David. 2009. Love and Sex with Robots: The Evolution of Human-Robot Relationships. New York: Harper Perennial.
Lobe, Adrian. 2021. "Wie weibliche Roboter Genderstereotype festigen" [How Female Robots Reinforce Gender Stereotypes]. Der Standard, February 28. https://www.derstandard.at/story/2000124495962/wie-weibliche-roboter-genderstereotype-festigen.
Martin, Michèle. 1991. 'Hello Central?': Gender, Technology, and the Culture in the Formation of Telephone Systems. Montreal: McGill-Queen's University Press.
MacKenzie, Donald, and Judy Wajcman. 1999. The Social Shaping of Technology. Milton Keynes: Open University Press.
McCurry, Justin. 2015. "Erica, the 'Most Beautiful and Intelligent' Android, Leads Japan's Robot Revolution." The Guardian, December 31. Accessed August 4, 2020. https://www.theguardian.com/technology/2015/dec/31/erica-the-most-beautiful-and-intelligent-android-ever-leads-japans-robot-revolution.
McKinsey & Company. 2018. "Women Matter: Time to Accelerate. Ten Years of Insights into Gender Diversity 2018." Accessed February 8, 2019. https://www.empowerwomen.org/-/media/files/un%20women/empowerwomen/resources/hlp%20briefs/unhlp%20full%20report.pdf?la=en.
Milkman, Ruth. 1987. Gender at Work: The Dynamics of Job Segregation During World War II. Urbana: University of Illinois Press.
MacKinnon, Catharine A. 1982. "Feminism, Marxism, Method, and the State: An Agenda for Theory." Signs: Journal of Women in Culture and Society 7, no. 3: 515–544.
Macdonald, Kenneth. 2017. "A Robotic Revolution in Healthcare." BBC News, March 20. https://www.bbc.com/news/uk-scotland-39330441.
McNeil, Maureen (ed.). 1987. Gender and Expertise. London: Free Association Books.
Mowery, David C., and Nathan Rosenberg. 1991. Technology and the Pursuit of Economic Growth. Cambridge: Cambridge University Press.
National Science Board. 2018. "Chapter 2: Higher Education in Science and Engineering." Science & Engineering Indicators 2018. Accessed August 4, 2020. https://www.nsf.gov/statistics/2018/nsb20181/assets/561/higher-education-in-science-and-engineering.pdf.
NBC News. 2015. "Things Are Looking Up in America's Porn Industry." January 20. Accessed August 4, 2020. https://www.nbcnews.com/business/business-news/things-are-looking-americas-porn-industry-n289431.
NBC San Diego. 2017. "Artificial Intelligent Robot Receives Citizenship in Saudi Arabia: CNBC." Last modified October 25, 2017. https://www.nbcsandiego.com/news/national-international/Artificial-Intelligent-Robot-Receives-Citizenship-in-Saudi-Arabia-453196913.html.
Perez, Caroline Criado. 2019. Invisible Women: Exposing Data Bias in a World Designed for Men. New York: Random House.
Plant, Sadie. 1995. "The Future Looms: Weaving Women and Cybernetics." Body & Society 1, no. 3–4: 45–64. https://doi.org/10.1177/1357034X95001003003.
PwC. 2018. Women in Work Index 2018. Accessed July 17, 2020. https://www.pwc.co.uk/services/economics-policy/insights/women-in-work-index.html.
Regitz-Zagrosek, Vera, and Stefanie Schmid-Altringer. 2020. Gendermedizin: Warum Frauen eine andere Medizin brauchen: Mit Praxistipps zu Vorsorge und Diagnostik [Gender Medicine: Why Women Need a Different Medicine: With Practical Tips on Prevention and Diagnostics]. Munich: Scorpio Verlag.
Richardson, Kathleen. 2016. "The Asymmetrical 'Relationship': Parallels Between Prostitution and the Development of Sex Robots." ACM SIGCAS Computers and Society 45, no. 3: 290–293.
Richter, Felix. 2020. "GAFAM: Women Still Underrepresented in Tech." Statista, February 19. Accessed August 5, 2020. https://www.statista.com/chart/4467/female-employees-at-tech-companies/.
Ritchie, Hannah, and Max Roser. 2019. "Gender Ratio." Our World in Data. https://ourworldindata.org/gender-ratio.
Rogers, Krista. 2015. "Meet Asuna, the Hyperreal Android That Will Leave Your Jaw Hanging." SoraNews24, February 11. Accessed August 3, 2020. https://soranews24.com/2015/02/11/meet-asuna-the-hyperreal-android-that-will-leave-your-jaw-hanging/.
Roiphe, Katie. 1994. The Morning After: Sex, Fear, and Feminism. New York: Back Bay Books.
Schmalzried, Gregor. 2018. "Warum ist Siri eine Frau?" [Why Is Siri a Woman?]. Puls, February 16. https://www.br.de/puls/themen/netz/weibliche-stimmen-bei-alexa-google-home-und-co-warum-ist-siri-eine-frau-100.html.
Science ORF.at. 2012. "Typisch weiblich, typisch männlich" [Typically Female, Typically Male]. https://sciencev2.orf.at/stories/1703594/index.html.
Stern, Joanna. 2017. "Alexa, Siri, Cortana: The Problem with All-Female Digital Assistants." The Wall Street Journal, February 21. https://www.wsj.com/articles/alexa-siri-cortana-the-problem-with-all-female-digital-assistants-1487709068?mod=rss_Technology.
Suchman, Lucy. 2008. "Feminist STS and the Sciences of the Artificial." In The Handbook of Science and Technology Studies, 3rd ed., edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman, 139–164. Cambridge, MA: MIT Press.
Telegraph. 2018. "Sophia the Robot Takes Her First Steps." January 28. Accessed July 29, 2020. https://www.telegraph.co.uk/technology/2018/01/08/sophia-robot-takes-first-steps/.
Wajcman, Judy. 1991. Feminism Confronts Technology. Cambridge: Polity Press.
Wajcman, Judy. 2004. TechnoFeminism. Cambridge: Polity Press.
Wajcman, Judy. 2009. "Feminist Theories of Technology." Cambridge Journal of Economics 34: 143–152. https://doi.org/10.1093/cje/ben057.
Webster, Juliet. 1989. Office Automation: The Labour Process and Women's Work in Britain. Hemel Hempstead: Wheatsheaf.
Wirtschaftslexikon Gabler. 2020. "Bodyhacking." Accessed May 23, 2020. https://wirtschaftslexikon.gabler.de/definition/bodyhacking-100401.
Woetzel, Jonathan, Anu Madgavkar, Kweilin Ellingrud, Eric Labaye, Sandrine Devillard, Eric Kutcher, James Manyika, Richard Dobbs, and Mekala Krishnan. 2015. "The Power of Parity: How Advancing Women's Equality Can Add $12 Trillion to Global Growth." McKinsey & Company. Accessed August 4, 2020. https://www.mckinsey.com/featured-insights/employment-and-growth/how-advancing-womens-equality-can-add-12-trillion-to-global-growth.
World Bank Data. 2018. "School Enrollment, Tertiary, Female (% Gross)." Accessed March 3, 2020. https://data.worldbank.org/indicator/SE.TER.ENRR.FE.
CHAPTER 3
Rise of the Centaurs: The Internet of Things Intelligence Augmentation

Leslie Paul Thiele
Artificial Intelligence (AI) frequently captures headlines in the popular press and has become a focus of attention for business corporations, governments, military organizations, and scholars. AI is depicted as a tool for achieving unprecedented human prosperity. It is also described as the greatest threat to our species, with the potential to undermine human rights and freedoms, shatter human identity, and threaten human extinction. While such opportunities and risks merit increased attention and study, the impact of Intelligence Augmentation (IA) is often overlooked. Yet IA, not AI, is the most pressing and proximate concern for our species, particularly given its coupling with the Internet of Things (IoT). In the wake of vast increases in data streams and computing power, an Internet of Things Intelligence Augmentation (IoTIA) will fundamentally transform economic, social, and political life. Humans and intelligent machines working together in centaur relationships will become pervasive, prolific, and predominant. But for billions of humans, the IoTIA
may designate an Internet of Things Intelligence Amputation. A crisis of cognitive deskilling is in store, as digital tools erode the aptitudes of their human users. Cultivating the human capacities for creativity and practical judgment is crucial to a promising future in an IoTIA world.
Rise of the Machines

There have been many "AI winters" over the last half-century. In this technological field, like so many others, the hype of rapid progress has frequently exceeded concrete deliverables. But notable achievements are now imminent. In both military and civilian sectors, we are in the midst of an "AI arms race." Global resources devoted to the development of artificial intelligence are at an all-time high. Consequently, machine learning is advancing rapidly, doubling in computational power every three and a half months, much faster than Moore's Law would predict (Saran 2019). (A three-and-a-half-month doubling time implies roughly a 2^(12/3.5) ≈ 11-fold increase in compute per year, whereas the two-year doubling of Moore's Law implies only about a 1.4-fold annual increase.) Neural networks are particularly promising. Also known as deep learning, neural networks demand great quantities of data and raw computing power. Both are now widely available and growing rapidly, a condition that did not prevail in the early decades of AI development. The AI spring has arrived, and it will be fecund.

The deployment of ever-smarter machines appears inevitable. That realization, for many, conjures images of terminator-style robots engaged in mortal combat with bands of rag-tag human rebels. This "rise of the machines" fantasy is captivating. A winner-take-all battle of humans against AI-enhanced robots is foreseen in the not-so-distant future. Such apocalyptic visions are not wild fancy, despite their frequent forecasting by Hollywood filmmakers. Smart people are seriously worried (Hawking et al. 2014). In some respects, the rise of the machines is playing out in real time. AI programs add more notches to their belts daily, proving themselves proficient, and often superhuman, in predictive modeling, strategic games, vehicle navigation, natural language translation, and complex pattern recognition, including that of flora, fauna, and human voices and faces. Hollywood may actually have something to teach us about the trajectory of artificial intelligence. The Terminator films did not present a simple "man versus machine" storyline. Beginning with Terminator 2: Judgment Day, collaboration between humans and machines became the name of the game. The lead characters found themselves allied with a smart machine, namely a cyborg sent from the future, in their efforts
to defeat Skynet, the self-aware, global network of drones, servers, military satellites, fighting machines, and misanthropic androids. If the battle against malicious artificial intelligence is to be successfully waged, viewers are led to believe, victory can only be secured by an alliance of humans and machines. Is humanity in a race against machines, desperate to stave off subordination, enslavement, or extermination by artificial intelligence? Or are humans advancing with machines in a cooperative venture? Those who adopt the latter perspective foresee human-machine systems producing unprecedented prosperity while preventing any number of natural or anthropogenic disasters, from killer asteroids colliding with Earth to catastrophic climate change, planetary resource depletion, pandemics, failed states, and global war. "This is not a race against the machines," Kelly (2012) observes, "If we race against them, we lose. This is a race with the machines. You'll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do." Such predictions are increasingly common. While optimistic, they are also unnerving. The race with the machines will radically transform economic, social, and political life, and the speed and scale of the transformation will be unprecedented. We are approaching a watershed in human history. Following the defeat of chess grandmaster Garry Kasparov by IBM's Deep Blue in 1997, the exploration of humans working in tandem with smart machines began in earnest. A year after his loss, Kasparov founded the field of "advanced chess," often called "centaur chess." Pairing human players with computers proved very effective. Centaur teams regularly defeat both the best AI programs and the best human players in chess tournaments (Thompson 2013). Centaur relationships are not limited to strategic games. The US Defense Advanced Research Projects Agency (DARPA) is currently developing the "third wave" of artificial intelligence. First-wave AI followed rules. Second-wave AI engaged in advanced statistical analysis. Third-wave AI incorporates reasoning and contextual awareness into machine learning. This will allow AI to generate, test, and refine its own hypotheses, working collaboratively with designers, engineers, and scientists. First- and second-wave AI only did what it was told to do. The "grand vision," Valerie Browning, director of DARPA's Defense
Sciences Office, observes, is to transform machines from tools into "trusted, collaborative partners" that discursively engage with human co-investigators (Corrigan 2018, 2019). The future, Browning suggests, belongs to centaurs. Centaurs were mythological creatures with the upper body of a human and the lower body of a horse. The centaur myth likely arose among the ancient Minoan people after their exposure to horse-riding nomads. The Aztecs, in like manner, initially apprehended the invading Spanish cavalry as creatures that were half-man and half-animal. Both ancient horsemen and contemporary centaurs display a symbiotic relationship. And just as the cavalry of former times conquered and colonized their respective worlds (e.g., the Mongol conquest of Eurasia, and the Spanish conquest of the Aztec and Incan civilizations), so contemporary centaurs are well positioned to become the dominant forces of the twenty-first century. Evidence for a centaur future abounds. The medical and business sectors are widely deploying coupled human-AI systems, sometimes called "co-bots" (Daugherty and Wilson 2018). Global challenges and crises, such as climate change, are thought solvable only by means of human-AI partnerships (University of Cambridge 2019; Gaudin 2019). Military organizations are developing "centaur strategies" in which human agents work in tandem with semi-autonomous weaponry (Scharre 2016). And the arena of social, economic, and political analysis and forecasting is well positioned for centaur relationships (Tetlock and Gardner 2015, 23). Across a growing number of sectors and fields, centaurs outperform human-only or AI-only systems in data analysis, prediction, pattern recognition, and a host of other capacities (Agrawal et al. 2018, 65; Silver 2012, 125). A centaur future is in the making.
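The claim that a blend of human and machine judgment can beat either alone is easy to illustrate. The sketch below is a toy model only: the forecasts and outcomes are invented numbers, not data from any study cited here, and the simple averaging rule stands in for the far richer division of labor in real centaur teams. It scores probabilistic forecasts with the Brier score (mean squared error; lower is better):

```python
# Toy "centaur" forecaster: average a machine's probability estimates
# with a human analyst's, then compare Brier scores. Invented data.

def brier(probs, outcomes):
    """Mean squared error of probabilistic forecasts (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

machine = [0.8, 0.3, 0.6, 0.2, 0.7]   # model's event probabilities
human   = [0.6, 0.1, 0.8, 0.4, 0.9]   # analyst's event probabilities
actual  = [1,   0,   1,   0,   1  ]   # observed outcomes

centaur = [(m + h) / 2 for m, h in zip(machine, human)]  # naive blend

for name, probs in (("machine", machine), ("human", human), ("centaur", centaur)):
    print(f"{name:8s} Brier score: {brier(probs, actual):.3f}")
```

With these numbers the blend scores 0.070, against 0.084 for the machine and 0.076 for the human alone, because the two forecasters err in partly offsetting directions. By the convexity of squared error, averaging can never do worse than the mean of the two individual scores, though it is not guaranteed to beat the better partner in every case.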
Bicycles for the Mind and the IoTIA

J. C. R. Licklider was known as the "Johnny Appleseed" of computing for pioneering ARPANET, the direct forerunner to the Internet. Six decades ago, Licklider (1960, 5) presciently observed: "There are many man-machine systems. At present, however, there are no man-computer symbioses… The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly…. [and] the main intellectual advances will be made by men and computers working together in intimate association." Douglas Engelbart also envisioned a man-computer symbiosis. His work at the Augmentation Research Center
at SRI International in the 1960s resulted in the creation of the computer mouse, the development of hypertext, early graphical user interfaces, and networked computers. Focusing his efforts on the use of computers to increase intellectual capacity and facilitate collaborative work, Engelbart founded the field of Intelligence Augmentation or IA, also known as Intelligence Amplification (Markoff 2015, xii–7, 114). With intelligence augmentation in view, Steve Jobs called the personal computer "a bicycle for our minds." Tool building separates human beings from other primates, he mused. Then Jobs provided the context for his striking metaphor:

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So that didn't look so good. But then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle…. A human on a bicycle blew the condor away, completely off the top of the charts. And that's what a computer is to me … it's the most remarkable tool that we've ever come up with; and it's the equivalent of a bicycle for our minds. (Jobs 2010)
Jobs's depiction of the personal computer as a bicycle for the mind provides a direct correlate to the mythological Homo equus. In both instances, the human mind and will gain speed, endurance, and power from a symbiotic partnership. If the personal computer offers a bicycle for the mind, the IoT provides an entire transportation system. It is a burgeoning global network of interconnected computers, smartphones, appliances, buildings, vehicles, cameras, apps, and sensors (some of which are attached to or embedded in human bodies). The number of connected devices in the IoT surpassed the number of human beings on the planet in the early 2010s. That number quadrupled by 2020 to over 30 billion and is predicted to reach 75 billion by 2025 (Statista 2020); growth from 30 to 75 billion in five years implies a compound rate of roughly 20% per year, since 30 × 1.2^5 ≈ 75. Some industry analysts forecast 500 billion IoT devices by 2030 (Cisco 2016). Most of the information flowing within the IoT is generated, transmitted, aggregated, and analyzed by smart machines. Indeed, machine-to-machine (M2M) communication likely already exceeds the total global volume of all human-to-human conversation and communication. We are
in the midst of a "second machine age" characterized by billions of interconnected, interactive smart devices. It may surpass—and "make mockery of"—the scope and depth of all previous transformations of the human condition (Brynjolfsson and McAfee 2014, 66, 96). The IoT will be the primary home of intelligence augmentation, putting a vast and quickly expanding world of information, networks, and capabilities at human fingertips. It portends significant advances in productivity and performance across diverse sectors and fields: from agriculture, energy management, manufacturing, and national security, to education, health care, human rights protection, and natural resource conservation. Smartphones, laptops, and other forms of mobile technology, for example, enable lay populations to play the role of "citizen scientists." With the aid of digital devices and apps, they can gather local data for, and collaborate with, professional counterparts. The result is a "swarm" or "emergent" intelligence that surpasses that which any expert or smart machine might demonstrate in isolation (Rheingold 2003, xii, 178–182). Pietro Michelucci, director of the Human Computation Institute, argues that the "distributed brain" created by human and computer intelligence in a crowdsourced IoTIA system has the potential to solve many of the world's "wicked problems," such as climate change and geopolitical conflict (Michelucci and Dickinson 2016). This "crowd and the cloud" approach to knowledge creation and problem-solving is gaining traction within both academic and governmental circles (see http://crowdandcloud.org). Bicycles for the body increase human mobility and efficiency without undermining agency, while fostering physical fitness. Centaur relationships within digital ecosystems can provide heightened efficiency and intellectual power, while potentially contributing to mental fitness. However, digital technologies within the IoT operate at superhuman levels of connectivity, speed, and functionality and require diminishing amounts of human effort. For billions of people availing themselves of the IoT, the need and opportunity to exercise memory, reasoning, resourcefulness, judgment, and diverse social aptitudes are steadily declining. While the IoT demonstrates an unprecedented capacity for data capture and management, networking, and functionality, people within this digital web may exercise very little agency. Human beings will remain the subjects of much of the data gathered and analyzed (e.g., health data generated by individuals wearing biometric sensors). People will also serve
as the targets of advertising informed by this data. And, of course, there will be myriad centaur relationships. Still, the IoT—arguably the primary vehicle for commercial and technological development in the twenty-first century—will have smart machines occupying many if not most of the drivers' seats. The hope was that the IoTIA would provide souped-up bicycles for active intellects. But as Case (2018) observes, digitally augmented minds are increasingly kicking back in La-Z-Boy recliners. The human brain is like a muscle: absent use, it weakens and atrophies. With the Internet of Things supplying ever more services and dispatching ever more tasks, legions of humans will have many of their cognitive capacities sidelined. This is the "crisis" that David Krakauer (2015) of the Santa Fe Institute identifies as "app intelligence"—the plethora of AI-enabled programs, devices, and networks that effectively usurp human decision-making and contribute to the erosion of a broad swath of mental capacities. Throughout history, technological advances have been accompanied by the deskilling of the workforce. In the early industrial age, machines assumed many of the tasks formerly carried out by artisans and manual laborers. Many human skill sets were eclipsed. With the development of computers, technology also contributed to the deskilling of clerical staff, service industry workers, and many professionals engaged in data processing and analysis. As repetitive, calculative, and analytic tasks were increasingly dispatched by machines and computers over the last two centuries, human personnel were often freed up to engage in more complex, high-skill endeavors. In this manner, technological advances were accompanied by the reskilling and upskilling of workers (Vallor 2015). The path to reskilling and upskilling in the wake of the IoT expansion, however, is uncertain at best. Consider the field of medicine. Tencent recently partnered with Babylon Health to incorporate a healthcare assistant into its WeChat platform. Effectively a cross between Siri and WebMD, it creates a digital health profile of users that can be employed to monitor and diagnose a wide range of medical conditions. Such developments may drastically reduce the need for doctors and nurses, and those who remain will increasingly rely on the AI-enhanced IoT to aggregate and analyze medical knowledge. Health practitioners' dependence on smart digital systems will, if for no other reason, be necessitated by information overload. Staying up to
date with the plethora of health knowledge generated daily would require doctors to spend the lion's share of their time reading medical journals and reports. Networked machines can effortlessly digest all this knowledge daily and provide up-to-date diagnostic advice. As IoT decision support services improve, doctors may increasingly be relegated to the role of collecting observational data for computer programs that dispense diagnoses and prescriptions (Carr 2014, 115). And with sophisticated biometric devices increasingly widespread and capable of the 24/7 monitoring of bodies, the doctor's role as a "human sensor" may also become superfluous. One might hope that patients would retain the right to choose between human-based and digital health services. But the legal and insurance systems may not allow it. To make a diagnosis or recommend a medical procedure without IoT input—perhaps even IoT permission—would leave doctors open to malpractice suits (Susskind 2018, 109). Just as a cruise ship captain would be legally liable if satellite images, radar, and weather forecasts were not consulted before sailing his or her vessel into a brewing hurricane, so doctors will have to heed IoT dictates before engaging in diagnoses and treatment. To be sure, digital decision support services in health care will initially fill the subservient role of the "consulting physician" who works at the behest of the flesh-and-blood "treating physician." But digital recommendations are likely to assume an increasingly imperative tone as the IoT grows in scope and functionality. Health insurance agencies may make the use of and compliance with a medical IoT the condition of coverage, or at least the only means of securing low premiums. Doctors participating in a medical IoT might become little more than echoborgs—physical mouthpieces that provide a reassuring human voice to patients whose health is administered by smart machines. In the same vein, military personnel currently serve the role of moral agents and "fail-safe" decision-makers in centaur relationships. But increasingly autonomous systems appear imminent (Scharre 2016). As monitoring capabilities multiply with satellite imaging and on-the-ground sensors, the amount of data generated by a militarized IoT will become overwhelming to human decision-makers. The task of quickly analyzing massive amounts of streaming data from rapidly changing fields of operation will steadily force control of weaponry, tactical operations, and even strategic command to the non-human elements of an IoT system. Given the challenges of decision-making in quickly changing, highly complex
and increasingly data-rich battlefields, soldiers and officers may soon be the weakest links in the military chain of command and arsenal. Skill sets formerly developed through field operations will decline. Opportunities for upskilling are unclear. Cognitive deskilling will not be limited to specific professions and sectors of the workforce. It will pervade digital culture, much abetted by the proliferation of Intelligent Virtual Assistants (IVAs, also known as Virtual Personal Assistants or VPAs). These digital assistants, such as Alexa, AliGenie, Cortana, Google Assistant, and Siri, are known as “software agents.” Their human users, in contrast, might best be viewed as subjects or clients, underlining a shift in performativity from human to machine. IVAs are promoted as digital extensions and amplifications of human abilities. Yet these technological augmentations often amount to intelligence amputations (Maes 1998). They reduce or eliminate opportunities for the exercise and cultivation of cognitive skills. Individuals contributing to IoT “distributed brains” may suffer the same cognitive decline. Driving a GPS-outfitted, IoT-networked vehicle that continuously transmits data to the cloud greatly contributes to collective knowledge of the best ways for drivers (or driverless vehicles) to avoid congestion and reach their destinations. But a human driver’s contribution to this emergent intelligence in no way enhances her own navigational abilities. Indeed, wayfinding skills are withering away in those who consistently avail themselves of navigationally networked vehicles and smartphones. And while a limited repertoire of digital skills will proliferate in lay populations ensconced in the IoT, these competencies may be more than offset by the erosion of a host of other aptitudes (Thiele 2018a). At present, IoTIA systems mostly filter choices, with AI algorithms churning through massive amounts of data to shortlist options for human selection. But the next step of decision displacement is around the corner. With the release of the ultra-fast streaming search function Google Instant in 2010, Sergey Brin announced that Google would become “the third half of your brain.” While Brin’s phrase is cryptic, a reasonable interpretation is that Google’s goal was to “know what you want in a search, perhaps even before you know” (Yarrow 2010). Digital forecasting is steadily displacing human decision-making. Notable developments are evident in the arenas of entertainment selection (Netflix), newsfeeds (Facebook), and retail commerce (Amazon). Indeed, Amazon has a patent for “anticipatory shipping,” where merchandise arrives at
customers’ doorstep without their ordering it, the product of algorithms that predict likely purchases (Agrawal et al. 2018, 17). Two and a half centuries ago, Adam Smith argued that the “hidden hand” of the emerging free market ensured the comprehensive coordination of supply and demand without administrative oversight or governance. Emerging technology in the contemporary marketplace is developing a hidden mind. IoT-enabled corporations can now microtarget consumers and clients, catering to—and shaping—unexpressed individual needs and wants. People will not have to decide what they want or when they want it. They will be supplied before they demand. As options in the marketplace are increasingly shaped and selected by networked machines, the atrophy of human decision-making aptitudes is to be expected.
Political Deskilling and the Fate of Democracy
Choices both reveal and shape character. Ethically, we become what we do and decide. Declines in cognitive activities and choice selection within IoT systems may well be accompanied by the loss of moral agency. Howell (2014) speculatively envisions the development of "Google Morals." This highly efficient app for ethics would liberate its users from the burden of moral judgment. Provided a brief description of the situation at hand, an ethics app would supply the most appropriate course of action. Users could forego wrestling with ethical conundrums to engage in more pleasurable activities, such as shopping (based on recommendations from Amazon), watching a film (based on recommendations from Netflix), catching up on recent events (based on a tailored newsfeed delivered by Facebook), or dating (based on recommendations from Match.com). An ethics app may seem far-fetched. But related work in contiguous fields, such as politics, suggests a trend. Malone (2018, 102–104) sees a "huge opportunity" in developing digital democracies wherein citizens delegate the task of voting to AI programs: "Some of the voters would be people. Some would be machines. Each would have their own different kinds of knowledge and expertise…. In addition, a separate set of computer agents would be continuously learning how best to combine all these votes into results that are more accurate than either people or computers could produce alone." Likewise, Hughes (2004) celebrates the cyborg citizen and believes human-machine partnerships will advance democratic life.
Given low voter turnout, party polarization, government gridlock, and deepening political cynicism in many nations, functional centaur democracies with effective "voting apps" and AI executive functions might become widely embraced. As many as two out of five citizens in Europe already prefer AI programs to politicians as key national decision-makers (IE 2019). While an AI head of state seems improbable for the foreseeable future, governmental officials aided, abetted, and steered by smart(er) machines are an imminent prospect, as is machine-to-machine (M2M) political decision-making (Howard 2015, 257). Decision displacement in political life appears destined for growth. Citizens and statespeople, as a result, may become deskilled in the civic arts. Decision displacement demands the datafication of people, places, and processes—a quantified world populated by quantified individuals (Cheney-Lippold 2017). Datafication is the foundation of predictive analytics, which informs everything within IoT systems from purchase recommendations to automatic newsfeeds to political mobilization (Thiele 2018b). Corporations and governments that harness the power of predictive analytics effectively know consumers, clients, and citizens better than they know themselves. As datafication deepens, opportunities for exploitation, manipulation, and domination grow. China's "social credit" initiative is a case in point. US Vice President Mike Pence (2018) called it "an Orwellian system premised on controlling virtually every facet of human life." There is some truth to the claim. The social credit system is based on the massive collection and analysis of personal data by an AI-enabled IoT, which the regime exploits for its own purposes. For example, China has developed and deployed the world's largest facial recognition database to predict and preempt social unrest and political dissidence (Ding 2018, 33–34). Chinese corporations are eager partners. Jack Ma, the founder and former executive chairman of Alibaba, and reputedly the richest man in China, stated in 2017 that so-called smart cities powered by Alibaba's hardware and AI algorithms will enable the prediction and preemption of security threats. "Bad guys won't even be able to walk into the square," Ma told a Communist Party commission overseeing law enforcement (Ryan 2018; see also Loeffler 2018). Whether "bad guys" refers to criminals or dissidents is an open question. Various forms of "predictive policing" are widely practiced in China, and Alibaba has been joined by Ping An, Tencent, and Huawei to advance sophisticated and ubiquitous urban surveillance.
The AI-enabled IoT is playing a crucial role in the development of such "robust authoritarianism." Indeed, some argue that digital information technology "favors" tyranny, as autocracies are free to harvest and utilize vast streams of personalized data absent the constraints of civil liberties (Harari 2018). The nineteenth-century anarchist Pierre-Joseph Proudhon (1923, 293–294) observed that to be governed is to have every transaction noted, registered, and measured, and potentially prevented, forbidden, and punished. Under the "pretext of public utility," Proudhon charged, the governed are "watched, inspected, spied upon, directed, law-driven, numbered, regulated, enrolled, indoctrinated, preached at, controlled, checked, estimated, valued, censured, commanded, by creatures who have neither the right nor the wisdom nor the virtue to do so." While Proudhon vastly exaggerated the governance capacities of his day, he provided an accurate prediction of twenty-first-century practices. Paranoia became prophecy. What Proudhon could not have known was that most of the "creatures" measuring, inspecting, assessing, authorizing, regulating, and censuring citizens in the twenty-first century would not be human. And with citizens' civic skills and practices steadily eroded in an IoT world, it is unclear who will ensure that the digital creatures engaged in ruling over us have the right, the wisdom, and the virtue to do so. The IoT has an Orwellian potential, and the 75th or centennial anniversary of the 1949 publication of 1984 could see the realization of its darkest visions. Orwell's dystopic "picture of the future" displayed "a boot stamping on a human face—forever." To be sure, datafied citizens are highly vulnerable to oppression. But violent force may not be the greatest danger. It is a blunt and inefficient instrument. Shaping and satisfying citizens' burgeoning needs and wants within an IoT may provide autocratic regimes a surer hold on power. Digital distraction and diversion, not surveillance and subjugation, may be the larger threat. Aldous Huxley was prophetic in this regard. His Brave New World limns a hedonistic future. The genetically modified residents of Huxley's World State have every need met and every want satisfied. In this all-too-benign dystopia, desires are generated and fulfilled by powerful technologies, including a soothing, happiness-inducing drug called soma, and the feelies—a haptic, virtual reality experience that can provide the titillations of world travel and realistically mimic the sensuality of lovemaking on a bearskin rug.
Current efforts at engineering the "sensor-filled metropolis of tomorrow" across the capitalist, democratic world (Fussell 2018) suggest that Huxley, not Orwell, better forecast the future. As Postman (1985, vii–viii) observed, "Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture…. As Huxley remarked [those] who are ever on the alert to oppose tyranny, 'failed to take into account man's almost infinite appetite for distractions.'" Future citizens, with their aptitudes atrophied and cognitive skills diminished, might eagerly swallow their daily dose of digital soma and happily embrace all the comforts and commodities provided. Even China appears to be more Huxleyan than Orwellian in this respect. To be sure, digital Big Brother watches, listens, locates, and analyzes prolifically. But China's approach to political stability and social control is subtler than this suggests. The state and its corporate partners flood cyberspace with benign (consumer) options while increasing the time and effort required for the public to access or promote politically disruptive information. Broad public compliance is gained more by "distraction and diversion" than by heavy-handed repression (Roberts 2018). As Morozov (2011, ix) observes, "Today's authoritarianism is of the hedonism- and consumerism-friendly variety." Just as ancient Roman emperors supplied "bread and circuses" to keep their citizens docile, IoT-enabled "smart states" will have the capacity to foster heightened consumption and distraction while hamstringing dissidence. Historically, autocratic regimes' efforts to separate the wheat (conformists) from the chaff (dissidents) were difficult and error-prone. Today, political stability can be more efficiently secured by diverting the cognitively deskilled masses into consumptive distractions while employing IoT-enabled datafication to micro-target, and suppress, the dissident few. Corporations within the "free world" are blazing this trail. Carr observes that "it's in Google's economic interest to make sure we click as often as possible. The last thing the company wants is to encourage leisurely reading or slow, concentrated thought. Google is, quite literally, in the business of distraction" (Carr 2011, 157). Social media and hardware (smartphone and tablet) corporations are in the same line of work. Digital platforms, apps, and devices are designed to stimulate constant monitoring and engagement. As a result, heavy users display symptoms
that mimic those of Attention Deficit Hyperactivity Disorder (ADHD), including "distraction, fidgeting, having trouble sitting still, difficulty doing quiet tasks and activities, restlessness, and difficulty focusing and getting bored easily when trying to focus" (Kushlev et al. 2016). All this hyperactive distractedness promotes fast-paced, impulsive consumption. It does not stimulate the cognitive development and discipline required for the cultivation of civic skills.
Foxes, Hedgehogs, and Centaurs
Will humans sustain the cognitive capacities needed to navigate well our "new era of technomoral responsibility" (Vallor 2015, 122)? Human downgrading across a growing array of fields and professions appears imminent as networked smart machines increasingly outpace, and subsequently displace, human agents and practices. Those fields and professions that demand high levels of creativity and practical judgment are more likely to be characterized by truly collaborative centaur relationships. AI has advanced rapidly in recent years, gaining near-human or superhuman capacities in a widening array of arenas including predictive modeling, strategic decision-making, and complex pattern recognition. As these capacities are conjoined and synthesized, AI may soon exhibit, or surpass, human standards of general intelligence. But the capacity for creativity, not general intelligence, may be the best indicator of whether AI is at par with our species (Bringsjord et al. 2001). Brynjolfsson and McAfee (2014, 191) maintain that "we've never seen a truly creative machine, or an entrepreneurial one, or an innovative one." With these hurdles in mind, IBM (2019)—which stands at the forefront of AI research—deems creativity the "ultimate moonshot for artificial intelligence." While human creativity remains rather mysterious, studies indicate that creative individuals exhibit high levels of persistence and discipline, which allow them to become masters in their respective fields through extensive practice (roughly 10,000 hours). In turn, creative individuals regularly expose themselves to other people, ideas, and sources of learning and inspiration. This exposure and interactivity allow otherwise distinct and unrelated concepts and orientations to collide, introducing novel combinations and metaphoric linkages. Intuitive insights typically arise from the (often sudden) apprehension and integration of such hidden
connections. Like the bubble generated by fermenting organic material at the bottom of a pond, human creativity has a mysterious, murky period of gestation. Novel ideas percolate below the surface in persistent, disciplined, interactive individuals before catalytic connections precipitate their abrupt birth (Koestler 1964; Boden 2004; Johnson 2010). An AI program can exceed the persistence and discipline that beget human expertise and the creativity it allows. And AI programs can easily be exposed to extensive, diverse sources of information, rapidly digesting massive data streams. The crucial question is whether AI can demonstrate intuitive apprehension of metaphoric linkages and original combinations, and then shape these intuitions into new ideas and artifacts. Theorists of AI observe that computer programs can exploit various means of bringing together distinct concepts and processes to create novel linkages, and subsequently explore the viability of these linkages iteratively and comparatively or in a winnowing, evolutionary process. And by introducing randomness, AI programs can prevent algorithmic biases from unduly limiting the field of potential connections. In this manner, AI can explore the overlapping boundaries and hidden linkages between expanding frontiers of knowledge (Kurzweil 2012, 116). Employing such processes, AI programs have already demonstrated creativity—or something akin to it—in fashion design, filmmaking, literature, poetry, music composition, food product development, painting and related fine arts, and strategic game-playing. There is no reason to suspect that the boundaries of AI creativity have been reached (Steinert 2015; Metz 2020). To date, however, AI remains inferior to human creativity in virtually every regard.
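The winnowing, evolutionary process described above can be made concrete with a toy sketch. The code below is purely illustrative: the concept list, the "viability" scoring function, and the mutation rate are hypothetical stand-ins, not a description of any actual creative AI system, which would operate over learned representations rather than word pairs.

```python
# A minimal sketch of evolutionary concept combination (illustrative only).
# Random pairings stand in for "novel linkages"; a scoring function and a
# selection step stand in for the iterative winnowing described in the text.

import random

CONCEPTS = ["bicycle", "mind", "horse", "rider", "network", "swarm", "market"]

def random_linkage():
    """Combine two distinct concepts into a candidate 'metaphoric linkage'."""
    a, b = random.sample(CONCEPTS, 2)
    return (a, b)

def score(linkage):
    """Hypothetical viability score; a real system would evaluate candidates
    against learned aesthetic or task-specific criteria."""
    a, b = linkage
    return abs(len(a) - len(b)) + random.random()  # arbitrary toy metric

def evolve(generations=50, population=20, keep=5, mutation_rate=0.3):
    pool = [random_linkage() for _ in range(population)]
    for _ in range(generations):
        # Winnowing: keep only the most promising linkages.
        pool.sort(key=score, reverse=True)
        survivors = pool[:keep]
        # Refill the pool; randomness keeps the search from narrowing too far.
        children = []
        while len(survivors) + len(children) < population:
            parent = random.choice(survivors)
            if random.random() < mutation_rate:
                children.append(random_linkage())  # fresh random combination
            else:
                a, _ = parent
                b = random.choice([c for c in CONCEPTS if c != a])
                children.append((a, b))  # vary one half of a good linkage
        pool = survivors + children
    return pool[:keep]

print(evolve())
```

Even in this trivial form, the design choice is visible: exploration comes from random recombination, while the appearance of "insight" is manufactured by selection pressure. There is no intuitive apprehension anywhere in the loop.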
Practical judgment is similar to creativity in this respect: AI is making steady progress, yet human capacities remain far superior. That is because practical judgment cannot be acquired solely by crunching data, following rules, or calculative analysis. It entails psychological and social insight acquired through well-integrated experience (Thiele 2006). Practical judgment develops by direct observation of human behavior, responsive communication and interaction informed by such observation, and the revision of expectations and practical interventions based on iterations of this process. As a form of learning by doing, practical judgment is the product of lived experience critically reflected upon and put into the service of discretionary assessment and creative problem-solving. The ancient Greeks called it phronesis, which the Romans translated as prudentia. Prudence or practical judgment was considered one of the four cardinal virtues, as it grounds moral and political decision-making and facilitates the exercise and development of all other virtues. Practical judgment entails understanding why people behave the way they do. Importantly, human actions are directed as much by feelings as facts and as much by the stories people embrace as the circumstances they inhabit. Though an exercise in reason, practical judgment also exhibits understanding of how and why people emotionally experience and narratively navigate their worlds. It entails access to the inner cognitive and affective states of others, their motivations, and the stories that channel their behavior (Thiele and Young 2016). Being creatures with emotions and intentions who experience a sense of place and purpose, most of us are relatively well equipped to understand other people's inner states and narrative frameworks. Computers lack this capacity. Still, AI programs can often correctly interpret human intentions and moods. So-called affective computing already outperforms humans in the identification of emotional states from facial coloration and expressions. Marketing for such programs is robust, and progress is steady (http://online-emotion.com; Wiggers 2019). The capacity of Amazon's Alexa to detect and respond to human emotion based on voice tones, for example, is being steadily improved. And China is currently deploying related technology in high schools, monitoring facial expressions for emotive markers that inform teachers which students are not fully engaged in their studies (Vanderklippe 2018). The power of affective computing undoubtedly will grow. Emotional intelligence is a necessary but not sufficient condition for practical judgment. To judge well, one must exercise critical thinking, remain open to and seek disconfirming facts, be wary of inherent biases, and regularly revise and reform one's assessments in light of experience and the best available evidence. The best judges, Tetlock (2005, 2) demonstrates, embody the intellectual traits of what Isaiah Berlin identified as a fox: a creature who knows "many little things" drawn from a broad field of knowledge. Foxes gather evidence from diverse sources, are methodical in their analyses, assess probabilities rigorously, collaborate well, and self-reflectively revise forecasts based on systematically reviewed feedback. Foxes also accept ambiguity and contradiction as inevitable features of life. The worst judges, Tetlock argues, are more like hedgehogs that confidently know "one big thing" and, seeing the world through this monochromatic lens, become "prisoners" of their own preconceptions.
Digital information systems can ensconce us in "filter bubbles" that limit exposure to diverse opinions, strengthen biases, and rigidify ideological predispositions. The filter bubble effect is neither uniform nor invariable (Scharkow et al. 2020), but it does appear pervasive and robust (Morris 2007; Stroud 2008; Nie et al. 2010; Pariser 2012; Carr 2014; Urbinati 2014; Bakshy et al. 2015; Nichols 2017; Madrigal 2018). If the IoT contributes to the spread and strengthening of filter bubbles, we will increasingly become prisoners of our own preconceptions. Our vulpine capacities will suffer. Most people, including experts, already employ faulty intuitions, engage in insufficient analysis, employ poor reasoning, and suffer from confirmation biases and other forms of self-deception (Tetlock 2005; Tetlock and Gardner 2015). If cognitive deskilling and ideological self-enclosure deepen, these common shortcomings will be exacerbated. We face the prospect of the AI-enabled IoT becoming increasingly fox-like, while humans come to think and act more like hedgehogs. The deterioration of practical judgment will likely be accelerated by employment disruptions in an increasingly automated world. As practical judgment is learning by doing, and as much human doing occurs in work environments, unemployment and underemployment will impede its development. The McKinsey Global Institute assesses that about half the activities that people do across all sectors of most economies are susceptible to replacement by machines, as they involve either data collection and processing or physical work in highly structured and predictable environments (Manyika and Bughin 2018, 4–5). The jobs and professions least vulnerable to automation involve personal interactions and the exercise of certain forms of expertise. The expertise in question is of a practical nature, grounded in direct experience and related to human interactions. As Lee (2018, 21, 155) observes, jobs that are "strategy based," demanding creativity and practical judgment because inputs and outcomes are neither uniform nor easily quantified, are likely to remain in human hands. The upshot is that humans retain a comparative advantage in the arenas of creativity and practical judgment. The most secure careers of the future, accordingly, will be those which develop and deploy these capacities. As virtually all of these professions will have IoT components, it is not a matter of racing against the machines. Rather, the challenge we face is developing centaur relationships that bring out the best in us.
Conclusion
Many see AI as an existential threat to humanity. It is a reasonable worry. An AI apocalypse is not out of the question. But the tragedy we are most likely to confront in the coming decades is that machines supersede human capacities owing as much to the withering of the latter as to the burgeoning of the former. We should be worried as much about human downgrading as about computer upgrading. Ethically, politically, and cognitively, human beings become what they do. For the vast majority of human participants in future IoT systems, that may be very little. The potential of an IoTIA world is the widespread amplification of human capacities. Its common effect is cognitive deskilling. The personal computer started off as a bicycle for the mind. But digital devices and networks are becoming ever more powerful vehicles, and increasingly self-driving ones. Unless we change course, many of us will be relegated to the role of inept passengers. The rise of the centaurs holds great promise. As digital ecosystems expand in scope and functionality, however, keeping people from becoming dangerously deskilled will be a crucial challenge. Creativity and practical judgment are arenas where humans far exceed the capacities of today's smartest machines. For humanity to thrive in an IoTIA world, societies and economies will have to be organized to maintain humanity's comparative advantage. At some point in the future, our species may have the opportunity, and choose, to secure this advantage by constraining the development of artificial intelligence. The more immediate challenge is to sustain the human potential. To this end, activities and careers that cultivate creativity and practical judgment need to be academically developed, economically rewarded, professionally deployed, and culturally celebrated. Realizing these goals is a worthy pursuit for the best minds and the best human-machine collaborations.
References
Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. 2018. Prediction Machines: The Simple Economics of Artificial Intelligence. Boston: Harvard Business Review Press.
Bakshy, Eytan, Solomon Messing, and Lada Adamic. 2015. "Exposure to Ideologically Diverse News and Opinion on Facebook." Science 348 (6239): 1130–1132.
Boden, Margaret. 2004. The Creative Mind: Myths and Mechanisms. 2nd ed. New York: Routledge.
Bringsjord, Selmer, Paul Bello, and David Ferrucci. 2001. "Creativity, the Turing Test, and the (Better) Lovelace Test." Minds and Machines 11 (1): 3–27.
Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age. New York: W. W. Norton.
Carr, Nicholas. 2011. The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton.
Carr, Nicholas. 2014. The Glass Cage: How Computers Are Changing Us. New York: W. W. Norton.
Case, Nicky. 2018. "How to Become a Centaur." Journal of Design and Science 3. Accessed at: https://jods.mitpress.mit.edu/pub/issue3-case.
Cheney-Lippold, John. 2017. We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press.
Cisco. 2016. "At a Glance: Internet of Things." Accessed at: www.cisco.com/c/dam/en/us/products/collateral/se/internet-of-things/at-a-glance-c45-731471.pdf.
Corrigan, Jack. 2018. "Inside the Pentagon's Plan to Make Computers 'Collaborative Partners'." Nextgov. Accessed at: https://www.nextgov.com/emerging-tech/2018/09/inside-pentagons-plan-make-computers-collaborative-partners/151014/.
Corrigan, Jack. 2019. "Inside DARPA's Ambitious 'AI Next' Program." Defense One. Accessed at: https://www.defenseone.com/technology/2019/03/inside-pentagons-big-plans-develop-trustworthy-artificial-intelligence/155427/.
Daugherty, Paul R., and H. James Wilson. 2018. Human + Machine: Reimagining Work in the Age of AI. Boston: Harvard Business Review Press.
de Jouvenel, Bertrand. 1993. On Power: The Natural History of Its Growth. Indianapolis: Liberty Fund.
Ding, Jeffrey. 2018. Deciphering China's AI Dream: The Context, Components, Capabilities, and Consequences of China's Strategy to Lead the World in AI. Oxford: Future of Humanity Institute, Oxford University. Accessed at: https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/.
Fussell, Sidney. 2018. "The City of the Future Is a Data-Collection Machine." The Atlantic. Accessed at: https://www.theatlantic.com/technology/archive/2018/11/google-sidewalk-labs/575551/.
Gaudin, Sharon. 2019. "Armed with Artificial Intelligence, Scientists Take on Climate Change." Enterprise.nxt, April 17. Accessed at: https://www.hpe.com/us/en/insights/articles/armed-with-artificial-intelligence-scientists-take-on-climate-change-1904.html.
Harari, Yuval Noah. 2018. "Why Technology Favors Tyranny." The Atlantic, October. Accessed at: https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/.
Hawking, Stephen, Stuart Russell, Max Tegmark, and Frank Wilczek. 2014. "Stephen Hawking: 'Transcendence Looks at the Implications of Artificial Intelligence—But Are We Taking AI Seriously Enough?'" The Independent, May 1. Accessed at: http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html.
Howard, Philip N. 2015. Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. New Haven: Yale University Press.
Howell, Robert J. 2014. "Google Morals, Virtue, and the Asymmetry of Deference." Noûs 48 (3): 389–415.
Hughes, James. 2004. Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. Cambridge, MA: Westview Press.
IBM. 2019. "The Quest for AI Creativity." Accessed at: https://www.ibm.com/watson/advantagereports/future-of-artificial-intelligence/ai-creativity.html.
IE: Center for the Governance of Change. 2019. "European Tech Insights 2019." Accessed at: https://www.ie.edu/cgc/research/tech-opinion-poll-2019/.
Jobs, Steven. 2010. Memory & Imagination: New Pathways to the Library of Congress. Film clip available at Steve Jobs, "Computers Are Like a Bicycle for Our Minds." Michael Lawrence Films. Accessed at: https://youtu.be/ob_GX50Za6c.
Johnson, Steven. 2010. Where Good Ideas Come From: The Natural History of Innovation. New York: Riverhead Books.
Kelly, Kevin. 2012. "Better than Human: Why Robots Will—And Must—Take Our Jobs." Wired, December 24. Accessed at: https://www.wired.com/2012/12/ff-robots-will-take-our-jobs/.
Koestler, Arthur. 1964. The Act of Creation. London: Hutchinson.
Krakauer, David. 2015. "Ingenious: David Krakauer." Nautilus 23. Accessed at: http://nautil.us/issue/23/dominoes/ingenious-david-krakauer.
Kurzweil, Ray. 2012. How to Create a Mind. New York: Penguin.
Kushlev, Kostadin, Jason Proulx, and Elizabeth W. Dunn. 2016. "'Silence Your Phones': Smartphone Notifications Increase Inattention and Hyperactivity Symptoms." Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 1011–1020. Accessed at: http://dl.acm.org/citation.cfm?doid=2858036.2858359.
Lee, Kai-Fu. 2018. AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt.
Licklider, J. C. R. 1960. "Man-Computer Symbiosis." IRE Transactions on Human Factors in Electronics HFE-1 (March): 4–11. Accessed at: https://groups.csail.mit.edu/medg/people/psz/Licklider.html.
Loeffler, John. 2018. "Why Chinese Artificial Intelligence Will Run the World: How the Chinese Tech Giants Baidu, Alibaba, and Tencent Will Develop the Systems That Will Run the World." Interesting Engineering, November 6. Accessed at: https://amp.interestingengineering.com/why-chinese-artificial-intelligence-will-run-the-world.
Madrigal, Alexis C. 2018. "What Facebook Did to American Democracy." The Atlantic, October 12, 2017. Accessed at: https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/.
Maes, Pattie. 1998. "Intelligence Augmentation." Edge, January 20. Accessed at: https://www.edge.org/conversation/pattie_maes-intelligence-augmentation.
Malone, Thomas. 2018. Superminds: The Surprising Power of People and Computers Thinking Together. New York: Little, Brown and Company.
Manyika, James, and Jacques Bughin. 2018. "The Promise and Challenge of the Age of Artificial Intelligence." Briefing note prepared for the Tallinn Digital Summit, McKinsey Global Institute, New York. Accessed at: https://www.mckinsey.com/featured-insights/artificial-intelligence/the-promise-and-challenge-of-the-age-of-artificial-intelligence.
Markoff, John. 2015. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. New York: HarperCollins.
Metz, Cade. 2020. "Meet GPT-3." New York Times, November 24. Accessed at: https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html.
Michelucci, Pietro, and Janis L. Dickinson. 2016. "The Power of Crowds." Science 351: 32–33.
Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs.
Morris, Jonathan S. 2007. "Slanted Objectivity? Perceived Media Bias, Cable News Exposure, and Political Attitudes." Social Science Quarterly 88: 707–728.
Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. New York: Oxford University Press.
Nie, Norman H., Darwin W. Miller, Saar Golde, Daniel M. Butler, and Kenneth Winneg. 2010. "The World Wide Web and the U.S. Political News Market." American Journal of Political Science 54: 428–439.
Pariser, Eli. 2012. The Filter Bubble. New York: Penguin.
Pence, Mike. 2018. "Remarks by Vice President Pence on the Administration's Policy Toward China." White House Briefings, October 4. Accessed at: https://www.whitehouse.gov/briefings-statements/remarks-vice-president-pence-administrations-policy-toward-china/.
Postman, Neil. 1985. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Penguin Books.
Proudhon, Pierre-Joseph. 1923. General Idea of the Revolution in the Nineteenth Century. London: Freedom Press.
Rheingold, Howard. 2003. Smart Mobs: The Next Social Revolution. New York: Perseus.
Roberts, Margaret. 2018. Censored: Distraction and Diversion Inside China's Great Firewall. Princeton: Princeton University Press.
Ryan, Fergus. 2018. "An Orwellian Future Is Taking Shape in China." The Sydney Morning Herald, January 8. Accessed at: https://www.smh.com.au/opinion/an-orwellian-future-is-taking-shape-in-china-20171220-h07vbw.html.
Saran, Cliff. 2019. "Stanford University Finds That AI Is Outpacing Moore's Law." ComputerWeekly.com, December 12. Accessed at: https://www.computerweekly.com/news/252475371/Stanford-University-finds-that-AI-is-outpacing-Moores-Law.
Scharkow, Michael, Frank Mangold, Sebastian Stier, and Johannes Breuer. 2020. "How Social Network Sites and Other Online Intermediaries Increase Exposure to News." PNAS 117 (6): 2761–2763.
Scharre, Paul. 2016. Autonomous Weapons and Operational Risk. Washington, DC: Center for a New American Security.
Silver, Nate. 2012. The Signal and the Noise. New York: Penguin.
Statista. 2020. "Internet of Things (IoT) Connected Devices Installed Base Worldwide from 2015 to 2025." Accessed at: https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/.
Steinert, Steffen. 2015. "Art: Brought to You by Creative Machines." Philosophy and Technology 30: 267–284.
Stroud, Natalie J. 2008. "Media Use and Political Predispositions: Revisiting the Concept of Selective Exposure." Political Behavior 30: 341–366.
Susskind, Jamie. 2018. Future Politics: Living Together in a World Transformed by Tech. Oxford: Oxford University Press.
Tetlock, Philip E. 2005. Expert Political Judgment. Princeton: Princeton University Press.
Tetlock, Philip, and Dan Gardner. 2015. Superforecasting: The Art and Science of Prediction. New York: Crown.
Thiele, Leslie Paul. 2006. The Heart of Judgment: Practical Wisdom, Neuroscience, and Narrative. Cambridge: Cambridge University Press.
Thiele, Leslie Paul. 2018a. "Against Our Better Judgment: Decision Making in an Age of Smart(er) Machines." In The Political Economy of Robots: Prospects for Prosperity and Security in the Automated 21st Century, ed. Ryan Kiggins, 183–209. Palgrave Macmillan.
Thiele, Leslie Paul. 2018b. "Digital Politics Is the Game: See What Happens When Scholars Play It Well!" Perspectives 16 (4): 1123–1128.
Thiele, Leslie Paul, and Marshall Young. 2016. "Practical Judgment, Narrative Experience, and Wicked Problems." Theoria 63: 35–52.
Thompson, Clive. 2013. Smarter Than You Think: How Technology Is Changing Our Minds for the Better. New York: Penguin.
University of Cambridge. 2019. "Using AI to Avert Environmental Catastrophe." Cambridge, UK. Accessed at: https://www.cam.ac.uk/research/news/using-ai-to-avert-environmental-catastrophe.
Urbinati, Nadia. 2014. Democracy Disfigured: Opinion, Truth, and the People. Cambridge: Harvard University Press.
Vallor, Shannon. 2015. "Moral Deskilling and Upskilling in a New Machine Age." Philosophy and Technology 28: 107–124.
Vanderklippe, Nathan. 2018. "Chinese School Installs Cameras to Monitor Students." Globe and Mail, June 2, A3.
Wiggers, Kyle. 2019. "Amazon's AI Improves Emotion Detection in Voices." VentureBeat. Accessed at: https://venturebeat.com/2019/05/21/amazons-ai-improves-emotion-detection-in-voices/.
Yarrow, Jay. 2010. "Sergey Brin: 'We Want Google to Be the Third Half of Your Brain.'" Business Insider. Accessed at: https://www.businessinsider.fr/us/sergey-brin-we-want-google-to-be-the-third-half-of-your-brain-2010-9/.
CHAPTER 4
AI in Public Education: Humble Beginnings and Revolutionary Potential Kenneth Rogerson and Justin Sherman
Popular culture has had a profound impact on contemporary global discussions of artificial intelligence. How will lethal autonomous weapons change the dynamics and the morals of war? Could predictive algorithms revolutionize modern medicine and usher in a new era of diagnosis and prevention? Will the unchecked deployment of facial recognition in major cities bring the advent of a truly Orwellian dystopia?
The authors would like to thank the participants in the Modern Technology and International Relations Conference at the Institute of Global Studies at Shanghai University in April 2019 who engaged in valuable discussion around an earlier working version of this chapter.
K. Rogerson, Sanford School of Public Policy, Duke University, Durham, NC, USA. e-mail: [email protected]
J. Sherman, Georgetown University, Washington, DC, USA. e-mail: [email protected]
These are pressing questions, and each illustrates the need for cross-disciplinary approaches to understanding the realities, opportunities, risks, and challenges of artificial intelligence. But less frequent in discussions of artificial intelligence—both in how AI systems are presently used, and how they may be deployed in the future—is mention of AI's impact in civil society. This is particularly true with respect to how AI is impacting, and will impact, public education systems around the world. There is a strong focus on education in the service of AI, not the other way around—asking such questions as how institutions of higher learning can provide AI research and development, how best to prepare the workforce for a new era of automation, or how to train the most skilled machine learning researchers, as opposed to how AI will change the design and practice of public education itself. We could attribute this lack of discussion to many factors, including (but not limited to) the slowness of public education systems to adopt new technologies; the relative newness of AI's deployment in public education systems around the world; and uncertainty about exactly how a range of AI applications could even be applied in an educational sphere in service of a public education mission. Given these factors, it is arguably unsurprising that many substantive discussions of AI's role in public education have yet to hit the mainstream. Hence, in this chapter, we analyze the current uses—as well as potential uses—of AI in primary and secondary education around the world, primarily in China, India, and the United States. Our focus rests mainly on these three countries not because they are the only AI players on the global stage (far from it), but because they are poised to be perhaps the most influential. Combinations of their strong economies, military power, political influence, technological resources, and large populations are all reasons why these three countries will be rule-setters and innovators in how countries develop, deploy, and/or regulate artificial intelligence in the twenty-first century. First, we discuss the global landscape of cooperation and competition around artificial intelligence technologies. Then, we discuss plans, investments, and implementations of AI in public education in the United States, in China, and in India. Finally, we conclude with a discussion of the relationships between such considerations as AI use, school demographics, gender, ethnicity, socioeconomic status, and privacy.
The Global AI Landscape
Global research on artificial intelligence is heavily cross-border, with great interdependence and interconnection between the "AI spheres" of different countries. Companies, universities, civil society organizations, individuals, and even government entities in different countries may collaborate on a host of AI research areas, from natural language processing to image recognition. Many AI-related research goals of these various groups—such as improving public health outcomes or boosting transportation safety through autonomous cars—are relatively aligned, at least technologically speaking, and are not zero-sum ("Artificial Intelligence Index" 2019; Ding 2018). Furthermore, most global AI research is also open-source, occurring largely in the public domain on code-sharing sites like GitHub, data-sharing sites like Kaggle, and paper-sharing sites like Arxiv.org. The latest research, datasets, and other software-based elements in the field are frequently available to anyone with an internet connection (though, of course, access to this information and these resources is just one of many elements of AI research, such as having the requisite computing hardware on which to train and test algorithms). There are always exceptions—when nations secretly develop lethal autonomous weapons in military research centers, for example—but these characteristics generally hold (Sherman 2019). Due to their large economies, military power, political influence, technological prowess, and sizable populations, China and the United States are quite influential when it comes to the future of AI. But other countries, from Canada to India, from Japan to Israel, are also investing in AI research, curating and developing domestic AI talent, and looking to use their regulatory levers to influence the future of AI development within and outside of their sovereign borders. This is especially true for India, the most populous democracy on earth, which is arguably at the forefront of explicitly laying out a national vision for how artificial intelligence could be used to revolutionize public education and further a mission of providing quality and accessible public education for all. In the next few sections, we lay out the current state and future potential of AI in public education in the United States, then in China, and then in India. In each of these sections, we examine government strategies and investments, currently deployed technologies and those prospectively on the table for the future, and how these investments and implementations
may reflect unique or shared values around privacy, technological growth, and other issues. Finally, we conclude with a look forward—laying out challenges for AI in public education and detailing important research questions that deserve further exploration.
The United States
The 30,000-Foot View
In October 2016, US President Barack Obama's administration released two major documents on AI strategy. The first, Preparing for the Future of Artificial Intelligence, provided recommendations to federal agencies and other actors to better develop and prepare for the AI-driven future (National Science & Technology Council 2016a). The second, The National Artificial Intelligence Research and Development Strategic Plan, laid out a strategic plan for federal investment in AI development (National Science & Technology Council 2016b). The former briefly mentions AI in education, noting that "AI-enhanced education may help teachers give every child an education that opens doors to a secure and fulfilling life," but focuses widely on issues like monitoring progress in AI, optimally targeting federal research funding, and ensuring fairness and accountability in artificially intelligent systems (National Science & Technology Council 2016a). The latter discusses education as well, noting that AI can "improve educational outcomes" in areas such as automated tutoring and customized, in-person teaching supplementation, though the comment is brief and high-level (National Science & Technology Council 2016b). The Trump administration's June 2019 update to the National Artificial Intelligence Research and Development Strategic Plan briefly mentioned AI improving education, but it mostly focused on AI and education in the context of workforce development (National Science & Technology Council 2019). US government funding for AI research, meanwhile, has been heavily focused within the military. The US military certainly has a history of funding vitally important technological breakthroughs that now reside, even primarily, in the civilian and commercial spheres, from the military's invention of GPS technology (Mazzucato 2013) to DARPA's role in building the foundation for the internet (Cerf). Further, many AI applications have utility in both civilian and military domains—what some historically call dual-use. It is therefore possible
that the military’s investments in AI development have wider applicability, including because many of the military’s AI research focuses are on logistical and command support, not solely autonomous weapons as some might assume (Congressional Research Service 2019). While the United States seems to be focusing much of its public discussions on AI for military and security issues, there are other social issues, like education, that make an appearance at times. Interestingly, the reality is that most uses of artificial intelligence in American public schools that we uncovered in our research focused on security and surveillance applications—that is, programs explicitly built and marketed for the purposes of monitoring student social media posts to predict risk of student suicide or in-school violence, for example. Very few examples focused on applications such as automatically grading student assignments. The private sector has driven this AI development in the United States, but it’s perhaps equally relevant to note the US government’s lack of explicit focus on alternatives. There is much federal and state policy discussion about AI’s impact on the workforce, specifically around job displacement or replacement due to improved automation, and there is also much discussion in the general public, related to this point, about how public education and higher education should adapt to prepare American citizens for an increasingly AI-driven future. But the US government has focused little on how artificial intelligence could improve or even revolutionize the design and practice of public education. Current AI Implementations in Public Education Several public school systems in the United States are using artificial intelligence applications in ways that illuminate opportunities, risks, and possible future directions for AI use in American schools. This is especially because most applications we found of artificial intelligence in US public schools pertain to school safety—everything from running programs to predict student self-harm to object recognition systems that identify firearms. Some uses of artificial intelligence in American public schools center around safety and security—predicting (with an eye toward preventing) student violence. Uses for these reasons come in different shapes, including facial recognition systems and data mining/machine learning analyses of student communications.
The Aegis system, for example, is an object recognition system that uses deep learning to monitor students. One school district implementing the system, in Lockport, New York, has faced some obstacles in getting started. Lockport superintendent Michelle Bradley said that "the test [was in] an 'initial implementation phase' meant to troubleshoot the system, train district officials on its use, and discuss proper procedures with local law enforcement in the event of an alert triggered by the facial recognition tech." These "alerts," said an FAQ distributed to parents with children at the school, are defined by the "ability [to screen] every door and throughout buildings to identify people or guns. Early detection of a threat to our schools allows for a quicker and more effective response" (Alba 2019). When the New York State Department of Education heard about it, it asked the district to delay testing Aegis for privacy reasons (Alba 2019). Facial recognition is only one way that AI is being used in schools. A company called Gaggle offers schools the ability to monitor student online conversations through email and social media. "Using a combination of in-house artificial intelligence and human content moderators," BuzzFeed News reported, "Gaggle polices schools for suspicious or harmful content and images, which it says can help prevent gun violence and student suicides. It plugs into two of the biggest software suites around, Google's G Suite and Microsoft 365, and tracks everything, including notifications that may float in from Twitter, Facebook, and Instagram accounts linked to a school email address" (Haskins 2019). While school policy requires that the monitored communication must be connected to a school-issued email, there are quite a few intersecting information flows between that email and other publicly available social media. "[S]tudent work and behavior are scrutinized for indicators of violence or a mental health crisis, and profanity and sexuality are [also] policed" (Haskins 2019). Implementation delays similar to those in Lockport occurred in Florida in the spring of 2019. In this case, the purpose was the same: to improve security after the Parkland high school shooting (Herrold 2019). But while the reason for the delay was stated as a concern for privacy, this situation was also about the legality of the state accessing and merging a number of different databases, rather than just gathering information through the school as with Gaggle. The database, managed by the state, would include "people's social media posts with millions of records on individuals who have been bullied, placed in foster care,
committed a crime, or even been mentioned in unverified tips made to law enforcement." The article continued that the publication's "investigation found that across the country, social media and related monitoring services used by schools are generating vast torrents of information— some of which is alarming, but much of which is ambiguous, irrelevant, or ridiculous" (Herrold 2019).

In addition to monitoring security issues, schools are looking to AI as a supplemental teacher. While some bemoan the potential replacement of teachers with machines, most understand that the reality will be a combination of AI and human interaction for the foreseeable future (Houser 2017). In 2016, Stanford University's One Hundred Year Study on Artificial Intelligence issued a report that included an observation about the relationship between AI and education. The authors noted:

Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale. Interactive machine tutors are now being matched to students for teaching science, math, language, and other disciplines. Natural Language Processing, machine learning, and crowdsourcing have boosted online learning and enabled teachers in higher education to multiply the size of their classrooms while addressing individual students' learning needs and styles. Over the next fifteen years in a typical North American city, the use of these technologies in the classroom and in the home is likely to expand significantly, provided they can be meaningfully integrated with face-to-face learning. ("The One Hundred Year Study" 2016)
This nuanced acknowledgment is more meaningful than the all-or-nothing approach some take, since human involvement in AI-driven instruction remains the norm for now. From the Stanford study, deeper curricular uses of AI are suggested by the phrase "personalization at scale." John Allen describes this as a mixture of an in-person and a virtual classroom. He writes that "students can become more deeply involved in the pathways of their own learning," but this will require better assessment and remediation, both of which can also potentially be realized through AI (Allen 2019).

Initial observations about the use of AI in the US educational system suggest that it is difficult to dream big and implement technological change quickly. Much of what happens is reaction rather than action. Schools are hesitant to adopt technologies that (a) are untested and (b) are not required by the state. And AI's potential applications
are much broader than, say, mathematical curricular standards or professional teacher development. While some school districts, possibly those with more resources, are able to experiment with these new technologies before being required to by rule or practical necessity, most US schools do not take a holistic approach to integrating AI in the classroom or school.
China

The 30,000-Foot View

Beijing's investments in and focus on artificial intelligence predate 2016. But when Google's AlphaGo system defeated a top Go player in March 2016 (Go is a popular and centuries-old Chinese strategy board game), it was a major event in mainland China and spurred a landmark shift in government focus on AI planning and investment. Prior to 2016, "AI was presented merely as one technology among many others, which could be useful in achieving a range of policy goals." But the AlphaGo victory was a "Sputnik moment" for the Chinese government, according to two government officials (Roberts et al. 2019).

In July 2017, the Chinese government released its Next Generation Artificial Intelligence Development Plan, a comprehensive strategy aimed at "[seizing] the major strategic opportunity for the development of AI, to build China's first-mover advantage in the development of AI, [and] to accelerate the construction of an innovative nation and global power in science and technology" (Webster et al. 2017). The document is comprehensive, discussing AI as the "new engine of economic development" that will be "the core driving force for a new round of industrial transformation" (Webster et al. 2017). Since its debut, the government has released other documents on AI strategy and development, such as the Three-Year Action Plan for Promoting Development of a New Generation Artificial Intelligence Industry, which put forth "major tasks" for the country such as "deepening the development of intelligent manufacturing" and "[building] a public support system for industry training resources" (Triolo et al. 2018).

Overall, however, "Beijing's AI plan serves less as a 'plan' and more as a 'wish list' of technologies the central government would like to see built. It then incentivizes ambitious local officials to use all the tools at their disposal—subsidies, public contracts, and AI-friendly policies—to guide
and aid the private sector in developing those technologies," with many questions still to be answered about this approach's effectiveness (Sheehan 2018). As with many ambitious Chinese government plans, it remains to be seen how much of it will be implemented, and how well. It is also worth noting that despite perhaps troubling early uses of AI in China, such as AI used as part of unchecked government surveillance, "it is too early to know what approach China will take" across the board with respect to AI governance (Sacks 2019).

Current AI Implementations in Public Education

As with the US government, the Chinese government's documents on artificial intelligence development primarily refer to public education as a mechanism by which to prepare students—and future workers—for an AI-driven future. There are fewer references to public education as an institution in which AI applications might be used to improve and expand upon core educational objectives. In China, educational uses of AI can be categorized broadly in two ways: first, high-level references to the importance of understanding AI and its impact on politics, society, and the economy; and second, very granular implementations of AI for improving educational efficiency.

The Chinese government regularly references artificial intelligence as vital to Chinese societal and economic development. In the spring of 2019, Chinese leaders in science and engineering announced an AI code of ethics, signaling "a willingness from Beijing to rethink how it uses the technology" (Knight 2019). This code spells out guidelines encouraging the idea that "human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected" when implementing AI programs (though, of course, some would contend that Beijing's view of privacy doesn't exactly track with views in the United States or Europe). These principles are sometimes connected to discussions of the goals of AI in education, implying that the Chinese government would like to see technology more broadly—and AI more specifically—become part of the country's curricular goals.

In the second category, there are a variety of ways that AI is being implemented with the goal of improved efficiency. In the classroom, teachers
are able to outsource some types of grading. A free app created by the educational company Yuanfudao allows teachers to take a photo of their students' homework and upload it to the site, at which point an algorithm responds with whether each answer is correct. The company "claims to have checked an average of 70 million arithmetic problems per day." It then uses the resulting database of math problems and solutions to identify the most common mistakes (Jeng 2019); a minimal sketch of this kind of automated checking appears at the end of this section.

Another pedagogical example is an expected one: helping students work at their own pace in a variety of subject areas. AI-based individualized learning is being adopted in many places, and China is no exception. But China may be one of the first places to build entire schools in which every student is engaged in this type of learning. A Chinese company called Squirrel AI has, in addition to providing AI-based platforms for public schools, created its own network of schools in which students sit in front of a computer all day and interact with human instructors as needed. The company's goal is to improve student performance, and "[i]t also designed its system to capture ever more data from the beginning, which has made possible all kinds of personalization and prediction experiments" (Hao 2019). This raises the question of the proper uses of data collected by AI-based systems, the answers to which vary across public- and private-sector actors.

But there are also examples of daily, functional activities intended to help students. Though at the college level, a university in Hangzhou has started using an AI app to track attendance. Students must enter a verification code when they arrive in class. If they don't, the program nudges them by text to ask why they aren't in class. The algorithm then analyzes the reasons so that the university can address the underlying issues (Dai 2019). While there are a number of caveats here—students may not tell the truth about why they are absent or may discover a way to have someone reply for them—the university is experimenting with AI to address what it sees as a chronic absenteeism problem.

In another, non-pedagogical example, a school district (also, interestingly, in Hangzhou) has installed AI-powered cameras to monitor whether high school students are paying attention in class. "The system works by identifying different facial expressions from the students, and that information is then fed into a computer which assesses if they are enjoying lessons or if their minds are wandering" (Connor 2018). While people have raised privacy concerns, administrators say it is improving educational standards. It doesn't film or store student activities; it simply
watches student facial movements and interprets them as an emotion. If that emotion equates to not paying attention, the teacher is notified.

In sum, for Chinese public education, AI seems to be an extension of the country's educational values: becoming a world leader in every technological arena and controlling as much of the pedagogical process as possible. There does seem to be some flexibility in the creation of AI tools for the classroom, with the government encouraging technological entrepreneurs to develop educational apps.
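As promised above, here is a minimal sketch of the core logic behind Yuanfudao-style automated homework checking. It assumes, hypothetically, that an upstream optical character recognition step has already turned each photographed answer into text (which is where the real engineering difficulty lies); the checking and mistake-tallying themselves are then straightforward:

    # Illustrative sketch of automated arithmetic checking, assuming an OCR
    # step has already extracted problems and answers from the photo.
    import operator
    from collections import Counter

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    mistake_log = Counter()  # aggregated wrong answers reveal common errors

    def check(problem: str, student_answer: int) -> bool:
        """Check a problem like '12 - 5' against the student's answer."""
        left, op, right = problem.split()
        correct = OPS[op](int(left), int(right))
        if student_answer != correct:
            mistake_log[(problem, student_answer)] += 1
        return student_answer == correct

    # A hypothetical batch of extracted homework lines from several students.
    for problem, answer in [("3 + 4", 7), ("6 * 7", 42), ("12 - 5", 8), ("12 - 5", 8)]:
        check(problem, answer)

    print(mistake_log.most_common(1))  # [(('12 - 5', 8), 2)]

Aggregating mistakes across tens of millions of checked problems per day, as the company claims to do, is what turns a grading convenience into the kind of data asset discussed above.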
India

The 30,000-Foot View

In June 2018, India published its National Strategy for Artificial Intelligence, a document produced by NITI Aayog, a Government of India policy think tank. The strategy was the result of a 2018 push by India's Finance Minister to guide research and development in new and emerging artificial intelligence technologies, with the aim of establishing India as a global leader in artificial intelligence—under the "unique brand of #AIforAll" ("National Strategy" 2018, 5). "An integral part of India's strategy for AI," it reads, "involves tackling common and complex global challenges that can be solved through technology intervention, and India's scale and opportunity landscape provides the ideal test-bed to ensure sustainable and scalable solutions" (ibid., 6). To that end, the document discusses research and development priorities in areas such as health care, agriculture, smart mobility, retail, manufacturing, and energy (ibid., 20). Much like the strategies released by the American and Chinese governments, this approach recognizes the full range of sectors that could be positively affected by the development of AI technologies. There is a positive outlook on the future of AI, despite clear concerns about privacy, security, and other ethical and regulatory issues.

Of the three countries analyzed in this chapter, India is the only one whose government strategy addresses at length both the need for education to better prepare an AI-era workforce and the role that AI can play in improving education itself. "AI has the potential to bring about changes in the [education] sector by supplementing pedagogy and establishing systems to inform and support decision making across stakeholders and administrative levels," it reads (ibid., 36). The document highlights five key areas where AI tools can be "adapted to the Indian context
to target specific challenges": adaptive learning tools for customized learning; intelligent and interactive tutoring systems; predictive tools to preempt possible student dropouts; automated management of teacher posting and transfer systems; and customized professional development courses (ibid., 37–38).

Current AI Implementations in Public Education

Paradoxically, despite the thought that Indian government, industry, and civil society have given to how artificial intelligence could improve public education, few tangible implementations presently exist in the country. But the AI startup ecosystem's focus on education is notable in India, and there are likely to be more actual implementations of AI in Indian public education in the near future. A few examples highlighted below capture this reality.

In the fall of 2019, Microsoft India partnered with India's Central Board of Secondary Education to integrate educational technology into the classroom. This partnership includes a focus on AI, whereby 1,000 teachers nominated by the Central Board will undergo a three-day training that includes how to leverage artificial intelligence in the classroom. "AI has become a strategic lever for economic growth across nations around the world," a general manager at Microsoft India declared, and the partnership will help "transform the education ecosystem with the power of AI and the cloud" (Mathur 2019).

Several public schools are also using natural language processing systems to assist with teaching English to students. EnglishHelper, a US-based technology platform, partnered with the Maharashtra government in July 2019 to roll out its platform across the state, with the goal of eventually covering 100,000 Indian schools (Sangani 2019). One estimate puts the software company's current reach within India at almost 8 million students (Acharya 2020).

These initiatives are a likely indication of what is to come. For instance, one study found that out of 300-plus Indian startups with AI as a "core product offering," 11% were in the education sector. Yet, writes one observer, "while this is a great sign that there is progress being made to make education efficient using AI, no single application in this regard has come on top in India, as opposed to China where we can see multiple of such companies" (Chawla 2019).
Further, the focus that India's national, state, and local governments place on AI's potential for changing public education stands out compared to other countries. Coupled with the Indian government's general focus on using technology to boost the economy and other government functions, this may produce top-down pushes to implement more of these kinds of technologies in public schools as well.
AI and Existing Educational Structures

The use of artificial intelligence in public education holds particular promise with respect to student interactions with teachers, and with curriculum development and innovation. Of course, the implementation of AI applications in public education will not occur without challenges (discussed in the next section). From public schooling systems' slowness to adopt new technologies in the classroom to institutionalized access barriers, the adoption of AI systems in education is shaped by constraints that already affect the use of other technologies in schools and the quality and delivery of education outside the technology sphere. For these reasons, artificial intelligence in public education bears thought-provoking relationships to school demographics, curriculum innovation, and cultural values.

First and foremost, when discussing AI use and school demographics, there is the consideration that surveillance systems are most often turned against already underrepresented, oppressed, or marginalized communities. In other words, surveillance hits the marginalized hardest. In the United States, for instance, this has been true from the tracking of enslaved peoples with so-called plantation ledgers (Bell 2018) to the surveillance of female suffragists in the early 1900s (Travis 2003) to government spying on civil rights activists in the 1960s (Kayyali 2014) to post-9/11 surveillance of Muslim communities (Apuzzo and Goldman 2011). Digital surveillance systems are a continuation of this history, and AI surveillance tools are also faster, cheaper, and more scalable than surveillance systems that are either manual or digital without intelligent automation. For this reason, they may be harder to resist, remove, or overhaul once implemented—and once dependencies upon these systems are created (Feldstein 2019). This applies not just to AI data collection and analysis in public school systems but also more widely, such as to AI-powered surveillance in city downtowns or at large entertainment events like concerts or the Olympics.
When it comes to public schools, there are concerns that surveillance powers "are likely to be wielded disproportionately against students of color, who already face disciplinary bias at school," as the Brennan Center for Justice's Rachel Levinson-Waldman argues. Particularly in light of American schools using AI for safety and security ends, there are also risks that AI use in public schools could harm demographic groups already prone to the school-to-prison pipeline and populations that already face inequity in public education (Kofman 2018). It is possible, for instance, that lower-income students in India could be subject to AI surveillance in schools without the power to effectively resist administrators, or that students in China who do not want surveillance in schools have no legal protections under which to complain to the government. For all these reasons, AI in public education relates not just to the demographics of the students in school themselves but to the broader communities in which those schools reside.

Stefanie Coyle and Naomi Dann at the New York Civil Liberties Union, in looking at uses of facial recognition in New York public schools, did not just find that AI surveillance "could turn our school environment from one of learning and exploration into one of suspicion and control"; they also found a lack of community input in the use of these systems. "The decision to implement this technology using funding from the Smart Schools Bond Act," they wrote, "appears to have been made without sufficient public involvement as required by law." Absent strong regulations, there was nothing to stop the particular individuals in question from self-dealing and implementing AI surveillance systems without community input (Coyle and Dann 2018).

There are also risks of AI systems themselves making discriminatory decisions. Documented bias in artificial intelligence tools—when systems violate certain definitions of decision fairness—underscores potential issues with hurriedly implementing AI in public education, or with implementing certain AI tools at all. Facial recognition systems trained mostly on light-skinned faces, for instance, often produce wildly inaccurate and inconsistent outputs when analyzing darker-skinned individuals. Natural language processing tools optimized to understand certain languages, dialects, and kinds of voices, to use another example, could likewise perform significantly worse on the voices of those for whom the algorithm was not optimized during development. When implementing AI in public education, then, where there are already notable disparities on the basis of socioeconomic class,
race, ethnicity, and geographic region, among other measures of identity, educators and policymakers will have to grapple with existing structural challenges and inequities. India's AI strategy recognizes this fact, noting: "AI has the potential to bring about changes in the sector by supplementing pedagogy and establishing systems to inform and support decision making across stakeholders and administrative levels. However, implementation of AI must be preceded by efforts to digitise records of teacher performance, student performance, and curriculum" ("National Strategy" 2018, 36–37).

Ideally, AI would thus be implemented in ways that optimize equity and fairness. In some cases, this could mean optimizing a technical definition of algorithmic fairness in an AI application, such as ensuring that natural language processing tools used in a classroom setting work equally well for all students (a minimal sketch of such a check appears at the end of this section). In other cases, however, this could mean delaying or abandoning the adoption of certain AI applications in a classroom if their use would only exacerbate existing inequalities between demographic groups. There is potential for artificial intelligence to greatly benefit public education in the context of school demographics—for instance, highly customized uses of AI deployed and regulated so as to eliminate educational disparities between populations, such as between remote rural and connected urban communities in India—but these risks cannot be overlooked.

Second, one of the most discussed uses of AI in the classroom is changing the way teachers teach and learners learn. The opportunities are greater individualized learning and instructor flexibility. The challenges are the equal distribution of AI-related resources and instructors' ability to integrate AI material into lesson plans and to smoothly implement AI use in daily activities. For example, natural language processing technology could theoretically be used to increase student engagement in remote or digital learning environments (i.e., through online classes). It could also adapt to individual learning styles, such as visual vs. textual learners. It could be a resource in classrooms whose students have disparate levels of preparation, or help target the material needed by the greatest number of students.

On the challenges side, there may be students who are more adept at technology use than others, which may or may not be germane to the
subject being studied. At the same time, teachers may have different levels of training and skill with AI-related programs and platforms.

Third and finally, AI is a tool. John Street's foundational work Politics and Technology argues that technology is mainly instrumentalist rather than determinist, that is, people and groups with very different values can use the same technology for radically different ends (Street 1992). Even though we have not made the same argument here, this project does support the idea that different countries—and the cultures embedded in them—may adapt and implement educational goals differently while using similar AI-based platforms. The beauty of technology is its adaptability, but that adaptability can also be a hurdle when trying to implement ideas across cultures. Cultural values may show up at the macro level in national goals, plans, and architectures, or be manifest in the different ways that programs are implemented at the micro level of the individual student.

For example, different cultures have different views on privacy, and those views are far more nuanced than country-level generalizations suggest. Low-income communities of color in the United States, for instance, against whom institutionalized surveillance has been most strongly applied, may be less trusting of data collection systems than high-income white communities, who (accurately) perceive less potential harm to themselves from that data aggregation. In China, citizens have accepted—to a certain extent—that the government will monitor their activities both online and through the social credit system (Ma 2018). In each country's schools, these privacy values play out differently: Chinese schools use facial recognition software to monitor attendance, while in the United States, "schools have mentioned … that it'd be a convenient tool for taking attendance, but on the whole, … they recognize the costs aren't worth the benefits" (Tate 2019).
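As a minimal illustration of the kind of technical fairness check mentioned in this section, one common approach is to disaggregate a model's error rates by demographic group and flag large gaps. The groups, records, and tolerance below are invented for the example; real audits require far more care in defining groups, metrics, and thresholds:

    # Illustrative sketch of a disaggregated accuracy audit for a classifier.
    from collections import defaultdict

    # Hypothetical evaluation records: (group, true_label, predicted_label).
    records = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
        ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
    ]

    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        hits[group] += int(truth == prediction)

    accuracy = {g: hits[g] / totals[g] for g in totals}
    print(accuracy)  # {'group_a': 0.75, 'group_b': 0.5}

    MAX_GAP = 0.05  # assumed tolerance for accuracy disparity between groups
    gap = max(accuracy.values()) - min(accuracy.values())
    if gap > MAX_GAP:
        print(f"Warning: accuracy gap of {gap:.2f} across groups; revisit deployment")

Equalized accuracy is only one of several competing fairness definitions, and some of them cannot be satisfied simultaneously, which is part of why these remain policy decisions rather than purely technical ones.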
Charting a Path Forward

Artificial intelligence applications like automated homework graders or educational content generation systems do not exist independently of social, political, and economic structures. If implemented well, they may in some cases lower or remove barriers to providing quality public education in different countries, but those barriers may still prove too much to surmount with technology alone. Further, it can be very challenging to improve public education when there are other complicating factors like limited access to electricity, limited access to the internet,
teacher unfamiliarity with educational technology, and a lack of funding to provide certain technologies or to train teachers in their use. All of that said, the relatively low cost, speed, and online availability of AI-powered systems promise ways for public education systems to improve their offerings. So while "even the most advanced governments have little insight into what sort of R&D to fund" given the uncertainty of AI's future (Agrawal et al. 2016), we can still see and imagine the promise of AI for improving public education. It comes down to how governments, the private sector, civil society, educators, and students—and the communities in which they reside—want to use these systems.
Bibliography

Acharya, Nish. 2020. "Challenging the Low Expectations of the United States—India Relationship." Forbes. February 25. Accessed December 3, 2020. https://www.forbes.com/sites/nishacharya/2020/02/25/challenging-the-low-expectations-of-the-united-states–india-relationship/#2ac170393700.

Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. 2016. "The Obama Administration's Roadmap for AI Policy." Harvard Business Review. December 21. Accessed December 3, 2020. https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy.

Alba, Davey. 2019. "The First Public Schools in the US Will Start Using Facial Recognition Next Week." BuzzFeed News. May 29. Accessed December 8, 2020. https://www.buzzfeednews.com/article/daveyalba/lockport-schools-facial-recognition-pilot-aegis.

Allen, John R. 2019. "Why We Need to Rethink Education in the Artificial Intelligence Age." January 31. Accessed December 8, 2020. https://www.brookings.edu/research/why-we-need-to-rethink-education-in-the-artificial-intelligence-age/.

Apuzzo, Matt, and Adam Goldman. 2011. "With CIA Help, NYPD Moves Covertly in Muslim Areas." Seattle Times. August 25. Accessed December 3, 2020. https://www.seattletimes.com/seattle-news/politics/with-cia-help-nypd-moves-covertly-in-muslim-areas/.

"Artificial Intelligence Index: 2019 Report." 2019. Stanford Human-Centered Artificial Intelligence Institute—Stanford University. Accessed December 8, 2020. https://hai.stanford.edu/sites/default/files/ai_index_2019_report.pdf.

Bell, Sam Adler. 2018. "Privacy for Whom?" The New Inquiry. February 21. Accessed December 3, 2020. https://thenewinquiry.com/privacy-for-whom/.
Cerf, Vint. "A Brief History of the Internet & Related Networks." Internet Society. Accessed December 8, 2020. https://www.internetsociety.org/internet/history-internet/brief-history-internet-related-networks.

Chawla, Vishal. 2019. "How China Is Revolutionising Education Using Artificial Intelligence." Analytics India Magazine. August 26. Accessed December 3, 2020. https://analyticsindiamag.com/china-artificial-intelligence-education/.

Congressional Research Service. 2019. "Artificial Intelligence and National Security." Federation of American Scientists. Accessed December 8, 2020. https://fas.org/sgp/crs/natsec/R45178.pdf.

Connor, Neil. 2018. "Chinese School Uses Facial Recognition to Monitor Student Attention in Class." The Telegraph. May 18. Accessed December 3, 2020. https://www.telegraph.co.uk/news/2018/05/17/chinese-school-uses-facial-recognition-monitor-student-attention/.

Coyle, Stefanie, and Naomi Dann. 2018. "We Asked for Answers on Facial Recognition in Schools. Our Questions Remain." New York Civil Liberties Union. August 28. Accessed December 3, 2020. https://www.nyclu.org/en/news/we-asked-answers-facial-recognition-schools-our-questions-remain.

Dai, Sarah. 2019. "Chinese University Uses AI to Check Class Attendance Rates and Find the Reasons Behind Absenteeism." South China Morning Post. March 18. Accessed December 3, 2020. https://www.scmp.com/tech/policy/article/3002107/chinese-university-uses-ai-check-class-attendance-rates-and-find.

Ding, Jeffrey. 2018. "Deciphering China's AI Dream." University of Oxford. Accessed December 8, 2020. https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf.

Feldstein, Steven. 2019. "The Road to Digital Unfreedom: How Artificial Intelligence Is Reshaping Repression." Journal of Democracy 30, No. 1. https://www.journalofdemocracy.org/articles/the-road-to-digital-unfreedom-how-artificial-intelligence-is-reshaping-repression/.

Hao, Karen. 2019. "China Has Started a Grand Experiment in AI Education." MIT Technology Review. August 2. Accessed December 3, 2020. https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/.

Haskins, Caroline. 2019. "Gaggle Knows Everything About Kids and Teens in School." BuzzFeed News. November 1. Accessed December 8, 2020. https://www.buzzfeednews.com/article/carolinehaskins1/gaggle-school-surveillance-technology-education.

Herrold, Benjamin. 2019. "Florida Plan for a Huge Database to Stop School Shootings Hits Delays, Legal Questions." Education Week. May 30. Accessed December 8, 2020. https://www.edweek.org/ew/articles/2019/05/30/florida-plan-for-a-huge-database-to.html.
Houser, Kristin. 2017. "The Solution to Our Education Crisis Might Be AI." Futurism. December 11. Accessed December 8, 2020. https://futurism.com/ai-teachers-education-crisis.

Jeng, Ming. 2019. "Tencent-Backed AI Firm Aims to Free Up Parents and Teachers from Checking Children's Maths Homework." South China Morning Post. February 11. Accessed December 3, 2020. https://www.scmp.com/tech/start-ups/article/2185452/tencent-backed-ai-firm-aims-free-parents-and-teachers-checking.

Kayyali, Dia. 2014. "The History of Surveillance and the Black Community." Electronic Frontier Foundation. February 13. Accessed December 3, 2020. https://www.eff.org/deeplinks/2014/02/history-surveillance-and-black-community.

Knight, Will. 2019. "Why Does Beijing Suddenly Care About AI Ethics?" MIT Technology Review. May 31. Accessed December 3, 2020. https://www.technologyreview.com/s/613610/why-does-china-suddenly-care-about-ai-ethics-and-privacy/.

Kofman, Ava. 2018. "Face Recognition Is Now Being Used in Schools, But It Won't Stop Mass Shootings." The Intercept. May 30. Accessed December 3, 2020. https://theintercept.com/2018/05/30/face-recognition-schools-school-shootings/.

Ma, Alexandra. 2018. "China Has Started Ranking Its Citizens." Business Insider. October 29. Accessed December 3, 2020. https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4.

Mathur, Nandita. 2019. "Microsoft, CBSE Join Hands to Build AI Learning for Schools." Live Mint. September 5. Accessed December 3, 2020. https://www.livemint.com/companies/news/microsoft-cbse-join-hands-to-build-ai-learning-for-schools-1567681716865.html.

Mazzucato, Mariana. 2013. The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London: Anthem Press.

National Science & Technology Council. 2016a. "Preparing for the Future of Artificial Intelligence." Executive Office of the President. Accessed December 8, 2020. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

National Science & Technology Council. 2016b. "The National Artificial Intelligence Research and Development Strategic Plan." Executive Office of the President. Accessed December 8, 2020. https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.

National Science & Technology Council. 2019. "The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update." Executive Office of the President. Accessed December 8, 2020. https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.
"National Strategy for Artificial Intelligence." 2018. NITI Aayog—Government of India. https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.

Roberts, Huw, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang, and Luciano Floridi. 2019. "The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation." Social Science Research Network. October 23. Accessed December 8, 2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3469784.

Sacks, Samm. 2019. Written Testimony to the Senate Committee on Commerce, Science, and Transportation's Subcommittee on Security, March 7. Page 5.

Sangani, Priyanka. 2019. "Startups Turn to AI Improve Teaching Quality at Government-Run Schools." Economic Times. October 4. Accessed December 3, 2020. https://economictimes.indiatimes.com/small-biz/startups/features/startups-turn-to-ai-improve-teaching-quality-at-government-run-schools/articleshow/71433816.cms?from=mdr.

Sheehan, Matt. 2018. "How China's Massive AI Plan Actually Works." MacroPolo. February 12. Accessed December 3, 2020. https://macropolo.org/analysis/how-chinas-massive-ai-plan-actually-works/.

Sherman, Justin. 2019. "The Pitfalls of Trying to Curb Artificial Intelligence Exports." World Politics Review. June 9. Accessed May 26, 2021. https://www.worldpoliticsreview.com/articles/27919/the-pitfalls-of-trying-to-curb-artificial-intelligence-exports.

Street, John. 1992. Politics and Technology. New York: Guilford Press.

Tate, Emily. 2019. "With Safety in Mind, Schools Turn to Facial Recognition Technology. But at What Cost?" EdSurge. January 31. Accessed December 3, 2020. https://www.edsurge.com/news/2019-01-31-with-safety-in-mind-schools-turn-to-facial-recognition-technology-but-at-what-cost.

The One Hundred Year Study on Artificial Intelligence Study Panel. 2016. "Artificial Intelligence and Life in 2030." September 2016. Accessed December 8, 2020. https://ai100.sites.stanford.edu/sites/g/files/sbiybj9861/f/ai100report10032016fnl_singles.pdf.

Travis, Alan. 2003. "Big Brother and the Sisters." The Guardian. October 9. Accessed December 3, 2020. https://www.theguardian.com/world/2003/oct/10/gender.humanrights.

Triolo, Paul, Elsa Kania, and Graham Webster. 2018. "Translation: Chinese Government Outlines AI Ambitions Through 2020." New America. January 26. Accessed December 8, 2020. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/.
Webster, Graham, Paul Triolo, and Elsa Kania. 2017. "Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017)." New America. August 1. Accessed December 8, 2020. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
CHAPTER 5
Chinese and U.S. AI and Cloud Multinational Corporations in Latin America Maximiliano Facundo Vila Seoane
Introduction

Many analysts and pundits believe that Artificial Intelligence (AI) is becoming a major disruptive force in world order. Research has so far mainly focused on AI's potential effects on the balance of power, particularly the struggle for digital leadership between the U.S. and China, and to a lesser extent India, Russia, and the European Union.1 In a nutshell, (neo)realist readings assume that to understand the impact of AI on the world order, only states matter, specifically Great Powers and their militaries. The remaining nations will become data colonies, mere norm takers, or passive adopters of technologies developed elsewhere. By contrast, this chapter argues that the so-called Global South does matter, since it has become the playing field where the main multinational corporations (MNCs) of leading nations compete in exporting their AI technologies and services, shaping a private governance of AI.
M. F. Vila Seoane (B) National University of San Martín (UNSAM), Buenos Aires, Argentina e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Keskin and R. D. Kiggins (eds.), Towards an International Political Economy of Artificial Intelligence, International Political Economy Series, https://doi.org/10.1007/978-3-030-74420-5_5
Although the academic field of AI is almost seven decades old, recent advances in data, algorithms, and digital infrastructures have brought it onto the agenda of high politics. Indeed, the proliferation of digital devices has paved the way to a massive increase in data, producing a data deluge amenable to sophisticated analysis. These big data require specific algorithms to detect patterns, among which deep learning approaches have produced outstanding results.2 Moreover, these algorithms are freely accessible to programmers through open-source deep learning frameworks, many of them developed by leading U.S. Big Tech companies, such as TensorFlow by Google or PyTorch by Facebook. Yet these breakthroughs would have been impossible without more powerful Graphics Processing Units (GPUs). Although such processing capacity is expensive, access to it at scale has become easier by buying computing power from cloud companies. This blurry concept refers to the provision of computing capability as a service,3 which was the dominant model of computing before the invention of the PC.4 Recognizing the importance of computing power for AI, MNCs are moving fast to build global networks of data centers, that is, huge infrastructures consuming considerable energy, in order to provide the computing power needed for AI services for their clients across the globe. Hence, if we want to understand the societal impacts of AI, we need to examine AI-Cloud-MNCs,5 which are at the frontier of developing and globally disseminating AI and the required cloud infrastructure.

Latin American states lack the capabilities to compete at the frontier of AI. Yet these states have many firms and citizens capable of producing the critical resource of the AI-based economy: data.6 Not surprisingly, we find that several Big Tech corporations, mainly from the U.S. and to a lesser extent from China, are offering AI-based products and services through the cloud to citizens, firms, and governments to help them embrace digitalization, a process that the COVID-19 pandemic has accelerated. Consequently, Latin American states are becoming increasingly entangled with foreign AI-Cloud-MNCs that are shaping a private governance of AI and the cloud. This raises the following questions: What political strategies do MNCs from different home countries employ to shape the global governance of AI and the cloud in Latin America? What are the potential implications of these trends for development? How are states and civil society organizations engaging with these enterprises?
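Before turning to the approach, it is worth making the accessibility point above concrete: with an open-source framework such as PyTorch, defining and training a small deep learning model takes only a few lines, as in the illustrative toy example below. What remains scarce is not the algorithm but the data and the computing power (GPUs, typically rented from cloud providers) needed to run such models at scale.

    # Toy illustration: open-source frameworks put deep learning within reach
    # of any programmer; scale, not code, is the barrier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    x, y = torch.randn(64, 10), torch.randn(64, 1)  # random toy data
    for _ in range(100):                             # minimal training loop
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # Scaling up means buying GPU time as a service from a cloud provider,
    # not writing fundamentally different code.
    model.to("cuda" if torch.cuda.is_available() else "cpu")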
In order to shed light on these issues, the chapter employs a Neo-Gramscian perspective, together with concepts from the critical global political economy literature, to compare the material, discursive, and organizational resources mobilized by such MNCs in attempting to establish a hegemony over the governance of AI and the cloud. Methodologically, the chapter uses a case study approach, with MNCs as the unit of analysis. Specifically, I analyze the most relevant AI-Cloud-MNCs active in Latin America in terms of their home country and global market share,7 namely Amazon and Microsoft from the U.S. and Huawei from China.

The main argument is that the spread of AI and the cloud will not necessarily usher in a new era of abundance, nor generate a new condition of dependence, but will rather accentuate problematic processes of uneven and combined development in the region. This is a consequence of the ongoing war of position8 among U.S. and Chinese AI-Cloud-MNCs to spread and govern these technologies, which so far is targeted at the most resourceful actors. Besides, states have been engaging in different ways with these foreign corporations, depending on the specific social forces in each country, their preexisting relations with the MNCs' home countries, and their technological capabilities, while counterhegemonic social forces criticizing these foreign AI-Cloud-MNCs remain scarce.

The rest of the chapter is organized as follows. Section "Analytical Approach" outlines the analytical approach. Section "Digitalization and Latin America" describes relevant macro trends in the region regarding digitalization, followed by a synthesis of the main AI policies in the AI-Cloud-MNCs' home states. Section "AI-Cloud-MNCs Strategies in Latin America" characterizes the material, discursive, and organizational strategies employed by AI-Cloud-MNCs in Latin America, whereas section "Uneven and Combined Development" describes how they contribute to processes of uneven and combined development. Finally, section "Latin American Strategies to Face AI-Cloud-MNCs" examines how states and civil society are responding to these strategies by AI-Cloud-MNCs operating in the region.
Analytical Approach

This section introduces the analytical approach used to critique the nascent AI governance in the Global South, which draws ideas from the literature
on business strategy, international environmental governance, and uneven and combined development. Levy and Newell's9 seminal contribution employed ideas inspired by Gramsci's analysis of hegemony to comprehend the political dimension of international environmental governance. In contrast to previous perspectives that focused on the interaction between MNCs and states,10 the authors stressed the multiple other actors that may also influence governance processes. This approach also goes beyond a purely economic analysis by indicating that other sources of power are equally important. Indeed, in order to comprehend how MNCs attempt to shape a regime, Levy and Newell propose studying the discursive and organizational strategies of such corporations alongside their material power. The latter covers, for example, the financial and infrastructural resources that a firm can mobilize, whereas discursive strategies include the concepts and slogans that corporations use to present their products and services in a positive light. Finally, the organizational dimension refers to the types of alliances that MNCs establish with states, other firms, NGOs, and intergovernmental organizations to protect and legitimize their market shares.

As regards Bieler and Morton's historical materialist approach to the international economy, it rejects the fictitious clear-cut separation of concepts, such as dividing the national from the international, or considering the state a homogeneous and indivisible unit distinct from other social forces.11 Instead, this perspective has a radical ontology that accepts internal relations among concepts, such as the forces of production, state-society relations, and class struggle. Three concepts from this approach matter for understanding the emerging patterns of Global South relations with foreign AI-Cloud-MNCs: (a) class struggle; (b) interstate competition; and (c) uneven and combined development.

First, a historical materialist perspective assumes that capital, as a social relation, creates divisions between those who own it and the means of production and those who must sell their labor power to survive.12 This leads to specific social-property relations that vary over time; thus, it is not a deterministic economic approach. On the contrary, the social forces and struggles that unfold in varied historical contexts are at the center of the analysis. These social forces operate within the boundaries of preexisting material structures; in this perspective, class struggle
is the link between structure and agency. Therefore, this historical materialist approach departs from realist, state-centric analyses based on dichotomous understandings of states and markets.

Second, the need for social reproduction by capital and labor through the market explains the competitive drive in capitalism, which incentivizes innovation within and among countries,13 encouraging economic rivalry in world politics. Indeed, with any new wave of technological innovation, states are pressured to imitate the leading countries in developing more productive relations of production to avoid lagging behind, as AI currently illustrates. Firms from leading countries also need to expand to other markets to prevent the problem of overproduction14 or excessive competition. Yet internationalization processes depend on the specific social forces and their class struggles within firms' countries of origin, and on how these are internalized by the recipient countries.15 In this analysis, the role of ideas matters. Constructivists and post-structuralists have made significant contributions along these lines; however, they are unable to explain why some ideas come to be expressed and not others.16 By contrast, a historical materialist approach understands that the ideas shaping such internationalization processes depend on the existing material structures and on the agency of specific, historically situated social forces and their organic intellectuals.17

Finally, not all states relate equally to the internationalization of foreign MNCs, because these processes are shaped by each state's links to global capitalism. This is precisely the intuition that the concept of Uneven and Combined Development (U&CD), introduced by Trotsky, attempts to capture. Instead of assuming that development is a linear process, U&CD accepts that there are multiple paths of development that societies might follow, shaped by how different sectors within a society connect to global capitalism. Taking the case of Russia, Trotsky18 showed how some sectors were linked to the most advanced methods of capitalist production, whereas others remained excluded from such processes, thus the uneven. However, the advanced sectors in Russia operated alongside stagnating ones, producing a set of production relations specific to the country, hence the combined. It is important to observe that Rosenberg19 has recently expanded these ideas to claim that U&CD is a social theory of the international; however, the use of the concept in this chapter is limited to the geographically unequal nature of the capitalist development process.20 This is pertinent because it challenges optimistic neoliberal beliefs about the positive outcomes derived from free trade.
In sum, the categories provided by Levy and Newell are useful for characterizing the operations of AI-Cloud-MNCs in the Global South, whereas the concepts developed by Bieler and Morton help explain the specificities of the emerging patterns of relations between states and foreign AI-Cloud-MNCs.
Digitalization and Latin America

In order to contextualize how Latin American states link to foreign AI-Cloud-MNCs, it is important to bear in mind three macro-trends shaping digitalization in the region: the Science, Technology, and Innovation (STI) deficit; the increased competition between the U.S. and China; and recurrent political instability.

The first trend is the persistent STI deficit in the region. While in developed countries there are dense links between firms and universities to commercialize new knowledge, Latin American universities remain scarcely linked to regional firms.21 Additionally, in 2017, total R&D investment in Latin American countries represented only 3.1% of global investment,22 most of it coming from the public sector (58%), whereas in developed countries the private sector leads. There is an additional divide among states in the region, since most of the resources assigned to STI are concentrated in Brazil, México, and Argentina.23 In this landscape, it is not surprising that in the specific field of AI, regional actors hardly appear in global rankings. Consequently, there is a clear need for partnerships with foreign AI-Cloud-MNCs to access such state-of-the-art technology.

The rise of China as a key investor and trading partner for many Latin American states is a second relevant and recent trend.24 According to the U.S. foreign policy establishment, this is unacceptable meddling with its historical influence in the region. Although such threat perception is not new, it has intensified since the expansion to Latin America of President Xi Jinping's landmark project, the Belt and Road Initiative (BRI).25 Indeed, since the Trump administration launched its so-called Trade War against China, U.S. foreign policy has systematically attempted to undermine the Asian superpower's global influence. This includes biased criticism of the BRI, the spread of conspiracy theories about China's response to COVID-19, and the notorious boycott campaign against Huawei, among others. In this context, Secretary of State Mike Pompeo has popularized the realist reading that countries will have to
pick a side "[…] between freedom and tyranny."26 Latin American states are not oblivious to such disputes, since several Chinese technology firms have been operating regionally, such as Alibaba, Didi, Huawei, Lenovo, Xiaomi, and ZTE, to name a few. Unsurprisingly, the Trump administration broke with the unwritten rule that a Latin American leads the Inter-American Development Bank, selecting instead a Cuban American hardliner, Mauricio Claver-Carone, who has publicly stated that he seeks to instrumentalize the organization to push back against China's spread into the region.

Yet the influence that the U.S. or China may have in favoring their corporations in Latin American states hinges on changing regional and national politics. While the ideas of the Washington Consensus prevailed during the 1990s, the first decade of the twenty-first century saw the rise to power of the Pink Tide. This term covered different left-leaning governments that put the state back at the center of the development process, chiefly to address social inequalities, based on the export of natural resources27 and closer relations with China. However, the fall of commodity prices smoothed the path to a change in the political cycle. Many countries, such as Argentina, Brazil, and Ecuador, swayed again to the center- and extreme-right of the political spectrum. These and other like-minded nations adapted to President Trump's America First policy by attempting to close the best possible deals with their powerful Northern neighbor, in many cases echoing Trump's anti-China rhetoric and undermining regional integration processes.28 However, these right-wing parties remain highly contested, and so does the foreign policy approach to take with China and the U.S. In fact, after national elections, left-leaning parties are back in power in Argentina and Bolivia, whereas the aftermath of the pandemic has put other right-wing governments under serious stress.
AI Policies in the U.S. and China

Interstate competition is important for understanding the profit-oriented logics of firms internationalizing abroad. Thus, the expansion of AI-Cloud-MNCs to Latin America cannot be analyzed separately from the AI policies of their home states. Below I briefly synthesize the main trends in China and the U.S., which mold the strategies of their national AI-Cloud-MNCs.
The U.S. government is actively supporting AI. In February 2019, the government released the American AI Initiative,29 which aims to ensure U.S. leadership in this area, which is why it opposes overregulation.30 Besides, support for AI is justified by the need to protect the country's economic and national security (Salas-Pilco, Chapter 9 in this book). These aims must be understood in the context of the broader technological competition with the country's main strategic rival, China. To face this challenge, the U.S. government is fostering public-private partnerships; for instance, executives of the most important technology firms, such as Alphabet, Facebook, and Microsoft, already sit on the government's Defense Innovation Board, contributing to the militarization of AI (Arif, Chapter 10 in this book). Despite such strategic interest in AI, its deployment remains contested. For example, several human rights organizations have seriously questioned the implementation of AI systems that may endanger civil and political rights, such as facial recognition. Likewise, even leading technology firms have faced stiff internal opposition from employees who oppose developing AI systems for questionable military and surveillance projects. As regards Latin America, in December 2019 the U.S. State Department launched an initiative competing with the BRI, Growth in the Americas, which aims to encourage private investment in regional infrastructures, including 5G, which is considered central to AI's future growth.

Regarding China, in 2017 the government released the Next Generation Artificial Intelligence Development Plan, which outlined steps to become the world leader by 2030.31 As in the case of the U.S., China is cultivating both civilian and military applications of AI (Salas-Pilco, Chapter 9 in this book). Furthermore, President Xi Jinping frequently mentions AI as part of the digital dimension of the BRI, which seeks to facilitate the internationalization of Chinese technology companies in partner countries. Several Chinese AI champions, such as Alibaba, Baidu, Huawei, and Tencent, are actively supporting these plans. Regarding discussions of AI ethics, China certainly lags behind the U.S. The implementation of a social credit system and the notorious use of AI in Xinjiang have already drawn criticism from Western states and human rights organizations. These cases undermine the legitimacy of the export of Chinese AI. Nonetheless, there are signs of citizen opposition to the unchecked use of AI, though far less than in the U.S.
AI-Cloud-MNCs Strategies in Latin America

The global expansion of MNCs from the U.S. and China is paving the way to distinct strategies of commercializing and governing AI and the cloud. It is important to observe from the outset that the empirical material shows the preeminence of U.S.-based MNCs in providing AI and cloud services. Due to space limits, I consider the two most important cases, AWS and Microsoft, which illustrate the tough competition between U.S. companies. Chinese firms do lag behind, but they have been ramping up their interest in the region, particularly Huawei. These three firms are building partnerships with governments, firms, and civil society organizations in order to legitimize their operations and build an AI hegemony. Yet none totally dominates, which points to an ongoing war of position among them. Table 5.1 synthesizes the main features of the three AI-Cloud-MNCs, which are further analyzed below.

Table 5.1 Features of the main Chinese and U.S. AI-Cloud-MNCs operating in Latin America

                                     Amazon                      Microsoft         Huawei
AI and cloud business unit           Amazon Web Services (AWS)   Microsoft Azure   Huawei Cloud
Home country                         US                          US                China
Data centers in Latin America        1 in São Paulo              1 in São Paulo    1 each in Chile, Lima,
                                                                                   México City, Buenos Aires,
                                                                                   and São Paulo
Revenue (2019), billion US$          280.52                      125.5             121.72
Net income (2019), billion US$       11.59                       39.24             N/A
Research & Development (2019),       35.93                       16.88             17.4 (estimated)
billion US$

Amazon Web Services (AWS)

AWS is the cloud business unit of Amazon, which began providing IT infrastructure as a service in 2006. Pioneering this business opportunity, AWS has become the largest company in terms of IaaS market share
(47.8%), gaining 15.5 billion US$ in revenues in 2018. Besides infrastructure, AWS is the most influential firm in providing AI as a service by making state-of-the-art machine learning algorithms available to its clients. Organizationally, although AWS does not have decades-long experience in Latin America, its leading position in the cloud sector has paved the way for the company's expansion in the region. In 2011, Amazon opened its first data center in São Paulo, Brazil, which reduced the communication delays of its services for regional users.32 Since then, AWS has been enabling the digitalization of multiple actors in Latin America; for example, it provides the digital infrastructure for the main AI-based unicorns of the region, such as Rappi, the Colombian services delivery platform, and Nubank, a Brazilian fintech. Even MercadoLibre, the main regional e-commerce company that competes with Amazon, uses AWS. Public institutions, such as the Mexican national electoral institute, universities, and NGOs, are also clients of AWS cloud and AI services. In comparison with other U.S.-based firms, AWS does not advocate an ethical approach to AI. The firm has no public AI ethics board, nor has it established a committee to regulate how its services are used. Nor has it developed a specific discursive approach to attract new business partners in the region based on AI. This has raised criticism in the U.S., where law enforcement agencies have deployed AWS's facial recognition technologies for surveillance purposes, despite their far higher error rates with non-white people.33 In reply, AWS has made public a guide rejecting much of the criticism, blaming clients instead for improper application of its AI.34 Only after the repercussions caused by the death of George Floyd did the firm introduce a one-year ban on the use of its facial recognition technology by police forces in the U.S. Nonetheless, it is fair to conclude that AWS prioritizes keeping and augmenting its global market share in the cloud and AI over expressing concern for the unethical uses of such technologies.
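To make the notion of "AI as a service" concrete, the sketch below shows how a client typically consumes a managed face-detection endpoint through AWS's Python SDK (boto3): the client sends a reference to an image and receives model outputs, while the provider owns the underlying models and infrastructure. The bucket and file names are hypothetical placeholders, and the snippet is a generic illustration of this service model rather than a depiction of any deployment discussed in this chapter.

    # A minimal sketch of consuming AI as a service: send an image reference
    # to a managed cloud endpoint and receive model outputs. Running it
    # requires valid AWS credentials; bucket and object names are hypothetical.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "street-scene.jpg"}},
        Attributes=["ALL"],  # request age range, emotions, etc., not just boxes
    )

    # Each detected face comes back with a bounding box and a confidence score.
    for face in response["FaceDetails"]:
        print(face["BoundingBox"], face["Confidence"])

The design point is that the client writes no machine learning code at all; the model, its training data, and its error characteristics remain entirely under the provider's control.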
In Latin America, AWS's market and organizational power have become more visible since it announced plans to build a new regional data center. In a sort of remake of the competition between U.S. states to attract Amazon's second headquarters, Argentina's and Chile's governments have been competing to persuade AWS to build its data centers in their territories, striving to offer the best conditions. Leaks revealed that the Chilean Production Development Corporation, in charge of promoting national production and economic growth, classified AWS's project as strategic and thus deserving of a 9 million US$ subsidy.35 Likewise, Argentina pledged a reduction of taxes and labor costs to AWS; and given that the data center would operate in a free trade zone, the firm would not even pay taxes on energy consumption,36 an excessive concession for cloud infrastructure that consumes a great deal of energy. Both right-wing administrations prioritized attracting AWS under the promise that everyone would benefit equally from the externalities of such a project, a claim that needs to be seriously questioned, since it omits the fact that the profits to be extracted by AWS seem far more substantial. Nevertheless, by 2020, AWS's final decision remained unconfirmed, since the ongoing political and economic upheavals in both countries have clouded the stable economic outlook that the company expected.

Microsoft

Since its foundation in 1975, Microsoft has expanded into several business units, reaching a total market capitalization of over US$1.2 trillion; its material power is thus significant. The firm has built a sophisticated network of subsidiaries and business partners across the world, including in Latin America, where it has been investing for decades. Microsoft Azure is the firm's brand for cloud services, which provides several AI-related services. In 2013, the firm built its first data center in Brazil in order to meet regional demand and to compete with AWS. In contrast to Amazon, Microsoft has taken an advocacy role in promoting the potential of AI. This involves investment in several social and environmental initiatives under AI for Good, including a program named AI for Health that became highly demanded during the pandemic. This optimistic view of AI has been detailed in a book in which Microsoft stresses that this technology is a tool to augment human ingenuity,37 a catchphrase frequently repeated by its employees in presentations across Latin America. Thus, instead of fear, we ought to be open to the new opportunities and societal transformations that AI will unleash. However, the firm is also concerned about potential abuses of AI, such as facial recognition, which is why Microsoft calls for more regulation in this area in particular, and for global regulation of AI more generally. Along these lines, Microsoft has made public its principles for developing responsible AI systems and created internal structures to ensure such deployments, something absent at Amazon.
To disseminate these views in Latin America, the company has organized two Microsoft AI tours across selected countries in the region, each consisting of a national event where its subsidiaries showcase AI applications. The events are free and organized in trendy parts of the cities. Apart from marketing, the main aim is to sell paid technical certifications to those interested in learning Azure's services. Microsoft employees tout that they are democratizing AI by making such new technologies accessible to everyone.38 This global slogan resonates well in Latin American countries, which went through a process of democratization during the 1980s; thus, beyond access, the term evokes a shared appreciation for the importance of democracy, something that Chinese firms cannot pitch. During the Microsoft AI tours, the company has presented tailored reports for each country. The 2018 series was elaborated with an Argentinean think tank, CIPPEC, whereas the 2019 reports were written by a U.S.-based consultancy, DuckerFrontier.39 Overall, these reports reproduce Microsoft's main discursive strategy to justify the urgency of adopting AI. In a nutshell, they argue that if Latin American countries maximize AI adoption, it will lead to better-paid jobs, more profits for firms, and higher levels of economic growth based on increased productivity. The allure of AI that Microsoft describes is certainly attractive to local politicians and firms willing to spur sluggish regional economic growth. Despite the framing of ethical AI, in practice things do not always turn out as expected. For instance, in 2018, the government of the Argentinean Province of Salta announced that it had entered into a partnership with Microsoft to use AI to predict the names and addresses of teenagers likely to experience unwanted pregnancies. According to the provincial government, this foresight would allow the health services to address such a serious public health problem in advance. However, soon after the announcement, Argentine AI researchers denounced grave technical and conceptual errors in the system that cast doubt on its claims,40 such as predicting the future based on a problematic database of past unwanted teenage births. Even worse, the initiative was heavily criticized for its racist and misogynist presumptions, discriminating against lower-income teenagers instead of confronting the endemic sexist violence in the Province. Although Microsoft may not bear the whole responsibility for this outcome, this case illustrates the limits of the grandiloquent promises
by corporations to apply AI ethically to address urgent regional social challenges.

Huawei

Huawei was founded in 1987 by Ren Zhengfei as a private company headquartered in Shenzhen, originally providing network equipment to telecommunication carriers.41 From its early years, Huawei has followed a strategy of investing considerably in R&D, both in China and by strategically opening R&D centers in foreign markets to gain competitive advantages.42 These investments certainly paid off, allowing Huawei to evolve from a latecomer into an innovative company, even a leader in some areas, such as 5G. The internationalization of the company began in the 1990s with its expansion to developing countries, in order to gain trust and know-how in these markets before accessing more advanced ones.43 Since then, the firm has diversified into several business units, such as the consumer electronics market and its cloud business unit, Huawei Cloud.44 Huawei now operates in over 170 countries, with a valuation similar to those of leading U.S. technology companies. Regarding Latin America, Huawei began opening subsidiaries around two decades ago; hence, it already has knowledge of local markets and a network of partners and customers. In several countries it organizes periodic Huawei summits to showcase its products and services. In contrast to its U.S. competitors, Huawei Cloud launched data centers in smaller markets, such as Chile and Perú, both countries that had recently joined the BRI. There are two main differences between the AI discourse of Huawei and that of the other corporations analyzed in this chapter. First, Huawei makes a distinct bet on the technologies that will transform the future. For example, during the launch of Huawei's data center in Chile, the company's president, Edward Deng, said:

Cloud, AI, IoT, and 5G will be important drivers of digital transformation. Every industry can make significant progress by adopting these technologies. […] We will empower governments and enterprises across Latin America and facilitate regional economic development.45
This statement explicitly links AI to other technological drivers of digitalization, namely, the Internet of Things (IoT) and the fifth generation
of telecommunication networks (5G), which is expected to be up to 100 times faster than 4G and able to serve multiple devices simultaneously. As such, analysts perceive 5G as a game changer that will pave the way for massive applications of AI. As one of the leading firms controlling patents and know-how on 5G, Huawei is thought to have a competitive advantage over its foreign competitors, representing a serious threat to the commanding market share of U.S. AI-Cloud-MNCs. Second, the firm does not take an active global AI ethics advocacy position. Yet Huawei has released a White Paper on AI security, in which it recognizes the technical, societal, and legal challenges that AI's application faces and pledges to work together with all relevant stakeholders to develop new "[…] codes of conduct, standards, and laws to enhance AI security and privacy protection."46 More broadly, Huawei frames its services as a contribution to the sustainable development goals by stressing its aim of "[…] bridging the digital divide and promote digital inclusion."47 Along these lines, the firm has also launched several corporate social responsibility initiatives to train students from Latin America and to propel AI applications that address the digital divide, environmental challenges, and the COVID-19 pandemic, among others. Nevertheless, Huawei faces considerable challenges. The Trump administration banned the sale of semiconductors to Huawei, which has put the sustainability of many of its business units under considerable pressure. Furthermore, the U.S. has led a global demonization campaign against Huawei, which is accused of facilitating espionage for the Chinese state and exporting an authoritarian model of governance. This has paved the way for the U.S. Clean Network program, which aims "[…] to address the long-term threat to data privacy, security, human rights and principled collaboration posed to the free world from authoritarian malign actors."48 By December 2020, Brazil, Ecuador, and the Dominican Republic had joined the program, which is explicitly targeted against Huawei and other Chinese corporations.
Uneven and Combined Development

Despite the differences among foreign AI-Cloud-MNCs, their discursive power aims to convey the promise that AI will augment the capabilities of humans and firms. As such, there is nothing to fear from AI-powered innovations. As the argument goes, previous technological revolutions did cause job losses, but these were only temporary disruptions until
the workforce adapted. AI-Cloud-MNCs argue that it will be no different this time, and that they will provide the tools to embrace digitalization. This vision, advanced for obvious self-interested reasons, is also shared by techno-utopians and idealist engineers and scientists, in many cases naïve or insufficiently sensitive to the persisting asymmetries in our contemporary global capitalist economy. By contrast, techno-pessimists fear that this new wave of automation is not like the previous ones, but rather a far more threatening process that will considerably reduce employment, generating more global instability and even posing an existential menace to the human race. Instead of siding with these extreme positions, in the following I argue that AI will intensify processes of uneven and combined development in Latin America. One usual trope advanced by AI-Cloud-MNCs is that this new technology will unleash a spurt in productivity and economic growth. However, this claim neglects that in Latin America, the first adopters and main beneficiaries of AI-based innovations are the most dynamic export-oriented sectors, which specialize in the exploitation of natural resources. Unsurprisingly, the regional events organized by AI-Cloud-MNCs are targeted toward firms from these sectors, where they even showcase the successful use of AI by early adopters in agriculture, mining, and oil exploration. For instance, Fig. 5.1 shows Microsoft's poster advertising its 2018 AI event in Argentina. The image shows a female agricultural worker and a caption that asks: What are you going to achieve today?, in clear reference to how AI could help improve the productivity of the country's already very wealthy export-oriented agricultural industry. Even Huawei's CEO, when asked by journalists about potential AI applications in the region, answered that "[…] if Latin American countries can make better use of natural resources with artificial intelligence, they will generate a huge bonanza."49 The upshot of these trends is that it seems unlikely that every industry and citizen will benefit equally from the adoption of AI. Besides, statements about AI accelerating the export of natural resource-based industries should be seriously questioned, because they neglect the historical and very problematic nature of the natural resource curse that the region suffers, which most countries have been trying, unsuccessfully, to overcome. Surely, a few tech-savvy and resourceful start-ups may triumph in adopting AI solutions, but from the literature we know that innovation capabilities are geographically clustered and contribute to regional development unevenly.50
Fig. 5.1 Poster advertising Microsoft's AI event in downtown Buenos Aires (Source: photo by the author, March 2019)
Hence, it is unreasonable to expect a different outcome from AI. The future of work is another central issue in global debates on AI, and one that is even more problematic in Latin America, a region that has endured high levels of unemployment, underemployment, and informal employment for decades. Indeed, on average, 53% of the population in the region works in the informal economy.51 Although some workers may benefit from the few high-paid technical jobs in the formal economy, large parts of the population lack the skills and resources to become AI literate; thus, they may simply be excluded from the new, high-paying digital jobs that the AI revolution promises. Instead, they may lose
jobs to automation or become precarious workers for gig-economy AI-based platforms, such as Uber or Rappi, which are already causing social conflicts in many Latin American countries. Obviously, these are not new issues. It could even be argued that the creation of inequalities is a central aspect of capitalism.52 Besides, the transition to information and knowledge societies that began in the 1990s already brought the discussion of the growing digital divide to the fore. Indeed, in the case of the ICT sector, there are many countries where the success of its firms and workers contrasts notably with that of lower-productivity sectors. But AI has the potential to worsen these divides, since it hinges on capabilities, such as knowing how to code, mastering advanced mathematics, and having access to modern digital infrastructure, which take time to develop, something hard to accomplish in a region where education is a privilege for the few. Therefore, AI has the potential to exacerbate the gap between classes, and the ongoing conflict between them.
Latin American Strategies to Face AI-Cloud-MNCs

Given these stakes, Latin American states have important decisions to make with regard to how they interact with foreign AI-Cloud-MNCs. In a contribution discussing data, growth, and development, Weber53 identified four strategies that states could follow to promote growth via transnational data value chains: (a) linking with U.S. firms; (b) linking with Chinese firms; (c) a mix of links with (a) and (b); or (d) creating independent data value chains. Although Weber devised these categories for developed but data-dependent countries, such as European ones, they are still useful for examining the emerging patterns of engagement between Latin American states and AI-Cloud-MNCs. These analytical categories can, however, be complemented by a historical materialist approach to offer better insights into specific case studies and non-state actors. Along these lines, this section applies and extends Weber's approach to Latin America. U.S.-based AI-Cloud-MNCs have been operating in most countries for years; thus, the default option for states is to engage with them to accelerate the process of digitalization. Only those countries with sharp geopolitical rivalries with the U.S., such as Cuba and Venezuela, are linking exclusively with Chinese firms. For instance, Cuba has signed agreements with China to create a joint center for AI, while President Maduro has stated that Venezuela will roll out the country's future 5G network with Chinese firms. Yet the ongoing polarization in Venezuela, a product of
class struggle, may alter the situation if U.S.-backed opposition forces come to power. Despite their potential to incorporate Latin American countries into the AI revolution, both strategies are double-edged swords, because they tie local actors' access to AI to a relation of dependence on foreign corporations. Indeed, foreign AI-Cloud-MNCs, as key nodes in the provision of AI services, seem to be the ones who benefit the most from the promised AI revolution, rather than local firms, since they are becoming providers to most of them. Furthermore, the digital infrastructure of local firms will depend on the policies of the foreign AI-Cloud-MNCs' home countries, making them vulnerable to arbitrary changes, espionage, and other cyber risks. The dependence will be even more problematic should AI-Cloud-MNCs become indispensable for the provision of AI-based services in the public sector and the military, which might cause a hard-to-reverse loss of digital sovereignty. For these reasons, the challenges identified by Weber54 for devising a leapfrogging development strategy based on foreign AI-Cloud-MNCs from just one country seem considerable. In order to offset such excessive dependence, it would seem reasonable for states to diversify links with AI-Cloud-MNCs from multiple home countries. Understandably, the current trend is a hedging strategy between U.S.-based and China-based AI-Cloud-MNCs. Take the case of Chile, which, despite being a close U.S. strategic ally, has joined the BRI, aiming to position the nation at the top of the regional interstate competition over which country benefits the most from trading with China. The strategy has attracted investments by Huawei, which opened a new data center in the capital. Illustrating these points, during the announcement of the investment, the director of Chile's agency in charge of attracting foreign investment said:

Huawei's investment to offer its public cloud in Chile to cover Chile and the rest of Latin America reinforces our country's position as a digital hub and leader in the region in technological and infrastructure transformation […] We are currently living with the Artificial Intelligence, so we have to be well prepared as a country and prepare our infrastructure for the work of the future.55
However, the unraveling of the bilateral relationship between the U.S. and China may put such an approach under pressure. As previously explained, the Trump administration's Clean Network program has sought to
persuade other states to ban Chinese technology firms. However, these efforts have not been entirely successful in Latin America, since only three countries subscribed to the program. Therefore, although there may be limits to the hedging strategy, the large number of countries resisting U.S. pressure indicates that many other factors shape national policies toward Chinese AI-Cloud-MNCs. These include a country's historical relationship with the U.S.; the balance between different social forces, such as the sectors with strong economic links to China and the U.S., the military's strategic position, the president's views, and the increase in anti-American sentiment after Trump's arrival to the presidency; and national AI capabilities. Techno-nationalist policies are feasible in Latin American countries with enough scale and policy autonomy to support national data companies and massively promote national research in AI, the cloud, and other digital technologies. Yet this seems a far-fetched option even in the countries with the historical experience of industrialization most likely to apply them, namely Argentina, Brazil, and Mexico. After all, AI is not the main priority of the current crisis-laden administrations, and the capabilities gap with leading AI-Cloud-MNCs is significant. Nonetheless, given the history of resource nationalism in the region, the introduction of new regulations demanding data localization, the promotion of national AI industries, and other measures that strengthen national AI endeavors cannot be ruled out in the future. There are other options that Latin American countries could explore to balance such an uneven and combined development process. First, there are different mechanisms for regionalism, such as Mercosur, UNASUR, and CELAC. Although recent years have seen a weakening of these integration processes, it should not be precluded that, if political alliances converge in the future, digitalization could be incorporated into the regional agenda in order to harmonize an approach that balances the excessive asymmetry with extra-regional actors. Second, Latin American states could choose to converge toward the norms that the European Union is developing to regulate AI and the cloud. This may increase regional states' leverage to regulate Chinese and U.S.-based AI-Cloud-MNCs and curtail the negative impacts of these digital technologies. Third, Latin American states could increase their support for and participation in the multilateral initiatives launched by the United Nations to regulate new emerging technologies.
Finally, AI-Cloud-MNCs have been employing their resources to construct links with different organizations that legitimize their strategies. By contrast, there is only a nascent group of civil society organizations attempting a war of position that could challenge the discursive power and alliances of these corporations, pressuring them to develop AI technologies along genuinely more sustainable pathways. Latin American countries have a long record of popular mobilization, which could be activated if the worst fears about AI start becoming a reality, or if demands for data sovereignty increase. Although this is not yet the case, the scenario is likely in light of emerging evidence that the so-called ethical approach to AI advanced by U.S. MNCs is a coordinated strategy to ensure that the technology is not over-regulated, safeguarding their profits.56
Conclusions

The spread of AI to Latin America depends on foreign MNCs offering such services through their cloud infrastructure in the region. Contrary to those who foresee either a new digital dependence or a future of endless opportunities and sustainable development based on AI, this chapter has argued that foreign AI-Cloud-MNCs may exacerbate patterns of uneven and combined development in Latin America. By analyzing the strategies of the main firms operating in the region, namely Amazon, Microsoft, and Huawei, this chapter has argued that the companies are in a war of position to dominate the cloud and AI market, attempting to become the key partners that help private and government actors alike to spur digitalization. Although these firms have considerable market and organizational power, they differ in their discursive strategies. Microsoft actively advocates the ethical use of AI, though in terms convenient for the company, whereas Amazon has shunned such discussions, preferring to extend and preserve its market advantage. As for Huawei, it bets on outsmarting the rest with its rollout of 5G and by spreading a development-oriented discourse on AI. Yet all the firms primarily target the most resourceful actors in the region, casting doubt on claims about the shared benefits of AI. Except for Cuba and Venezuela, most Latin American countries are trying to engage with companies from both China and the U.S. Thus, no firm yet enjoys hegemony, and the decoupling of the Internet that so many Western analysts predict and fear seems unlikely. This hedging
strategy seems rational, given the region's science, technology, and innovation (STI) deficits and the uncertainty over how the AI race between China and the U.S. may turn out. However, it may not be enough to reverse the already troublesome patterns of uneven and combined development in Latin America that AI and the cloud may deepen. Therefore, more carefully thought-out national AI and cloud development policies seem urgently needed to broaden the beneficiaries of this new wave of digitalization.
Notes

1. Johnson, "Artificial Intelligence & Future Warfare"; European Commission, "Artificial Intelligence for Europe"; Hoadley and Sayler, "Artificial Intelligence and National Security"; Lee, AI Superpowers; Scott, Heumann, and Lorenz, "Artificial Intelligence and Foreign Policy"; Tinnirello, "Offensive Realism and the Insecure Structure of the International System."
2. Chollet, Deep Learning with Python.
3. The specific terminology employed is Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS).
4. Mosco, To the Cloud.
5. With this term I refer to technology MNCs whose business models depend on providing AI and cloud infrastructure to other state and non-state actors.
6. Vila Seoane and Saguier, "Cyberpolitics and IPE: Towards a Research Agenda in the Global South."
7. According to Gartner, "Worldwide IaaS Public Cloud Services Market Grew 31.3% in 2018," in 2018 the Infrastructure-as-a-Service cloud market was dominated by the following firms: AWS (47.8%), Microsoft Azure (15.5%), Alibaba (7.7%), Google (4%), and IBM (1.8%).
8. A military metaphor used by Gramsci for how subaltern groups could use different sources of power and alliances to beat more powerful adversaries; see Levy and Newell, "Business Strategy and International Environmental Governance." Here I adapt it to the strategies of business firms.
9. Levy and Newell.
10. Fagre and Wells, "Bargaining Power of Multinationals and Host Governments."
11. Bieler and Morton, Global Capitalism, Global War, Global Crisis, 6.
12. Bieler and Morton, 37.
13. Bieler and Morton, 38.
14. Bieler and Morton, 39.
15. Bieler and Morton, 127.
16. Bieler and Morton, 52.
17. Bieler and Morton, 74.
18. Trotsky, The History of the Russian Revolution. Volume I.
19. Rosenberg, "Basic Problems in the Theory of Uneven and Combined Development. Part II."
20. Bieler and Morton, Global Capitalism, Global War, Global Crisis; Kiely, "Spatial Hierarchy and/or Contemporary Geopolitics"; D'Costa, "Uneven and Combined Development."
21. RICYT, "El Estado de la Ciencia: Principales indicadores de ciencia y tecnología Iberoamericanos."
22. RICYT.
23. RICYT.
24. CEPAL, "Iniciativa China de La Franja y La Ruta Es Una Oportunidad Para Inversiones Inclusivas y Sostenibles: CEPAL."
25. Bousquet, "Celac Driving Latin America and the Caribbean Along the New Silk Road Route!"
26. Pompeo, "Communist China and the Free World's Future."
27. Ruckert, Macdonald, and Proulx, "Post-Neoliberalism in Latin America."
28. Deciancio and Dalponte.
29. White House, "Artificial Intelligence for the American People."
30. Stolton, "Avoid Heavy AI Regulation, White House Tells EU."
31. State Council of the People's Republic of China, "Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan."
32. Barr, "Now Open—South America (Sao Paulo) Region—EC2, S3, and Much More."
33. For more details on the relation between AI and surveillance, see Perez-Des Rosiers, Chapter 6 in this book.
34. Punke, "Some Thoughts on Facial Recognition Legislation."
35. Palacios and Orellana, "Corfo Busca Seducir a Amazon Con Subsidio Para Data Center de US$1.000 Millones."
36. Do Rosario and Soper, "Amazon Plans $800 Million Data Center in Argentina."
37. Microsoft, "The Future Computed: Artificial Intelligence and Its Role in Society."
38. Microsoft, "Democratizar La IA, Center LATAM."
39. Albrieu et al., "Inteligencia Artificial y Crecimiento Económico. Oportunidades y Desafíos Para México"; Microsoft, "Futuro Del Trabajo: En Los Próximos Diez Años, Argentina Podría Tener Un 56% de Empleo Calificado Si Maximizara La Adopción de Inteligencia Artificial."
40. LIIA, "Sobre La Predicción Automática de Embarazos Adolescentes."
41. Huawei, "Annual Report."
42. Fan, "Innovation, Globalization, and Catch-Up of Latecomers."
43. Fu, Sun, and Ghauri, "Reverse Knowledge Acquisition in Emerging Market MNEs."
44. Huawei, "Annual Report."
45. Huawei Cloud, "HUAWEI CLOUD Empowers Digital Transformation of Industries in Latin America with New Chile Region."
46. Huawei GSPO Office, "Thinking Ahead About AI Security and Privacy Protection: Protecting Personal Data & Advancing Technology Capabilities," 30.
47. Huawei, "Annual Report," 3.
48. U.S. Department of State, "The Clean Network."
49. La República, "Huawei Sobre Perú: 'Sacar Provecho de Sus Recursos Con Inteligencia Artificial Para Generar Más Bonanza.'"
50. Fan, Wan, and Lu, "China's Regional Inequality in Innovation Capability, 1995–2006"; Iammarino, Rodriguez-Pose, and Storper, "Regional Inequality in Europe."
51. Salazar-Xirinachs and Chacaltana, "Políticas de Formalización En América Latina: Avances y Desafíos."
52. Bieler and Morton, Global Capitalism, Global War, Global Crisis.
53. Weber, "Data, Development, and Growth."
54. Weber.
55. Huawei Cloud, "HUAWEI CLOUD Empowers Digital Transformation of Industries in Latin America with New Chile Region."
56. Ochigame, "The Invention of 'Ethical AI': How Big Tech Manipulates Academia to Avoid Regulation."
References

Albrieu, Ramiro, Martín Rapetti, Caterina Brest López, Patricio Larroulet, and Alejo Sorrentino. "Inteligencia Artificial y Crecimiento Económico. Oportunidades y Desafíos Para México." Inteligencia Artificial y Crecimiento Económico En América Latina. Buenos Aires, Argentina: CIPPEC, 2018. https://news.microsoft.com/uploads/prod/sites/41/2018/11/IA-y-Crecimiento-MEXICO.pdf.
Barr, Jeff. "Now Open—South America (Sao Paulo) Region—EC2, S3, and Much More." AWS Blog, December 14, 2011. https://aws.amazon.com/blogs/aws/now-open-south-america-sao-paulo-region-ec2-s3-and-lots-more/.
Bieler, Andreas, and Adam David Morton. Global Capitalism, Global War, Global Crisis, 2018.
Bousquet, Earl. "Celac Driving Latin America and the Caribbean Along the New Silk Road Route!" Telesur English, January 29, 2018. https://www.telesurenglish.net/opinion/Celac-Driving-Latin-America-and-the-Caribbean-Along-the-New-Silk-Road-Route-20180129-0007.html.
CEPAL. "Iniciativa China de La Franja y La Ruta Es Una Oportunidad Para Inversiones Inclusivas y Sostenibles: CEPAL." Comisión Económica para América Latina y el Caribe, December 7, 2018. https://www.cepal.org/es/noticias/iniciativa-china-la-franja-la-ruta-es-oportunidad-inversiones-inclusivas-sostenibles-cepal.
Chollet, François. Deep Learning with Python. Shelter Island, NY: Manning Publications Co, 2018.
D'Costa, Anthony P. "Uneven and Combined Development: Understanding India's Software Exports." World Development 31, no. 1 (January 2003): 211–26. https://doi.org/10.1016/S0305-750X(02)00182-1.
Deciancio, Melisa, and Bruno Dalponte. In The Future of U.S. Empire in the Americas: The Trump Administration and Beyond, edited by Timothy M. Gill, 328–50. New York, USA: Routledge, 2020.
Do Rosario, Jorgelina, and Spencer Soper. "Amazon Plans $800 Million Data Center in Argentina." Bloomberg, October 3, 2019. https://www.bloomberg.com/news/articles/2019-10-03/amazon-web-services-poised-to-build-a-data-center-in-argentina.
European Commission. "Artificial Intelligence for Europe." Brussels, Belgium, 2018. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51625.
Fagre, N., and L. T. Wells. "Bargaining Power of Multinationals and Host Governments." Journal of International Business Studies 13, no. 2 (1982): 9–23.
Fan, Peilei. "Innovation, Globalization, and Catch-Up of Latecomers: Cases of Chinese Telecom Firms." Environment and Planning A: Economy and Space 43, no. 4 (April 2011): 830–49. https://doi.org/10.1068/a43152.
Fan, Peilei, Guanghua Wan, and Ming Lu. "China's Regional Inequality in Innovation Capability, 1995–2006." China & World Economy 20, no. 3 (May 2012): 16–36. https://doi.org/10.1111/j.1749-124X.2012.01285.x.
Fu, Xiaolan, Zhongjuan Sun, and Pervez N. Ghauri. "Reverse Knowledge Acquisition in Emerging Market MNEs: The Experiences of Huawei and ZTE." Journal of Business Research 93 (December 2018): 202–15. https://doi.org/10.1016/j.jbusres.2018.04.022.
Gartner. "Worldwide IaaS Public Cloud Services Market Grew 31.3% in 2018," 2019. https://www.gartner.com/en/newsroom/press-releases/2019-07-29-gartner-says-worldwide-iaas-public-cloud-services-market-grew-31point3-percent-in-2018.
Hoadley, Daniel S., and Kelley M. Sayler. "Artificial Intelligence and National Security." Congressional Research Service, 2019. https://fas.org/sgp/crs/natsec/R45178.pdf.
Huawei. "Annual Report." Shenzhen, China, 2018. https://www.huawei.com/en/press-events/annual-report.
Huawei Cloud. "HUAWEI CLOUD Empowers Digital Transformation of Industries in Latin America with New Chile Region," 2019. https://www.huaweicloud.com/intl/en-us/news/huawei-cloud-empowers-digital-transformation-of-industries-in-la.html.
Huawei GSPO Office. "Thinking Ahead About AI Security and Privacy Protection: Protecting Personal Data & Advancing Technology Capabilities." Shenzhen, China: Huawei, 2019. https://www-file.huawei.com/-/media/CORPORATE/PDF/trust-center/Huawei_AI_Security_and_Privacy_Protection_White_Paper_en.pdf.
Iammarino, Simona, Andrés Rodriguez-Pose, and Michael Storper. "Regional Inequality in Europe: Evidence, Theory and Policy Implications." Journal of Economic Geography 19, no. 2 (March 1, 2019): 273–98. https://doi.org/10.1093/jeg/lby021.
Johnson, James. "Artificial Intelligence & Future Warfare: Implications for International Security." Defense & Security Analysis 35, no. 2 (April 3, 2019): 147–69. https://doi.org/10.1080/14751798.2019.1600800.
Kiely, Ray. "Spatial Hierarchy and/or Contemporary Geopolitics: What Can and Can't Uneven and Combined Development Explain?" Cambridge Review of International Affairs 25, no. 2 (June 2012): 231–48. https://doi.org/10.1080/09557571.2012.678299.
La República. "Huawei Sobre Perú: 'Sacar Provecho de Sus Recursos Con Inteligencia Artificial Para Generar Más Bonanza,'" 2019. https://larepublica.pe/economia/2019/12/20/huawei-sobre-peru-sacar-provecho-de-sus-recursos-con-inteligencia-artificial-para-generar-mas-bonanza-ren-zhengfei-google-guerra-comercial/.
Lee, Kai-Fu. AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt, 2018.
Levy, David L., and Peter J. Newell. "Business Strategy and International Environmental Governance: Toward a Neo-Gramscian Synthesis." Global Environmental Politics 2, no. 4 (November 2002): 84–101. https://doi.org/10.1162/152638002320980632.
LIIA. "Sobre La Predicción Automática de Embarazos Adolescentes." Laboratorio de Inteligencia Artificial Aplicada, 2018. https://liaa.dc.uba.ar/es/sobre-la-prediccion-automatica-de-embarazos-adolescentes/.
Microsoft. "Democratizar La IA, Center LATAM," 2016. https://news.microsoft.com/es-xl/features/democratizar-la-ia/.
———. "Futuro Del Trabajo: En Los Próximos Diez Años, Argentina Podría Tener Un 56% de Empleo Calificado Si Maximizara La Adopción de Inteligencia Artificial." News Center Latinoamérica, December 3, 2019. https://news.microsoft.com/es-xl/futuro-del-trabajo-en-los-proximos-diez-anos-argentina-podria-tener-un-56-de-empleo-calificado-si-maximizara-la-adopcion-de-inteligencia-artificial/.
———. "The Future Computed: Artificial Intelligence and Its Role in Society." Redmond, WA, 2018. https://news.microsoft.com/cloudforgood/_media/downloads/the-future-computed-english.pdf.
Mosco, Vincent. To the Cloud: Big Data in a Turbulent World. Boulder: Paradigm Publishers, 2014.
Ochigame, Rodrigo. "The Invention of 'Ethical AI': How Big Tech Manipulates Academia to Avoid Regulation." The Intercept, December 20, 2019. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/.
Palacios, J., and G. Orellana. "Corfo Busca Seducir a Amazon Con Subsidio Para Data Center de US$1.000 Millones." La Tercera, April 19, 2018. https://www.latercera.com/pulso/corfo-busca-seducir-amazon-subsidio-data-center-us1-000-millones/.
Pompeo, Michael R. "Communist China and the Free World's Future." U.S. Department of State, July 23, 2020. https://www.state.gov/communist-china-and-the-free-worlds-future/.
Punke, Michael. "Some Thoughts on Facial Recognition Legislation." AWS Machine Learning Blog, February 7, 2019. https://aws.amazon.com/blogs/machine-learning/some-thoughts-on-facial-recognition-legislation/.
RICYT. "El Estado de la Ciencia: Principales indicadores de ciencia y tecnología Iberoamericanos." Buenos Aires, Argentina, 2019. https://www.ricyt.org/wp-content/uploads/2019/10/edlc2019.pdf.
Rosenberg, Justin. "Basic Problems in the Theory of Uneven and Combined Development. Part II: Unevenness and Political Multiplicity." Cambridge Review of International Affairs 23, no. 1 (March 2010): 165–89. https://doi.org/10.1080/09557570903524270.
Ruckert, Arne, Laura Macdonald, and Kristina R. Proulx. "Post-Neoliberalism in Latin America: A Conceptual Review." Third World Quarterly 38, no. 7 (July 3, 2017): 1583–1602. https://doi.org/10.1080/01436597.2016.1259558.
Salazar-Xirinachs, José Manuel, and Juan Chacaltana. "Políticas de Formalización En América Latina: Avances y Desafíos." Lima, Perú: Organización Internacional del Trabajo (ILO), Regional Office for Latin America and the Caribbean, 2018. https://www.ilo.org/wcmsp5/groups/public/---americas/---ro-lima/documents/publication/wcms_645159.pdf.
Scott, Ben, Stefan Heumann, and Philippe Lorenz. "Artificial Intelligence and Foreign Policy." Berlin, Germany: Stiftung Neue Verantwortung, 2018. https://www.stiftung-nv.de/sites/default/files/ai_foreign_policy.pdf.
State Council of the People's Republic of China. "Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan," 2017. https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf.
Stolton, Samuel. "Avoid Heavy AI Regulation, White House Tells EU." Euractiv, January 7, 2020. https://www.euractiv.com/section/digital/news/avoid-heavy-ai-regulation-white-house-tells-eu/.
Tinnirello, Maurizio. "Offensive Realism and the Insecure Structure of the International System." In Artificial Intelligence Safety and Security, edited by Roman V. Yampolskiy, 339–57. New York, USA: Chapman and Hall/CRC, 2018.
Trotsky, Leon. The History of the Russian Revolution. Volume I. Translated by Max Eastman. Marxists Internet Archive, 1932.
U.S. Department of State. "The Clean Network," 2020. https://www.state.gov/the-clean-network/.
Vila Seoane, Maximiliano Facundo, and Marcelo Saguier. "Cyberpolitics and IPE: Towards a Research Agenda in the Global South." In Routledge Handbook of Global Political Economy: Conversations and Inquiries. Routledge, 2020.
Weber, Steven. "Data, Development, and Growth." Business and Politics 19, no. 3 (September 2017): 397–423. https://doi.org/10.1017/bap.2017.3.
White House. "Artificial Intelligence for the American People," 2019. https://www.whitehouse.gov/ai/.
CHAPTER 6
AI Application in Surveillance for Public Safety: Adverse Risks for Contemporary Societies

David Perez-Des Rosiers
The world is currently undergoing a technological revolution with the rapid development and implementation of artificial intelligence (AI) in every sector. Multiple governments see such technological innovation as an economic motor, leading to increased investments in research and application of AI-driven platforms and devices, but also as a way to monitor the population more efficiently. At the same time, tech companies such as the GAFAM (Google, Apple, Facebook, Amazon, Microsoft), the BAT (Baidu, Alibaba, Tencent), and others have been leaders in the research, development, and implementation of AI platforms in their services and products, positioning them at the forefront of data access and use. Unsurprisingly, AI has been increasingly applied in surveillance in recent years, especially following September 11, 2001. New capabilities in surveillance are facilitated by the broad implementation of AI technologies, large-scale data collection, the digitalization of services, and the affordable
costs of devices. Many scholars have studied the relation between technology, societies, and surveillance for public security, and its impacts.1 As knowledge of AI's role and evolution in surveillance deepens, this chapter evaluates some of the social risks associated with AI-enabled surveillance for public safety. Interdisciplinary elements have been integrated into the analysis of risks to offer a comprehensive representation. Based on a literature review, the chapter examines prominent theories on technology in societies, contextualizes surveillance employed for public safety, and illustrates the current applications of AI in surveillance. From the literature review, risks associated with AI-enabled surveillance for public safety are extracted. Those risks are examined individually to reflect their social impacts, followed by an integrated analysis. The framework of analysis revolves around a philosophical understanding of AI implementation in surveillance, integrating insights from different fields of study such as political science, psychology, sociology, and criminology.
Technology in Societies

In order to contextualize the role of surveillance and technology in society, it is important to define a theoretical framework of analysis. There are multiple perspectives on the impact of technology in society, with diverging and converging arguments. The two prominent theories on the subject are technological determinism and the social construction of technology (SCOT).2 Technological determinists view technology as a metanarrative that has been incorporated into Western industrialized societies since the first industrial revolution. Determinist interpretations rest on two main ideas: first, the development of technology follows a predictable, traceable path beyond cultural or political influence; and second, technology affects societies.3 Technological determinism is characterized by an anxious vision of technological development being applied to every sphere of life, leading societies to depend heavily on it. Technology is represented as a guiding force in our evolution that cannot be stopped, making it a driver of social change and positioning people as powerless actors. It supposes that technology would make humans passive, purposeless, and machine-conditioned, limiting their role to submissive compliance. Ellul claimed that technology identifies, through an autonomous process, the social aspects that best fit its progress.4 The biggest
weakness of the determinists is their incapacity to explain what drives technological evolution.5 Another criticism is that technology and society cannot be reduced to a simple cause-and-effect relation.6 In answer to these criticisms, SCOT proposes that innovation and its social consequences are mainly shaped by society in a dynamic interplay with cultural, political, and economic settings. Technology is conceptualized more dynamically than in determinism, rejecting the idea of an autonomous entity with its own unique rationality. Instead, SCOT views humans as important actors in the process of shaping technology. This theory opposes the idea of a path already set in technological development, since similar technologies emerge in a defined period, offering different choices. Technology cannot be neutral, as it promotes the interests of specific groups to the detriment of others. The implementation of large socio-technical systems is closely related to political dynamics.7 Technology is a social construction interacting with other social forces. There is a dichotomy between the social system's values and the morality promoting technological development, as the groups leading technological progress aim to increase their power. This chapter mainly uses a social constructivist lens to theorize AI as a socio-technical system that dynamically redefines social structures, and it conceptualizes the risks discussed here through that system of dynamic redefinition. Technological development has become a dominant question in national and global politics, influencing issues of power.8 Technology, and most precisely AI, is a central factor in the military, economic development, national security, transportation, and surveillance. Market competition, international politics, and public perception influence the direction of its implementation. The race between states closely shapes the development of AI through corporate profit, reflexive public opinion, researchers' ethics and values, national wealth, national security, international agreements, and enlightened human interests.9 The responsibility of societies in technological development should not be put aside, since its implementation is also favored by the extensive use citizens make of it through different software, systems, and tools. Fritsch recalls that actors and technological systems influence each other, leading to innovation and diffusion.10 That dynamic relation between society and technological development is applicable to the central position of AI in multiple sectors of society. Even if they disagree on the dynamic aspects and
the static, predetermined position of technology, both theories described previously recognize its ability to influence, redefine, and shape societies, making any alteration to its evolution difficult. In regard to surveillance, AI software, programs, and tools are accelerating and strengthening the process of social control. Actors involved in setting surveillance standards are using strategies to promote and apply AI. The public is not always involved and represented in decisions regarding technological advancement, as it is largely ruled by governments and transnational corporations. While technology can be considered a neutral entity, the use made of it by specific entities in social processes is not. The relationship between society and AI development, and the role of powerful actors leading AI implementation, is described in this paper through the lens of SCOT. The analysis focuses on the risks associated with the social impacts that AI-enabled surveillance has, or can have, on society.
AI in Surveillance

The current section defines the concept of surveillance by identifying the goals pursued in surveillance as well as the ways to achieve them. It also explains how AI is applied in surveillance, more precisely in video and digital surveillance.

What Is Surveillance?

Surveillance can be understood as the systemic observation of individuals, groups, or spaces by visual, auditory, photographic, and electronic means to monitor behaviors, activities, or information for specific purposes such as influencing, managing, or directing. It is done in several ways to target one or a set of goals. These goals aim to normalize a population and eliminate abnormal or undesirable behaviors from society.11 The role of surveillance is to ensure conformity with societal values and to regulate challenges or changes to this set of values. Social values determine the standards that are deemed important for social stability. This leads different societies to apply, accept, and challenge the means of surveillance differently. Traditional surveillance relied on unaided senses and was found in preindustrial societies as a local and compartmentalized instrument. With new technologies and devices, surveillance became more centralized, efficient, and broader. The new surveillance technologies apply to everyone and are defined as "scrutiny of individuals, groups
and contexts through the use of technical means to extract or create information."12 It is important to identify the goals of surveillance, which are sometimes hidden or unclear. Among the goals for collecting personal information in surveillance for public safety are compliance, verification, discovery, prevention, and protection; there are also components related to documentation, profit, and strategic advantage. While some new ways of surveillance enabled by technologies turn out to be more ubiquitous and apparent, others have become less invasive. This can be explained by the idea that even if surveillance devices, such as cameras, can be located, they can act as less repressive and invasive than traditional surveillance involving direct intervention. The dehumanization of surveillance and the absence of hierarchical interaction with the device represent a partial explanation for that process. Moreover, computers, telecommunication devices, online platforms, and other contemporary technologies often mix surveillance with communication functions, making it less perceptible. These devices have become part of people's daily routines, and constant exposure to them makes the benefits they provide seem greater than the consequences of data collection. Digital surveillance is very subtle and non-material for citizens, as its ability to record digital, tracking, visual, and auditory information is integrated into routine activities. In terms of actors, even if surveillance is a dynamic process under which mutual surveillance can occur, it is carried out in large part by governmental, judicial, and corporate entities. The line between private and public ownership of contemporary technological tools remains unclear. As mentioned, the consumer is not directly involved in the development of surveillance technology, which makes the consumer more a reactive actor toward it. Surveillance has become so ubiquitous that few people challenge its legitimacy and efficiency.13 It is important to recognize that consumer behaviors will also shape how it is applied. The information collected and the usage made of it by different entities is becoming more public, but remains very opaque, raising a web of concerns related to the new surveillance.

Application of AI in Surveillance

Contemporary surveillance is favored by the implementation of AI. AI is not related to a specific set of technologies, as it is at the core of diversified techniques such as machine learning, robotization, and data science. Machine learning is a statistical process that analyzes a large amount of data to discern patterns, acting as an explanatory and predictive tool.14
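To make this definition concrete, the sketch below shows, in the spirit of the predictive analytics described here and in the smart policing systems discussed below, how a statistical model is fitted to past observations and then used to score new cases. It is a minimal illustration using the scikit-learn library; all data are synthetic and the feature interpretations are hypothetical.

    # A minimal sketch of machine learning as an explanatory and predictive
    # tool: fit a statistical model to labeled historical records, then use
    # the learned pattern to predict new cases. Data are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # 1,000 synthetic records with two numeric features each (think: hour of
    # day and foot traffic) and a binary outcome label.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)   # discern the pattern
    print("held-out accuracy:", model.score(X_test, y_test))
    print("prediction for a new case:", model.predict(X_test[:1]))

The point of the sketch is that the "pattern" is nothing more than fitted parameters: the model's predictions are only as sound as the historical data behind them, which is why the quality and provenance of the underlying databases recur as concerns throughout this chapter.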
The AI programs currently used in surveillance give meaning to data. Facial recognition, camera surveillance, location tracking, and platforms for flagging suspicious comments are AI-based tools related to public safety, used to prevent crime, identify suspicious behaviors, and promote social harmony. The AI Global Surveillance (AIGS) index identifies AI technologies that directly support surveillance objectives, such as smart city platforms, facial recognition systems, and smart policing systems.15 Smart cities are those using sensors that transmit real-time data, easing local services, management, and public safety. Among the technologies used are integrated sensors, facial recognition cameras, police body cameras, and rapid telecommunication networks. Facial recognition systems are biometric technologies that capture, store, and match images or videos against databases. Smart policing is a data-driven analytic technology used to facilitate investigations and police interventions, using algorithmic analysis to predict crime. It consists of automated platforms that can use the data collected from different sources to fine-tune the collection of individual information. However, those technologies need other enabling technologies to realize their surveillance potential, such as cloud computing. The Internet of Things (IoT) connects different devices to the Internet, allowing data collection for analytic processing in the cloud. The application of AI-driven surveillance is often associated with authoritarian governments using it for repression. This argument often comes with ideas promoting democratic values and liberalism. Yet identifying one specific political system as the threat in this field does not reflect the technology's global reach, as 75 of the 176 countries covered by the AIGS index are actively using AI technologies for surveillance purposes, spanning all types of political systems. Facial recognition systems represent the most widespread application of AI surveillance in terms of the number of countries. Technological development and its application in surveillance have been implemented by different governments as tools for social control. This paper questions the association of AI surveillance with a specific political system, since the challenges identified are applicable to the majority of countries applying it. However, it acknowledges that countries differ in their application of such tools, with some being more intrusive, less transparent, and less ethical. AI is transforming the process by which governments carry out surveillance for national security and public safety.
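As an illustration of how the facial recognition systems described above match captured images against databases, the sketch below implements only the comparison step: a probe face embedding is scored against enrolled embeddings using cosine similarity. In deployed systems the embeddings come from deep neural networks trained on face images; here they are random stand-ins, and the match threshold is purely illustrative.

    # A minimal sketch of the matching step in a facial recognition system:
    # compare a probe embedding against a database of enrolled embeddings.
    # Real systems derive these vectors from deep neural networks; random
    # vectors stand in for them here.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)
    database = {f"person_{i}": rng.normal(size=128) for i in range(100)}  # enrolled faces
    probe = rng.normal(size=128)  # embedding of a face captured by a camera

    best_id, best_score = max(
        ((pid, cosine_similarity(probe, emb)) for pid, emb in database.items()),
        key=lambda item: item[1],
    )

    # The threshold trades false matches against missed matches; the unequal
    # error rates criticized in the surveillance literature hinge on choices
    # like this one and on the training data behind the embeddings.
    THRESHOLD = 0.5  # illustrative value only
    print(best_id if best_score >= THRESHOLD else "no match", round(best_score, 3))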
Concerning AI-driven surveillance, its application is often associated with authoritarian governments using it for repression. This argument often comes with ideas promoting democratic values and liberalism. Yet identifying one specific political system as the threat in this field doesn’t capture the global reach of the phenomenon: according to the AIGS index, at least 75 of the 176 countries surveyed are actively using AI technologies for surveillance purposes, spanning all types of political systems. Facial recognition systems represent the most widespread application of AI surveillance in terms of number of countries. Technological development and its application in surveillance have been implemented by different governments as a tool for social control. This chapter questions the association of AI surveillance with a specific political system, since the challenges identified apply to the majority of countries using it. However, it acknowledges that countries differ in their application of such tools, with some being more intrusive, less transparent and less ethical. AI is transforming the process by which governments carry out surveillance for national security and public safety. Security in the context of state security refers to the protection of the state, its people, institutions and values.16 State security is established by the political power, which identifies adequate social values and norms while designating concerning behaviors as threats. This is related to the notion of eliminating undesirable behaviors. The use of AI surveillance technology by governments is difficult to pin down specifically, due to the opacity of its use in platform systems and, in certain cases, its access to privately owned data.17 AI systems are integrated into every facet of personal life, guiding social media feeds, powering smartphones and driving banking systems, to name a few. Devices and social media platforms access personal data that can drive an algorithm toward precise assumptions about, and predictions of, behaviors. Surveillance with AI thus represents a discreet but intrusive force, collecting large amounts of personal data from a large proportion of the population to analyze social movements and predict outcomes of elections or other specific events. Those platforms can be used in social safety, as data can be sold to security entities and information can be regulated, leading to social influence by the companies owning those data. This raises arguments about the importance of ethical AI in social issues, especially regarding privacy, trust, fairness, transparency and accountability. However, this chapter doesn’t aim to define the ethical framework of AI; rather, the analysis adopts a moral approach to AI application based on a philosophical perspective that includes interdisciplinary empirical research. This is in accordance with Gilliom’s suggestion that the conversation should move away from law and the defense of rights to discuss the issues of power and domination.18 This aspect represents a core component of the analysis of this chapter.

Video Surveillance

Video surveillance can be preventive, encouraging someone to behave in a required way, or repressive, taking action against undesirable behavior. Perceptions of surveillance were mainly negative in the 1970s due to its lack of benefits but changed in the 1990s, when it became perceived as a convenient and positive safety tool.19 After September 11, 2001, video surveillance with facial recognition software became an important tool in US surveillance to prevent terrorism. Today, facial recognition is often integrated into video surveillance systems, which also implement other recognition tools targeting specific things such as license plates, physical features and temperature.
Facial recognition in video surveillance is used for crime prevention in public safety, homeland security and traffic management. In the case of the Royal Malaysia Police Cooperative, facial recognition is promoted as a step forward in improving public safety.20 Other countries have also implemented facial recognition systems to improve social order. Associated with facial recognition and other video surveillance tools has been the implementation of legal automation. Automation is a system that functions with no or limited human operator involvement; red light cameras are a good example of robotic law enforcement in public security. It is relevant to mention that facial recognition and other tools associated with video surveillance aren’t only applied in surveillance but also in public services. In certain countries, regulations have been put in place for video surveillance. An example comes from the PFPDT, which identifies four criteria for public bodies: good faith, proportionality, finality and legality.21 Other requirements are that people need to agree to be filmed, cameras must be visible, filming must be strictly limited to the material needed and it must serve specific tasks. These principles don’t apply to private institutions, which instead need to meet the principle of a sufficient legal basis. The argument that the greater good outweighs individual privacy isn’t strongly supported in current applications, as exposed in the Risks of AI-Enabled Surveillance section below. Video surveillance is a socio-technological tool that can’t be separated from either its social or its technological domain. The socio-technical understanding of video surveillance rejects the idea that it is an inert object; instead, it is a system continually building itself. Control has been transferred from people in a physical place to a distant system that converts information into an abstract space. AI is part of an indiscriminate video surveillance that doesn’t target a specific citizen but monitors people within a space. Foucault’s theoretical model of the Panopticon can be used to understand the driving forces of video surveillance.22 A key feature of this theory is omnipresent observation that is visible to the watched but unverifiable. However, that model can’t fully apply to modern surveillance, as it is no longer centralized in the government. Yesil argues that video surveillance is not a neutral tool but the articulation of specific social and political goals.23 AI video surveillance helps those owning or controlling the technology achieve their goals more efficiently: cameras now have intelligent capabilities, with algorithms able to notice certain abnormalities and send the information on directly, allowing a more efficient and broader realm of surveillance.
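The "intelligent camera" capability just described can be illustrated with a minimal motion-flagging loop. The sketch below uses OpenCV background subtraction to flag frames with unusual activity; the video path, pixel threshold and alerting step are illustrative assumptions, and deployed systems use far more sophisticated models.

# Minimal sketch of an "intelligent" camera: learn the static background
# of a scene and flag frames where enough pixels deviate from it.
import cv2

capture = cv2.VideoCapture("camera_feed.mp4")   # hypothetical video source
subtractor = cv2.createBackgroundSubtractorMOG2()
PIXEL_THRESHOLD = 5000  # illustrative: how much change counts as "abnormal"

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # foreground = deviation from background
    changed_pixels = cv2.countNonZero(mask)
    if changed_pixels > PIXEL_THRESHOLD:
        # In a deployed system this is where footage would be forwarded
        # automatically, without a human operator watching.
        print("abnormality flagged:", changed_pixels, "changed pixels")

capture.release()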
The omnipresent character of surveillance enabled by AI video surveillance, and the dichotomy between the watcher and the watched, are further described in the risks identified in this chapter.

Digital Surveillance

Digital surveillance refers to the process by which computer databases are used to store and process personal information. The Internet enables a global network form of surveillance. Web 2.0 refers to the emergence of specific social qualities that are supported by the Internet. The estimated 50 billion connected devices around the world generating data indicate that information is becoming a major tool for development, coordination, persuasion and coercion.24 Surveillance of individuals’ behavior can be achieved through multiple digital networks using AI, such as email, spatial tracking, face recognition, fingerprints and other tools. The fast evolution of these systems allows for more accurate, assertive and broad surveillance of populations. A consequence of digitalization is that criminal and anti-social behavior on the Internet has grown weightier in recent years, including fake news, the propagation of propaganda and the proliferation of terrorism. AI offers great capabilities for the international community, governments and civil society to predict and prevent these threats and crimes.25 Indeed, cybersecurity is a form of surveillance that can be applied to detect people with malicious intent. Even if these means of surveillance can have positive effects for the greater good in public safety, they also come with some perverse aspects, as explained below. Clarke suggests that surveillance through abstract data, also called dataveillance, is less intrusive and threatening than other tools such as cameras.26 This argument was made before the expansion of dataveillance to most technological devices used by a large proportion of the population, and it seems to underestimate the reach of data surveillance: data are being collected in large part by specific entities, such as corporations and governments, to observe and influence citizens. Data are a capital commodity and an economic resource that corporations and governments seek to control, without strict regulations, to maintain constant access. This leads to economic surveillance, as data are now monetized and corporations sell personal data to advertisers.
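Dataveillance works by joining records that are individually innocuous. The toy sketch below, with entirely hypothetical users and events, shows how separate streams (purchases, locations, searches) keyed to one identifier combine into a single behavioral profile of the kind that is monetized.

# Toy illustration of dataveillance: disparate event streams, each
# harmless alone, are keyed to the same identifier and merged into a
# behavioral profile. All users and events here are hypothetical.
from collections import defaultdict

events = [
    ("user_42", "purchase", "pregnancy test"),
    ("user_42", "location", "clinic district, 09:10"),
    ("user_42", "search", "employment rights during pregnancy"),
    ("user_17", "purchase", "running shoes"),
]

profiles = defaultdict(list)
for user_id, stream, detail in events:
    profiles[user_id].append((stream, detail))

# The merged profile reveals far more than any single record did.
for user_id, history in profiles.items():
    print(user_id, "->", history)

Aggregation, not any single observation, is what turns routine data exhaust into an economic and political resource.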
E-government demonstrates the political power’s adaptation of the Internet to monitor its political needs.27 The state’s application of digital networks reveals cooperation with private entities in R&D and in the application of technology. States can also obtain some information from Internet service providers under certain laws, using subpoenas. Andrejevic sees the Internet as a virtual digital enclosure subjecting consumers to commercial and state surveillance.28 Chisnall revives the term digital slavery: the growing sophistication and capabilities of data analytics and AI, and the amount of personal data they draw on, enable unprecedented levels of control and manipulation akin to owning another human being.29 Modern digital networks give political and private clusters increased authority through access to and use of personal data. The different means of AI application in surveillance are now omnipresent and highly controlled by a small group of entities, allowing deeper, more precise and constant tracking in pursuit of their interests. AI offers a more efficient, less labor-intensive and cheaper way of surveilling and repressing the population.
Risks of AI-Enabled Surveillance

The speed of development in AI has now outpaced Moore’s Law, according to the Stanford University AI Index 2019.30 The research found that within 18 months, the time needed to train a network on cloud infrastructure for supervised image recognition fell from three hours to 88 seconds. These results demonstrate the fast pace of development in AI, especially machine learning, which is heavily used in the surveillance systems previously mentioned and can partly explain the extended application of AI in surveillance. While it allows important progress that can benefit people’s lives, it is also increasing the problems and risks discussed in the current section. Surveillance now infuses all aspects of social life and shapes the activities of public institutions, corporations, governments and individuals. The following section identifies some of the main problems, limits and inaccuracies related to the application of AI surveillance systems, software and tools to promote public safety, and describes the social risks associated with these problems.
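As a rough check on that figure: three hours is 10,800 seconds, so the reported improvement amounts to 10,800 / 88 ≈ 123× in roughly 18 months, whereas Moore’s Law implies only about a 2× gain over a comparable period. The comparison is approximate, since the AI Index figure bundles hardware, software and infrastructure improvements rather than transistor density alone.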
Biases in Algorithms

A problem often mentioned in the literature regarding AI algorithms, especially risk assessment tools (RATs), is their biases.31 RATs are surveillance tools used to manage the risk posed by certain individuals or groups in order to maintain social safety. Many RATs use algorithms for decision-making to predict the likelihood of outcomes for offenders. They help rely less on human subjectivity so as to avoid biased assessments. However, that doesn’t make the risk score generated by a computer automatically fair or trustworthy, according to Professor Rudin of Duke University.32 The validity of those tools has been challenged on multiple occasions, as their algorithms can even increase existing biases. Indeed, machine learning-based algorithms can amplify biases by drawing assumptions from their input data in a feedback loop and producing increasingly inaccurate conclusions.33 An important limit is their difficulty in differentiating correlation from causality, which is of major importance in risk assessment. The replication or aggravation of human biases is associated with initial biases in the dataset or with the identification of correlated variables as wrongfully causal. People naturally tend to exaggerate the risk that a person will offend. If such bias is implemented in an AI algorithm, it can lead to negative impacts for many citizens, such as excessive interventions, stigmatization and discrimination. Moreover, the fast development of algorithms can make their decisions difficult to understand, which complicates the identification of biases in their decision-making. Another major concern is that AI algorithms can be employed in surveillance to deliberately target certain social groups.
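The feedback loop described above can be made concrete with a stylized simulation. In the sketch below, with entirely hypothetical numbers, two districts have identical underlying offense rates, but a small initial disparity in recorded arrests is fed back into patrol allocation, so recorded arrests, and hence the "risk" the data appear to show, concentrate in one district.

# Stylized simulation of a biased feedback loop: patrols are allocated in
# proportion to past recorded arrests, and recorded arrests scale with
# patrol presence. Both districts have the same true offense rate.
import numpy as np

true_offense_rate = np.array([100.0, 100.0])   # identical in both districts
arrests = np.array([55.0, 45.0])               # small initial recording bias

for year in range(10):
    patrol_share = arrests / arrests.sum()      # patrols follow past data
    # Recorded arrests scale with patrol presence, not true offenses; the
    # exponent 1.2 stands in for an algorithm over-trusting its own history.
    arrests = true_offense_rate * patrol_share ** 1.2

print(np.round(arrests / arrests.sum(), 3))
# -> roughly [0.776, 0.224]: the 55/45 split has drifted toward 78/22,
#    even though the underlying offense rates never differed.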
There are some examples of the above-mentioned arguments. Fusion centers are security organizations that use data from social networking platforms or private-sector data aggregators to share information on a large proportion of the American population.34 Even if such organizations work to prevent terrorism and other social threats, abuses linked to their surveillance practices have been reported. Such centers have been involved in racial profiling, political profiling, illegal data mining and illegal data collection, exceeding the policies governing their practices.35 Their assessment tools contained biases against individuals or groups. The Harm Assessment Risk Tool (HART) is an AI-based technology that uses the histories of 104,000 people previously arrested in Durham.36 It helps scale the risk of reoffending from high to low. One limit of this tool is that human decision-makers can adapt immediately to different contexts, which is not the case for an algorithmic tool. The overall accuracy of this tool was 63% in the previous study; the lack of precision raises concerns about inaccurate identification of people. Research demonstrates the discriminatory aspect of AI systems in detecting skin colors.37 A briefing note from the McKinsey Global Institute mentions that facial recognition trained on a specific population may not apply to other populations.38 There has also been gender discrimination: Amazon stopped using an AI tool in hiring due to its bias against women.39 However, there are algorithmic tools, such as CORELS,40 that have demonstrated increased efficiency. The argument is not to ban RATs, but to be aware of the biases that can be incorporated in them and the consequences these can have on certain populations.
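CORELS (Certifiably Optimal RulE ListS) learns short, certifiably optimal rule lists, models whose every prediction can be read and audited. The rule list below is a hypothetical illustration of the form such models take, not actual CORELS output; the point is that any problematic factor sits in plain view rather than inside an opaque score.

# Hypothetical example of the rule-list form that CORELS learns.
# Each prediction follows one human-readable rule, so any contentious
# factor in the model is visible and contestable.
def predict_risk(person):
    if person["priors"] > 3:
        return "high"            # rule 1
    if person["age"] < 23 and person["priors"] > 1:
        return "high"            # rule 2
    return "low"                 # default rule

print(predict_risk({"age": 21, "priors": 2}))  # -> "high", via rule 2
print(predict_risk({"age": 40, "priors": 0}))  # -> "low"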
From the previous examples, many social risks have been exposed. The problems related to algorithmic bias have caused harm to people due to errors made in risk assessment. In certain cases, they have amplified human biases, voluntarily or involuntarily, disproportionately affecting specific social groups. The replication of human biases can intensify existing social problems such as racial discrimination, group stigmatization, excessive intervention, gender inequality and more. Algorithmic risk assessment can also dehumanize the process by removing human judgment and comprehension from the evaluation. This works against dynamic risk assessment: dynamic risk factors are constantly changing and must be included in the assessment, and it remains difficult to train an algorithm with such human characteristics. This leads to the conclusion that such AI systems should be used to assist human decision-making instead of replacing it. AI technology is not advanced enough to be used in risk assessment independently of human judgment, as risk assessment is a dynamic and complex process. It can have positive effects on public safety, but its diligent application remains a concern.

Arbitrary Factors in Surveillance

The previous section shows the difficulty of developing RAT algorithms that are exempt from biases, as they often reproduce or aggravate existing ones. Law enforcement is reactive, investigating after a crime is committed, while intelligence bodies are proactive, collecting information in advance and independently of any event. Intelligence bodies conduct surveillance on citizens broadly, not only in suspicious cases. However, with new technologies and surveillance, law enforcement has become more proactive, justified by the idea of diminishing the risk of offense and making society safer. It is important to explore the veracity of the arguments employed to justify surveillance for public security. Regarding video surveillance that includes AI software, the positive effects on crime are still debated: certain studies of closed-circuit television (CCTV) surveillance found a positive effect on crime reduction, while others found no significant effect.41 It is very difficult to measure the effectiveness of surveillance programs and the impact of strategic intelligence on decision-making. As mentioned, video surveillance is used to identify abnormal, worrisome and dangerous behaviors. One problem with this surveillance tool is that it can displace, rather than decrease, criminal activity.42 It also has little impact on violent crime.43 Time, attention and operators are no longer limiting factors in AI surveillance, with automated bots constantly monitoring; this tends to amount to classifying every citizen as a potential social threat. While AI algorithms ease reactive action on crime, it is difficult to justify collecting data on every citizen through camera surveillance with arguments related to crime prevention for public safety. This reflects a struggle to meet the legal standards that legitimize surveillance. The problem is that mass surveillance is justified by arguments that remain questionable, resulting in mass collection of data through AI devices. A credit system can be linked to surveillance, but it is more difficult to relate it to public safety. The credit score introduced in the USA in the 1970s was an attempt to reduce all credit information into one score identifying an individual’s risk for financial loans. The new wave of credit systems has integrated a large diversity of data that are fed into AI algorithms. That wide variety of data, which isn’t always related to financial security, encourages citizens to follow social norms in order to receive loans and benefit from social privileges. Previously, citizens faced consequences imposed by the state when they didn’t respect the law; now, the wide variety of data involved in AI allows more components to be integrated into arbitrary risk evaluations. Implementing the extended social aspects of an individual into their credit score acts as a form of surveillance that can be justified by public safety arguments. As an example, one social credit system plans to include social media data, criminal infractions, volunteer activity, and city and neighborhood records, to name only a few.44 The system plans to produce a trustworthiness score that determines the benefits citizens can enjoy.
People with good behaviors, as defined by governments, can be rewarded, while those not acting within the norms, without necessarily being a social threat, can see their score reduced. However, many of these components haven’t been empirically associated with social, credit or criminal risk. This can limit access to multiple public services, such as credit and transportation, for citizens who do not undeniably represent a social risk. The risk lies in the wide variety of factors that can be integrated into the credit score to push citizens to behave according to an enlarged set of social norms regulating more of citizens’ actions. It can result in multiple social consequences such as censorship, stigmatization and isolation.
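A toy sketch makes the arbitrariness concrete. The factors and weights below are entirely hypothetical, but they show how a composite "trustworthiness" score mechanically folds non-financial behavior into eligibility for benefits.

# Hypothetical composite "trustworthiness" score: a weighted sum of
# behavioral factors, only some of which relate to credit risk at all.
WEIGHTS = {
    "missed_loan_payments": -30,   # plausibly credit-related
    "criminal_infractions": -25,
    "volunteer_hours":       +2,   # arbitrary social-norm factors
    "flagged_social_posts": -10,
    "jaywalking_tickets":    -5,
}
BENEFITS_THRESHOLD = 600  # illustrative cutoff for loans, travel, etc.

def trust_score(citizen, base=700):
    return base + sum(WEIGHTS[k] * v for k, v in citizen.items())

citizen = {
    "missed_loan_payments": 0,
    "criminal_infractions": 0,
    "volunteer_hours": 10,
    "flagged_social_posts": 8,   # critical posts, not offenses
    "jaywalking_tickets": 2,
}
score = trust_score(citizen)
print(score, "eligible" if score >= BENEFITS_THRESHOLD else "restricted")
# 700 + 20 - 80 - 10 = 630 -> still eligible here, but the score is
# driven almost entirely by factors with no demonstrated link to risk.

Every weight here is a policy choice rather than an empirical finding, which is exactly the chapter’s point about arbitrary factors.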
Lack of Transparency in Disproportionate Collection of Data

Data collection is accomplished through the softening of surveillance, which refers to the idea that surveillance has become less visible and coercive. Nevertheless, the Internet allows surveillance of all its users through search engines, social media, applications and more. Users know very little about the information collected on them, which can be described as exploitation and a loss of control over personal data. Consumers often have to agree to terms in order to use platforms or online services. Even if some consumers consider surveillance on social media to be data abuse, the potential advantages of accessing those platforms outstrip the disadvantages.45 This is created by a larger integration of services and social activities into digital platforms, increasing users’ dependency on them for convenient daily actions such as communications, payments, orders, accessing services and much more. The subject’s cooperation with data collection is also increased by soft, technical seduction and communication techniques that boost persuasion and reward. Digital marketers target clients’ vulnerabilities while promoting their practices, resulting in a form of social control.46 Google’s strategy is the collection of data through its multiple applications to better analyze consumer behavior, while Facebook practices constant surveillance of its users.47 That permanent surveillance allows a certain control over people’s behavior and the information shared, and the identification of suspicious behaviors. The manipulation, transmission and collection of those data are very opaque. Many governments’ online digital decision-making systems barely give citizens access, if they have any at all, to the overall logic of the system.48
Marx mentions that the greater the distance between the data and the process measured by the device, the stronger the need for explaining such a tool.49 Most AI technologies can have a deleterious impact on the right to privacy.50 Some major transnational corporations have expressed concerns about the state’s application of AI in surveillance. Indeed, the president of Microsoft called on governments to regulate facial recognition technology to protect privacy and freedom of expression.51 Google employees expressed concerns about censorship of content, raising apprehensions about morality and ethics.52 The problem stated in this section is the lack of transparency regarding the use of data, and the constant surveillance that places significant amounts of data under the control of a small group composed mainly of governments and corporations. This results in increased control over citizens, manipulation and information regulation. Yet the rules regulating the collection and use of personal data remain limited, a good example of Walker-Osborn’s argument that laws are far behind technological development.53 The tendency to over-collect data is related to data’s central role in machine learning training, which makes it very important for constant AI development and implementation. However, that race for AI development should integrate more transparency in the use and collection of data, as well as a better-defined ethical framework.

Negative Psychological and Social Impacts

The previous sections have reflected multiple problems related to the implementation of AI in surveillance for public security, such as loss of privacy, extended control, opaque collection of data, discrimination against specific groups and the broad involvement of non-causal factors in risk. These problems can all have negative impacts on individuals and society, as the current section illustrates; it describes in more detail certain risks stated previously. When building relationships, people progressively give more personal information as trust is built, exposing a certain vulnerability to others. Many relations and exchanges are partially achieved in online communication, as social media are an important tool of socialization. As people become aware that their conversations can be recorded or monitored, they tend to become more circumspect in their communications. People who are aware of being observed can have a feeling of not belonging to themselves, which drains individual sovereignty away.54 In those circumstances, they tend to alter their behavior and modify their decisions.
Being aware of being watched generally brings people to demonstrate more conformity and self-censorship. Online communication has already redefined the means of communication among people by moving much personal interaction into digital channels, bringing new complexity to the development of relations; this reality, mixed with conscious monitoring by other parties, increases the complexity of digital communication further. Surveillance on social media platforms may mainly be used to prevent hate speech, discussion of sensitive subjects and the like; however, these rules have been set, as mentioned previously, by specific actors and are often defined arbitrarily. Anonymity has also been affected in workplaces with the use of video surveillance. Intense monitoring has canceled some of the benefits it was supposed to bring to the workplace: video surveillance was intended to increase employee productivity, but it also affected the physical and mental well-being of workers. Privacy allows individuals to form judgments and express opinions, which are not criminal behaviors, without fear of consequences. Nowadays, citizens are more aware of the means of digital and camera surveillance. If anonymity is permanently removed, a society risks decreased personal expression, creativity and critical thinking. Surveillance is omnipresent in contemporary societies, collecting data through a variety of channels. People are no longer simply tangible entities: they have become part of the Internet, in which privacy is very opaque. Undeniably, algorithmic surveillance increases the risk of disregard for privacy, as it can reveal users’ personal information. Privacy has decreased with the expansion of surveillance into private environments through the Internet and the IoT. There is a decentralization of the individual, with part of the self being transferred to data owned by third parties. It is acknowledged that personal data have become an important part of the individual, making them important to regulate.55 According to Skyler Hawk, a social psychologist at the Chinese University of Hong Kong, experiencing privacy is a basic human need that transcends culture.56 Humans need space to experiment and develop their identities. Lack of privacy can lead to more health problems known as internalizing behaviors, such as anxiety, depression and withdrawal. In that sense, the mass surveillance currently occurring, mixed with people’s increased awareness of it, might result in an increase in mental disorders in society. Surveillance that is deemed disproportionate can bring insecurity, lack of trust in the system and negative emotions.
The argument of this paragraph must be understood in the context that people are becoming more aware of personal data collection by third parties. It can be argued that someone who hasn’t committed any wrongdoing shouldn’t fear personal consequences from such surveillance, even if aware of it. However, the consequences of the loss of privacy mentioned above suggest that questions should be raised about constant digital monitoring. Technologies for surveillance and banks of data filled with personal information serve the interest of social control.57 The collection of data and the de-anonymization of people are two major concerns for societies and individuals. They could shape societies by influencing expression, association, political participation and information. They also increase the risk of ostracization and stigmatization of minorities, political opposition and other groups, as explained in the biases mentioned previously. Many consequences for society will emanate from such surveillance. The incentive for high security can originate in government agencies and local communities rather than in real needs. A study on school security reflects that argument and demonstrates that high security can have unintended, negative consequences. One of the results is that students in schools with high-security measures often feel less secure. High security also leads to more interventions, such as arrests and suspensions, which interrupt students’ learning process. Based on these findings, citizens dealing with more surveillance from authorities can get stuck in a vicious circle similar to that of these students. AI-enabled surveillance can add pressure on populations and even increase risk. In the past, the tough-on-crime approach associated with social surveillance had no positive effects and led to prison overcrowding.58 More implementation of AI in public surveillance could result in a rise in detected crime, more social pressure and maladapted interventions in societies. Indeed, it will allow for more detection of crime through broader surveillance, resulting possibly in more sanctions and higher incarceration rates, often affecting specific communities. This may increase social inequality by targeting specific groups of people. If surveillance is to be applied in this way, it will necessitate a comprehensive understanding of the best way to apply sanctions and conduct interventions on individuals. Another problem relates to the transformation of the workplace. Millions of workers will have to change occupations and redirect their careers.59 Job displacement is likely to heavily affect security and surveillance workers, as new tools are more efficient and cost-effective.
It may be difficult for these professionals to find a new position that suits their training and qualifications.
Integrated Analysis

The difficulty of analyzing surveillance exclusively for public security lies in the multiplicity of goals pursued in contemporary surveillance and in the practices of its different applications. First, the argument that it is a cheaper and less labor-intensive way to ensure social safety represents a positive point in certain respects. However, this new form of surveillance is not less repressive; it is simply pursued through different means, affecting citizens in new ways. Even if it operates in a less overtly repressive manner than human authority figures such as police officers and soldiers, it also represents omnipresent surveillance. This logic indicates a transformation of surveillance, with the application of AI-driven devices moving it away from human toward digital surveillance. While this reality remains quite new and its real consequences are yet to be fully understood, the current chapter describes some of the risks related to it. A critical understanding through a multidisciplinary approach will be required to grasp how this decrease in privacy will affect societies in the longer term. Moreover, the digitalization of surveillance can also result in job losses, as fewer security agents are needed. AI can be viewed as a way to solve some social challenges, such as racism, inequality and isolation, by bringing independent analysis of data that moves away from biased human analysis. However, this chapter includes examples of algorithms reproducing or aggravating existing human biases in their analysis. The biases exposed in algorithms risk increasing current challenges in a world where many countries have domestic populations that are more heterogeneous and interconnected. There are already examples of algorithms that have been used to target specific groups. With the intense surveillance enabled by AI and its integrated biases, consequences such as marginalization, fear and abnormal behaviors unrelated to risk are occurring. As of now, artificial intelligence in surveillance hasn’t offered a solution for tackling such challenges in a moral way, as explained in the risks previously described. In some examples, it has even been used to repress certain groups and has contributed to current social challenges. Despite the possible positive outcomes of algorithms in moving away from biased analysis and diminishing discrimination, progress remains necessary.
The argument isn’t to ban efforts to train algorithms that are more accurate and less discriminatory than people. It aims to show that such tools can be used in ways that aren’t morally good, and that the development and implementation of these new platforms must be regulated and overseen before their use. The possibility that information can be utilized to control multiple aspects of citizens’ lives, such as mobility, credit, access to services and reputation, raises several concerns about social consequences. On this point, it is relevant to consider that the majority of citizens break the law at some point in their lives, often through minor offenses committed as part of exploration, social learning or simple ignorance of the numerous rules. Most of these offenses are minor and non-violent and don’t represent an important risk to people. Contemporary surveillance through AI allows a larger number of these behaviors to be detected with ease, and interventions to be made without human actors involved. This dehumanizes the intervention and allows for broader monitoring. If people constantly fear facing consequences imposed by the authorities following actions deemed sanctionable, like jaywalking, their learning and exploration processes might be altered. The idea of abolishing all criminal or risky behaviors in a society appears logical from a moralistic point of view. However, this doesn’t consider the whole spectrum of other considerations. First, many rules leading to sanctions, such as fines, have been based on arbitrary arguments that aren’t necessarily associated with an increased risk posed by citizens. Secondly, the possibility that undesirable behaviors could be largely eradicated would result in an oppressed society, as a society with a certain crime rate is one that allows its citizens enough freedom for exploration, social learning and self-development. A major challenge is to balance the achievement of different goals while maintaining a moral approach in the application of AI surveillance, limiting negative social impacts in a collective and enriching environment that allows personal expression and freedom. Reducing harmful, violent and aggravated crimes in a population is a positive outcome, but a discussion must be held about the balance to be met in surveillance for public safety, so as to allow citizens enough freedom to prosper in a humanistic way that respects privacy and freedom and promotes self-actualization. The fast pace of development and implementation of AI in surveillance is set to expand the risks discussed here and create new problems, since laws and regulations can’t adapt rapidly enough. As the international system is based on competition between states and private entities for technological innovation, it limits the time available to reflect on the social impacts of AI applications in surveillance.
This creates pressure for entities in power to maintain economic leverage and power, sometimes with limited accountability regarding their moral and social responsibilities. This is not to say that this dynamic is new or specific to the application of AI in surveillance, but simply to reflect that it can now be accelerated in the current technological revolution. Finding solutions for a safer society through research and education, before prioritizing intrusive technological tools, might offer more positive social outcomes. However, these processes are lengthy and complex, and they can appear counterproductive. While some governments may apply more transparency and patience in implementing such technology in surveillance, this is not a priority in all states’ AI strategies. The balance between the moral use of new technological techniques and the pace of development is difficult to find. Another aspect to take into consideration is the population’s increasing awareness of data collection and security. This may result in less social acceptability and increased dissatisfaction toward data collection and intrusions into privacy. These elements go against certain powerful entities’ interests. As a way to justify such practices, abstract arguments and manipulation are put forward. The idea that AI can be used to increase the security of citizens is an argument with some flaws, as demonstrated. It is important to conceptualize these arguments within national interests. Events, ideologies and fear are all tools that can be exploited to build arguments promoting the intensification of surveillance for public safety, but as discussed, some of those arguments lack empirical evidence. Arguments associated with national security and social stability can aggravate biases toward specific groups and intensify the means of surveillance through AI. Liberal democracies are more likely to face social discontent toward AI-driven means of surveillance, while other political systems might find it easier to push their rhetoric onto their populations. As reflected in social constructivism, technological development is pushed by economic and political settings. It is driven by the power elite, meaning those who have money and resources. A major challenge for a moral utilization of AI in surveillance for public safety is that many social norms are set by the power elite, who often pursue specific interests in an opaque way. As corporations are driven by profit, they tend to overlook the moral aspect of their application of AI in surveillance through mass data collection. More surveillance can be perceived as a tool to decrease financial risks by increasing social control.
The power elite benefits from veiled practices to maintain social stability, pursue its interests and gain power. This reality is not new, but it is facilitated by the implementation of AI in surveillance. As AI is a core component of the current industrial revolution, powerful entities focus on pursuing extended data collection through different means of surveillance in order to remain competitive in the development of this technology, to understand consumers and to use it as a tool for controlling narratives and behaviors. This will be achieved partially through constant surveillance and by increasing social dependence on digital services. Digital payments, the IoT, surveillance devices and online services will become even more ubiquitous. While transparency about data collection might increase, consumers will still have to accept lengthy contracts to use different services and platforms. Considering those aspects, it can be projected that AI will see its role in public surveillance increase, making societies more dependent on technological devices to ensure social stability. Such dynamics will maintain a large flow of data by concentrating social interactions and economic operations in software and devices. Those data and software also eliminate the anonymous aspect of social interaction, development and functioning. In the modern digital world, people who refuse to use these platforms will face social consequences, as social interactions have largely moved toward social media. Citizens will have to choose between sacrificing privacy to access the advantages of social platforms that integrate AI-enabled surveillance, or staying away from this digital surveillance and dealing with the social consequences. These choices will influence the evolution of the current technological revolution in surveillance, and the risks addressed in this chapter will be influenced by that evolution. The following aims to illustrate the possible outcome of AI in public surveillance if the risks described in this chapter aren’t discussed and at least partially solved. As technology doesn’t follow a defined path of development, it is difficult to project the evolution of AI-enabled surveillance, but projections can be offered based on current dynamics. Governments and corporations are playing a major role in the development of AI and its application in surveillance. Current surveillance offers a greater dichotomy in the visibility of power: the entities controlling it see considerably into people’s lives, while the population doesn’t see what is collected about it. That asymmetry allows more opportunities to extend control, intervention, punishment and exclusion in the population. Such surveillance is accompanied by citizens’ lives being more oriented toward specific goals set by states or other powerful actors.
Omnipresent surveillance facilitates the normalization of behaviors for the sake of a social safety conceptualized by the power elite. The information received by corporations and governments is in turn used to adapt their means of surveillance and discourse. With the commodification of AI, societies are set to face more regulations and rules defining the role of individuals. Societal control will grow, leading to more regulation, intrusion, data collection and implementation of AI software. These impacts might produce a rigid frame within which individuals will have to function, resulting in less personal actualization through the control of space and expression. This may mean less social creativity, a lack of trust toward governments and standardized social behaviors. Society could fall into feudalistic conditions of surveillance in which algorithmic power is controlled by the actors in charge of its modes of production. Similar to Marcuse’s vision of technocentric societies, AI would facilitate monitoring and control through digital surveillance, network and data profiling, leading to direct repression through extended censorship, citizen tracking and communication surveillance.
Conclusion

Over time, people have pushed the limits of what is possible. They have moved away from their initial nature and created a fast-developing world with technology at its core. The application of AI technologies in surveillance can have beneficial social effects. However, it is accompanied by multiple challenges, as it creates new social problems or amplifies existing ones. The balance between the positive and negative components of AI implementation is delicate and complex, as it involves a diversity of actors, perceptions, interests and dynamics. This chapter illustrates the challenges involved in the implementation of AI-enabled surveillance for public safety. Proposing solutions such as more transparency in AI applications represents only a partial answer to those problems, as it doesn’t resolve the dynamics of power and social transformation initiated in this technological revolution, and it offers limited consideration of the international dynamics of competition between states and of corporations’ interests. It is important to take into consideration the actors applying technological devices in surveillance, and their intentions, in order to limit the risks mentioned in this chapter. Surveillance can have positive impacts for societies if applied with diligence, neutrality and reasoning that protects citizens.
On the other hand, if it is applied to promote a social perspective based on the ideas of an elite group and to block the population’s access to important, real and relevant information, it could lead to a break in societal development, critical thinking and the human transmission of knowledge. A balance must be found between the socioeconomic benefits of technological surveillance and the immoral practices related to it. One solution would be a slowdown in the implementation of AI in surveillance by governments and corporations, in order to clearly understand the underlying challenges and avoid their long-term impacts; however, this seems mainly idealistic in the current context, and more realistic solutions must be proposed. Discussions, negotiations and agreements through multidisciplinary committees or groups working with different governments would represent positive progress. In order to solve the problems cited, all parties concerned must be involved in the discussion.
Notes

1. Jacques Ellul, The Technological Society (New York: Alfred A. Knopf, 1964); Steven Feldstein, “The Global Expansion of AI Surveillance.” Working Paper. Carnegie Endowment for International Peace, Washington, DC, 2019b; David Lyon, The Electronic Eye: The Rise of Surveillance Society (Minneapolis, MN: University of Minnesota Press, 1994); Gary T. Marx, Windows into the Soul: Surveillance and Society in an Age of High Technology (Chicago, IL: The University of Chicago Press, 2016); Bilge Yesil, Video Surveillance: Power and Privacy in Everyday Life (El Paso: LFB Scholarly Publishing LLC, 2009).
2. Stefan Fritsch, “Technology and Global Affairs.” International Studies Perspectives 12 (2011), 29.
3. William Kunz, Culture Conglomerates: Consolidation in the Motion Picture and Television Industries (Lanham, MD: Rowman & Littlefield, 2006), 2.
4. Ellul, “The Technological Society”, 125–155.
5. Fritsch, “Technology”, 31.
6. Andrew Murphie and John Potts, Culture and Technology (London: Palgrave, 2003).
7. Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology (Chicago: The University of Chicago Press, 1986, 2003), 21.
8. Marx W. Wartofsky, “Technology, Power and Truth: Political and Epistemological Reflections on the Fourth Revolution.” In Democracy in a Technological Society, ed. Langdon Winner (Norwell, MA: Kluwer Academic Publishers, 1992), 16.
9. Allan Dafoe, “AI Governance: A Research Agenda.” V1.0 August 27, 2018. Future of Humanity Institute. University of Oxford, Oxford, UK, 2018, 34.
10. Fritsch, “Technology”, 34.
11. Hille Koskela, “‘The Gaze Without Eyes’: Video-Surveillance and the Changing Nature of Urban Space.” Progress in Human Geography 24, 2 (2000), 251. https://doi.org/10.1191/030913200668791096.
12. Marx, “Windows”, 20.
13. Didier Bigo, “Security, Exception, Ban and Surveillance.” In Theorizing Surveillance, ed. David Lyon (Portland: Willan, 2006), 49.
14. Feldstein, “The Global”, 5.
15. Ibid., 1.
16. Sadako Ogata, “Striving for Human Security.” United Nations Chronicle, 2015. Accessed March 3, 2020. https://www.un.org/en/chronicle/article/striving-human-security.
17. Feldstein, “The Global”, 3.
18. John Gilliom, Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy (Chicago, IL: The University of Chicago Press, 2001), 21.
19. Yesil, “Video Surveillance”, 45–52.
20. Steven Feldstein, “How Artificial Intelligence Is Reshaping Repression.” Journal of Democracy 30, 1 (2019), 40.
21. PFPDT. 2001. Surveillance par vidéo dans les transports publics—exigences minimales de la protection des données. 8e rapport d’activités 2000–2001. https://www.edoeb.admin.ch/edoeb/fr/home.html.
22. Michel Foucault, Discipline and Punish: The Birth of the Prison (Harmondsworth, UK: Penguin, 1977).
23. Yesil, “Video Surveillance”, 12.
24. Heather Roff, “Advancing Human Security Through Artificial Intelligence.” Research Paper. International Security Department and US and the Americas Program, 2018, 2. https://www.chathamhouse.org/sites/default/files/publications/research/2017-05-11-ai-human-security-roff.pdf.
25. Ibid., 2.
26. Roger Clarke, “While You Were Sleeping…Surveillance Technologies Arrived.” 2001. http://www.rogerclarke.com/DV/AQ2001.html.
27. Christian Fuchs, “Critique of the Political Economy of Web 2.0 Surveillance.” In Internet and Surveillance: The Challenges of Web 2.0 and Social Media, ed. Christian Fuchs, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval (New York: Routledge Studies in Science, Technology and Society, 2012), 10.
28. Mark Andrejevic, iSpy: Surveillance and Power in the Interactive Era (Lawrence: University Press of Kansas, 2007), 2.
29. Mick Chisnall, “Digital Slavery, Time for Abolition?” Policy Studies 41 (2020), 1. https://doi.org/10.1080/01442872.2020.1724926.
30. Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, “The AI Index 2019 Annual Report”, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA (December 2019), 65.
31. Ellora Thadaney Israni, “When an Algorithm Helps Send You to Prison.” The New York Times, October 26, 2017. https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html; Mark Latonero, “Governing Artificial Intelligence: Upholding Human Rights & Dignity.” Data & Society (2018), 9. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf; Media Mobilizing Project (MMP), “Mapping Pretrial Injustice.” Media Mobilizing Project. Accessed February 17, 2020. https://pretrialrisk.com/the-basics/.
32. Robin A. Smith, “Opening the Lid on Criminal Sentencing Software.” Duke Today, July 19, 2017. https://today.duke.edu/2017/07/opening-lid-criminal-sentencing-software.
33. Israni, “When”.
34. Robert O’Harrow, Jr., “Centers Tap into Personal Databases.” Washington Post, April 2, 2008. http://www.washingtonpost.com/wp-dyn/content/article/2008/04/01/AR2008040103049.html.
35. Torin Monahan, “The Future of Security? Surveillance Operations at Homeland Security Fusion Centers.” Social Justice 37, 3 (2011), 88. www.jstor.org/stable/41336984.
36. Geoffrey Barnes and Lawrence Sherman, “Needles & Haystacks: AI in Criminology.” Research Horizons: University of Cambridge 35 (2018), 32–33. https://www.cam.ac.uk/system/files/issue_35_research_horizons_new.pdf.
37. Latonero, “Governing”, 9.
38. James Manyika and Jacques Bughin, “The Promise and the Challenge of the Age of Artificial Intelligence.” Briefing note. McKinsey Global Institute (2018), 6. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/The%20promise%20and%20challenge%20of%20the%20age%20of%20artificial%20intelligence/MGI-The-promise-and-challenge-of-the-age-of-artificial-intelligence-in-brief-Oct-2018.pdf.
39. Jacob Serebin, “E is for Ethics in AI—And Montreal’s Playing a Leading Role.” The Gazette, March 30, 2019. https://montrealgazette.com/news/local-news/can-montreal-become-a-centre-not-just-for-artificial-intelligence-but-ethical-ai.
40. CORELS is a supervised learning algorithm.
41. Rachel Armitage, Graham Smyth and Ken Pease, “Burnley CCTV Evaluation.” In Surveillance of Public Space: CCTV, Street Lighting and Crime Prevention, ed. Kate Painter, 225–249 (UK: Willan Publishing, 1999), 244; Jason Ditton and Emma Short, “Yes, It Works, No, It Doesn’t: Comparing the Effects of Open CCTV in Two Adjacent Scottish Town Centres.” In Surveillance of Public Space: CCTV, Street Lighting and Crime Prevention, ed. Kate Painter, 201–224 (UK: Willan Publishing, 1999), 217; Martin Gill, Anthea Rose, Kate Collins, and Martin Hemming, “Redeployable CCTV and Drug-Related Crime: A Case of Implementation Failure.” Drugs: Education, Prevention and Policy 13, 5 (2006), 451; Michelle Cayford and Wolter Pieters, “The Effectiveness of Surveillance Technology: What Intelligence Officials Are Saying.” The Information Society 34, 2 (2018), 90. https://doi.org/10.1080/01972243.2017.1414721.
42. Jennifer King, Deirdre K. Mulligan, and Steven Raphael, “The San Francisco Community Safety Camera Program.” CITRIS Report, University of California, Berkeley, 2008, 11; Rachel Armitage, “To CCTV or Not to CCTV? A Review of Current Research into the Effectiveness of CCTV Systems in Reducing Crime.” Nacro, London, UK, 2002. Accessed February 3, 2020. https://epic.org/privacy/surveillance/spotlight/0505/nacro02.pdf.
43. Mark Rice-Oxley, “Big Brother in Britain: Does More Surveillance Work?” Christian Science Monitor, 2004. Accessed February 21, 2020. https://www.csmonitor.com/2004/0206/p07s02-woeu.html.
44. Mara Hvistendahl, “In China, a Three-Digit Score Could Dictate Your Place in Society.” Wired, December 14, 2017. Accessed December 1, 2020. https://www.wired.com/story/age-of-social-credit/.
45. Fuchs, “Critique”, 61.
46. Anthony Nadler and Lee McGuigan, “An Impulse to Exploit: The Behavioral Turn in Data-Driven Marketing.” Critical Studies in Media Communication 35, 2 (2018), 151. https://doi.org/10.1080/15295036.2017.1387279.
47. Fuchs, “Critique”, 33.
48. Chisnall, “Digital”, 11.
49. Marx, “Windows”, 110.
50. Filippo Raso, Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Kim Levin, “Artificial Intelligence & Human Rights: Opportunities & Risks.” Berkman Klein Center for Internet & Society Research Publication, 2018, 8.
51. Brad Smith, “Facial Recognition Technology: The Need for Public Regulation and Corporate Responsibility.” Microsoft, 2018. Accessed January 20, 2020. https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.
52. Kate Conger and Daisuke Wakabayashi, “Google Employees Protest Secret Work on Censored Search Engine for China.” The New York Times, August 16, 2018. https://www.nytimes.com/2018/08/16/technology/google-employees-protest-search-censored-china.html.
53. Charlotte Walker-Osborn, “Ethics and AI, A Moral Conundrum.” ITNow 60, 2 (2018), 46. https://doi.org/10.1093/itnow/bwy052.
54. Priscilla M. Regan, Legislating Privacy: Technology, Social Values, and Public Policy (Chapel Hill, NC: The University of North Carolina Press, 1995).
55. Simon Rogerson, “The Data Self | The Connected World and Mobility: Ethical Challenges”, 2018. Accessed December 20, 2019. https://internetofbusiness.com/the-data-self-the-connected-world-and-mobility-a-global-ethical-challenge/.
56. Kristen Weir, “Parents Shouldn’t Spy on Their Kids.” Nautilus. Last modified on April 14, 2016. http://nautil.us/issue/35/boundaries/parents-shouldnt-spy-on-their-kids.
57. Ronald D. Schwartz, “Artificial Intelligence as a Sociological Phenomenon.” The Canadian Journal of Sociology 14, 2 (1989), 181–182. https://doi.org/10.2307/3341290.
58. Greg Pogarsky and Alex R. Piquero, “Can Punishment Encourage Offending? Investigating the ‘Resetting’ Effect.” Journal of Research in Crime and Delinquency 40 (2003), 95. https://doi.org/10.1177/0022427802239255.
59. Manyika and Bughin, “The Promise”, 5.
Bibliography

Andrejevic, Mark. 2007. iSpy: Surveillance and Power in the Interactive Era. Lawrence: University Press of Kansas.
Anzalone, Charles. 2015. Study Finds Tight School Security Can Have Unintended, Negative Consequences. University at Buffalo. http://www.buffalo.edu/news/releases/2015/11/037.html.
Armitage, Rachel. 2002. “To CCTV or Not to CCTV? A Review of Current Research into the Effectiveness of CCTV Systems in Reducing Crime.” Nacro, London, UK. Accessed February 3, 2020. https://epic.org/privacy/surveillance/spotlight/0505/nacro02.pdf.
Armitage, Rachel, Graham Smyth, and Ken Pease. 1999. “Burnley CCTV Evaluation.” In Surveillance of Public Space: CCTV, Street Lighting and Crime Prevention, edited by Kate Painter, 225–249. UK: Willan Publishing.
Barnes, Geoffrey, and Lawrence Sherman. 2018. “Needles & Haystacks: AI in Criminology.” Research Horizons: University of Cambridge, 35, 32–33. https://www.cam.ac.uk/system/files/issue_35_research_horizons_new.pdf. Bigo, Didier. 2006. Security, Exception, Ban and Surveillance. In Theorizing Surveillance, edited by David Lyon, 46–68. Portland: Willan. Cayford, Michelle and Wolter Pieters. 2018. “The Effectiveness of Surveillance Technology: What Intelligence Officials Are Saying.” The Information Society, 34:2, 88–103. https://doi.org/10.1080/01972243.2017.1414721. Chisnall, Mick. 2020. “Digital Slavery, Time for Abolition?” Policy Studies, 41, 488–506. https://doi.org/10.1080/01442872.2020.1724926. Clarke, Roger. 2001. “While You Were Sleeping…Surveillance Technologies Arrived.” Accessed December 3, 2020. http://www.rogerclarke.com/DV/ AQ2001.html. Conger, Kate, and Daisuke Wakabayashi. 2018. “Google Employees Protest Secret Work on Censored Search Engine for China.” The New York Times, August 16. https://www.nytimes.com/2018/08/16/technology/ google-employees-protest-search-censored-china.html. Dafoe, Allan. 2018. “AI Governance: A Research Agenda.” V1.0 August 27, 2018. Future of Humanity Institute. University of Oxford, Oxford, UK. Ditton, Jason, and Emma Short. 1999. “Yes, It Works, No, It Doesn’t: Comparing the Effects of Open CCTV in Two Adjacent Scottish Town Centres.” In Surveillance of Public Space: CCTV, Street Lighting and Crime Prevention, edited by Kate Painter, 201–224. UK: Willan Publishing. Domhoff, G.William. 2005. Who Rules America? Power, Politics and Social Change. 5th ed. New York: McGraw Hill. Ellul, Jacques. 1964. The Technological Society. New York: Alfred A. Knopf. Feldstein, Steven. 2019a. “How Artificial Intelligence Is Reshaping Repression.” Journal of Democracy, 30:1, 40–53. Feldstein, Steven. 2019b. “The Global Expansion of AI Surveillance.” Working Paper. Carnegie Endowment for International Peace, Washington, DC. Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. Harmondsworth, UK: Penguin. Fristch, Stefan. 2011. “Technology and Global Affairs.” International Studies Perspectives, 12, 27–45. Fuchs, Christian. 2012. “Critique of the Political Economy of Web 2.0 Surveillance.” In Internet and Surveillance: The Challenges of Web 2.0 and Social Media, edited by Christian Fuchs, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval, 71–88. New York: Routledge Studies in Science, Technology and Society. Fuchs, Christian, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval. 2012. Internet and Surveillance: The Challenges of Web 2.0 and Social Media. New York: Routledge Studies in Science, Technology and Society.
Gill, Martin, Anthea Rose, Kate Collins, and Martin Hemming. 2006. "Redeployable CCTV and Drug-Related Crime: A Case of Implementation Failure." Drugs: Education, Prevention and Policy, 13:5, 451–460.
Gilliom, John. 2001. Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. Chicago, IL: The University of Chicago Press.
Green, Lelia. 2001. Technoculture: From Alphabet to Cybersex. Crows Nest, NSW: Allen & Unwin.
Hvistendahl, Mara. 2017. "In China, a Three-Digit Score Could Dictate Your Place in Society." Wired, December 14. Accessed December 1, 2020. https://www.wired.com/story/age-of-social-credit/.
Israni, Ellora Thadaney. 2017. "When an Algorithm Helps Send You to Prison." The New York Times. Last modified October 26, 2017. https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html.
King, Jennifer, Deirdre K. Mulligan, and Steven Raphael. 2008. "The San Francisco Community Safety Camera Program." CITRIS Report, University of California, Berkeley.
Koskela, Hille. 2000. "'The Gaze Without Eyes': Video-Surveillance and the Changing Nature of Urban Space." Progress in Human Geography, 24:2, 243–265. https://doi.org/10.1191/030913200668791096.
Kunz, William M. 2006. Culture Conglomerates: Consolidation in the Motion Picture and Television Industries. Lanham, MD: Rowman & Littlefield.
Latonero, Mark. 2018. "Governing Artificial Intelligence: Upholding Human Rights & Dignity." Data & Society. Accessed March 21, 2020. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf.
Lyon, David. 1994. The Electronic Eye: The Rise of Surveillance Society. Minneapolis, MN: University of Minnesota Press.
Manyika, James, and Jacques Bughin. 2018. "The Promise and the Challenge of the Age of Artificial Intelligence." Briefing note. McKinsey Global Institute. Accessed March 10, 2020. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/The%20promise%20and%20challenge%20of%20the%20age%20of%20artificial%20intelligence/MGI-The-promise-and-challenge-of-the-age-of-artificial-intelligence-in-brief-Oct-2018.pdf.
Marx, Gary T. 2016. Windows into the Soul: Surveillance and Society in an Age of High Technology. Chicago, IL: The University of Chicago Press.
McCorduck, Pamela. 1981. Machines Who Think. San Francisco: W. H. Freeman.
Media Mobilizing Project (MMP). "Mapping Pretrial Injustice." Media Mobilizing Project. Accessed February 17, 2020. https://pretrialrisk.com/the-basics/.
Monahan, Torin. 2011. "The Future of Security? Surveillance Operations at Homeland Security Fusion Centers." Social Justice, 37:3, 84–98. Accessed February 17, 2020. www.jstor.org/stable/41336984.
Murphie, Andrew, and John Potts. 2003. Culture and Technology. London: Palgrave.
Nadler, Anthony, and Lee McGuigan. 2018. "An Impulse to Exploit: The Behavioral Turn in Data-Driven Marketing." Critical Studies in Media Communication, 35:2, 151–165. https://doi.org/10.1080/15295036.2017.1387279.
O'Harrow, Robert, Jr. 2008. "Centers Tap into Personal Databases." Washington Post. Last modified April 2, 2008. http://www.washingtonpost.com/wp-dyn/content/article/2008/04/01/AR2008040103049.html.
Ogata, Sadako. 2015. "Striving for Human Security." United Nations Chronicle. Accessed March 3, 2020. https://www.un.org/en/chronicle/article/striving-human-security.
Perrault, Raymond, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles. 2019. "The AI Index 2019 Annual Report." AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December.
PFPDT. 2001. Surveillance par vidéo dans les transports publics—exigences minimales de la protection des données [Video surveillance in public transport—minimum data protection requirements]. 8e rapport d'activités 2000–2001. Accessed March 3, 2020. https://www.edoeb.admin.ch/edoeb/fr/home.html.
Pogarsky, Greg, and Alex R. Piquero. 2003. "Can Punishment Encourage Offending? Investigating the 'Resetting' Effect." Journal of Research in Crime and Delinquency, 40, 95–120. https://doi.org/10.1177/0022427802239255.
Raso, Filippo, Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Kim Levin. 2018. "Artificial Intelligence & Human Rights: Opportunities & Risks." Berkman Klein Center for Internet & Society Research Publication.
Regan, Priscilla M. 1995. Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill, NC: University of North Carolina Press.
Rice-Oxley, Mark. 2004. "Big Brother in Britain: Does More Surveillance Work?" Christian Science Monitor. Accessed February 21, 2020. https://www.csmonitor.com/2004/0206/p07s02-woeu.html.
Roff, Heather M. 2018. "Advancing Human Security Through Artificial Intelligence." Research Paper. International Security Department and US and the Americas Program. Accessed February 3, 2020. https://www.chathamhouse.org/sites/default/files/publications/research/2017-05-11-ai-human-security-roff.pdf.
Rogerson, Simon. 2018. "The Data Self | The Connected World and Mobility: Ethical Challenges." Accessed December 20, 2019. https://internetofbusiness.com/the-data-self-the-connected-world-and-mobility-a-global-ethical-challenge/.
Schwartz, Ronald D. 1989. "Artificial Intelligence as a Sociological Phenomenon." The Canadian Journal of Sociology, 14:2, 179–202. https://doi.org/10.2307/3341290.
Serebin, Jacob. 2019. "E Is for Ethics in AI—And Montreal's Playing a Leading Role." The Gazette. Last modified March 30, 2019. https://montrealgazette.com/news/local-news/can-montreal-become-a-centre-not-just-for-artificial-intelligence-but-ethical-ai.
Smith, Brad. 2018. "Facial Recognition Technology: The Need for Public Regulation and Corporate Responsibility." Microsoft. Accessed January 20, 2020. https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.
Smith, Robin A. 2017. "Opening the Lid on Criminal Sentencing Software." Duke Today. Last modified July 19, 2017. https://today.duke.edu/2017/07/opening-lid-criminal-sentencing-software.
Walker-Osborn, Charlotte. 2018. "Ethics and AI, a Moral Conundrum." ITNow, 60:2, 46–47. https://doi.org/10.1093/itnow/bwy052.
Wartofsky, Marx W. 1992. "Technology, Power and Truth: Political and Epistemological Reflections on the Fourth Revolution." In Democracy in a Technological Society, edited by Langdon Winner, 15–34. Norwell, MA: Kluwer Academic Publishers.
Weir, Kristen. 2016. "Parents Shouldn't Spy on Their Kids." Nautilus. Last modified April 14, 2016. http://nautil.us/issue/35/boundaries/parents-shouldnt-spy-on-their-kids.
Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: The University of Chicago Press.
Yesil, Bilge. 2009. Video Surveillance: Power and Privacy in Everyday Life. El Paso: LFB Scholarly Publishing LLC.
PART II
Global Security
CHAPTER 7
Artificial Intelligence for Peace: An Early Warning System for Mass Violence Michael Yankoski, William Theisen, Ernesto Verdeja, and Walter J. Scheirer
Introduction
Pundits are increasingly raising concerns about the dangers that advanced artificial intelligence (AI) systems may pose to human peace and safety. Elon Musk (Clifford 2018) warned that AI has the potential to be more
dangerous than nuclear weapons. Stephen Hawking (BBC 2014) worried that AI could mean the end of the human species. A recent Special Issue of the Bulletin of the Atomic Scientists included several warnings about the coming AI Arms Race (Roff 2019). Indeed, many of the chapters in this volume are similarly concerned with mitigating the negative implications of advanced AI. We agree that caution is warranted regarding the rapid development in the field of AI, but we also believe that some AI research trajectories may be employed toward positive ends. In this chapter, we introduce one current research trajectory that combines AI and social scientific research on political violence in order to contribute to practical conflict prevention. AI systems are capable of significantly enhancing the work of peacebuilders in specific but important ways, for artificial intelligence provides unique tools for identifying and analyzing emergent trends and threats within massive volumes of real-time data on the Internet, well beyond the capacities of most existing political violence early warning systems. This chapter discusses a novel project that brings together computer and social scientists using artificial intelligence to advance current atrocity and political instability early warning capabilities. We focus on how the spread of disinformation, rumors, and lies on social media—essentially, hate propaganda—in already unstable political contexts may function as an early warning indicator of imminent large-scale violence. Researchers have long argued that hate propaganda legitimizes violence against vulnerable groups (Chirot and McCauley 2010; Sémelin 2005; Kiernan 2003; Koonz 2003; Weitz 2003), but in our current social media landscape, where harmful and manipulative political content circulates more rapidly and widely than ever before, the dangers are especially acute. This was evident, for instance, in the lead-up to the Indonesian elections in 2019, when Instagram and Twitter were filled with conspiratorial allegations about treasonous politicians who had to be prevented from winning at the ballot box by any means, including through terror, threats, and killings (Suhartono 2019; BBC 2019). In Myanmar, ongoing atrocities against the Rohingya minority group have been fueled by nationalist memes on Twitter and Facebook accusing the Rohingya of being dangerous foreigners, an existential threat to the integrity and survival of the country that must be eradicated (Azeem 2018; BSR 2020). In these cases and many others, social media has played a defining role in perpetuating dehumanization and facilitating violence.
Our focus in this project is on advancing what peace studies scholars often refer to as "negative peace"—the absence of armed conflict and direct violations of bodily integrity, such as killings, assaults, and torture— by contributing to early warning modeling and analyses of political violence.1 As such, we present a model of computational forensic analysis of digital images in social media to help identify where and when unstable societies may tip into large-scale violence, by providing journalists and prevention practitioners—that is, policymakers, analysts, and human rights advocates in the atrocity prevention community—with data-rich, theoretically robust assessments of processes of violence escalation in near "real time." This type of triage mechanism is central to ensuring timely and effective preventive responses on the ground. Our project is a work in progress, but we believe it suggests a path forward that can also benefit from contributions from the broader scholarly community. We realize that developing the political will to prevent or stop violence is crucial and extremely difficult (Lupel and Verdeja 2013; Weiss 2016), but providing more accurate and actionable information on conflict escalation can aid prevention work significantly. Our unique focus on digital images is driven by the new ways in which people communicate on the Internet, which are no longer just text-based. Given the enormous volume of social media data produced daily and the need for timely and accurate analysis, AI systems offer unique capabilities for enhancing and even transforming the work of practitioners tasked with anticipating and responding to violence. The chapter is dedicated to outlining the computational dimensions of our project. We proceed in several steps. First, we outline the current state of risk assessment and early warning research and practice, and situate our project within this field. We then introduce specific problems of disinformation: namely, how social media is increasingly used to sow fear and distrust in already fragile communities and further legitimize violence against vulnerable groups. This kind of disinformation is an important indicator of likely violence in the near- or mid-term, and thus needs to be actively monitored if early warning analyses are to be more focused, accurate, and actionable. Despite their importance, it is exceedingly difficult to understand the spread and impact of coordinated disinformation campaigns in real time using currently available tools. We then discuss our project in detail, first by outlining the types of entities we analyze—social media memes—and then by sketching the overall computational model of analysis. We then turn to some further areas of development, and
finally explore the ethical and policy implications of AI work in atrocity prevention.
Risk Assessment and Early Warning: What We Know, What We Need to Know
There is a long history of systematic attempts to anticipate the outbreak of large-scale political violence, going back at least to the 1950s when the superpowers sought to model the likelihood of nuclear war through a variety of simulations (Edwards 1997; Poundstone 1992). Since the international community's failure to prevent genocides in Bosnia-Herzegovina and Rwanda in the 1990s, governments and human rights advocates have devoted more attention to forecasting political instability and mass violence, often working closely with scholars to develop rigorous and evidence-based approaches to prevention work.2 Today there are numerous early warning and risk assessment initiatives focused on the main drivers and signs of impending violence and on informing atrocity prevention (Waller 2016). We now have a sophisticated understanding of conditions that elevate the risk of violence, but systematic and generalized models of short-term patterns of violence onset are less well developed. It is exceedingly difficult to interpret fast-moving political violence dynamics. Our project contributes to these early warning efforts to understand real-time violence escalation through an analysis of the circulation of digital images on social media that can encourage and legitimize harm against vulnerable minorities. The project primarily focuses on countries already at high risk of violence where the timing or onset of violence is difficult to know. This is ultimately about forecasting the likelihood of violence, not about providing a causal theory of violence. The distinction between these is pivotal. Much like sharp pains in the left torso may indicate an imminent heart attack without being its cause, political violence forecasting is concerned with assessing the probability of future violations, rather than causally explaining their occurrence after the fact. Current research has identified a host of general conditions that elevate a country's likelihood of future violence. The first condition is a history of unpunished violence against vulnerable minorities (Harff 2003; Goldsmith et al. 2013). Prior impunity legitimizes future violence because potential perpetrators know they face little or no sanction. Severe political instability, especially armed conflict, is another major risk factor (Midlarsky
2005; Goldstone et al. 2010). In countries experiencing ongoing and profound crises, such as war, insurgencies, coups, or violent changes of political control, leaders are much more likely to rely on increasingly harsh repressive measures to eliminate perceived threats and remain in power. Armed conflict, whether an international or civil war, is among the strongest predictors of future atrocities against civilians. A third factor is the espousal of a radical ideology by government leaders and/or armed challengers that systematically dehumanizes others (Robinson 2018; Weitz 2003). Such extreme ideologies—whether religious, ethnic, racial, or authoritarian variants of left- or right-wing ideologies—justify the use of increasingly repressive measures against vulnerable civilians. Related to this is ongoing state-led discrimination, including the denial of basic civil and political rights as well as movement restrictions, which are highly correlated with atrocities (Fein 2007; Goldstone et al. 2010). Finally, regime type matters; authoritarian regimes are more likely than democracies to engage in violence. Semi-democratic regimes, with limited political contestation and some opposition political movements but weak rule of law and unaccountable political leaders, are also more prone to violence than robust democracies (Stewart 2013). The greater the presence of these factors, the greater a country's risk of experiencing mass violence. However, these factors are largely static—they are useful for providing general risk assessments (low, mid, or high risk) of violence, but they do not fluctuate much over time. They tell us the relative likelihood of future violence, but do not provide precise insights into when a high-risk situation may devolve into overt killings and atrocities. Much harder to pinpoint in real time are the short- and mid-term events and processes that are indicators of the shift from high-risk conditions into actual violence, or what is normally known as early warning (Heldt 2012). Broadly, there are three clusters of early warning indicators (Verdeja 2016). First, there are dangerous symbolic moments or discourses that significantly dehumanize already vulnerable populations, or reinforce deep identity cleavages between groups. This includes rallies and commemorations of divisive events or the spread of hate propaganda. Second is an uptick in state repression, such as moving security forces to places with vulnerable populations, stripping those populations of legal rights, attacks against prominent minority or opposition leaders and their followers, or widespread civilian arrests. Finally, political and security crises challenging incumbent political leaders are important indicators, and these can include new or resumed armed conflict between the
state and rebels, rapid changes in government leadership, or the spread of confrontational protests. Unforeseen exogenous shocks like natural disasters or neighboring conflict spillover can also trigger violence by challenging the ability of political leaders to maintain control. In many instances, several early warning indicators will occur simultaneously or in clusters. Even with these indicators, there is room to improve and expand early warning capabilities. The primary concern involves limitations in the overall availability and quality of information. Many conflict scenarios are extremely difficult and dangerous to access physically, and thus we must rely on the information provided by relatively limited numbers of people in the field, whether journalists, aid workers, local residents, government officials, or displaced civilians. The reliability of these sources varies, but even when sources are dependable, it can be exceedingly hard to know how representative the information is, especially when mobility is limited. For instance, does a journalist's reporting in a small area reflect the entire region or the hardest-hit areas of a country? If not, what is missing? How can we compensate for these limitations? In short, many existing early warning models can expand their source material to systematically tap into much richer social media streams, which can reveal whether violence is escalating or a situation is on the cusp of escalation. To be clear: social media streams do not represent a more factually accurate portrayal of what is occurring. Many politicized memes, for instance, are misleading or outright lies, and their provenance (i.e., origin and mode of creation) can be hard to identify, as we discuss below. Indeed, social media is itself a new battleground in contemporary conflicts, as armed actors frequently distort and misrepresent the actions and motives of their enemies as a justification for violence. However, politicized social media provides a stream of real-time data that our system is capable of analyzing in order to identify trends that signify increased potential for outbreaks of political violence. Thus, integrating social media analysis into early warning evaluations may significantly enhance conflict prevention and intervention work. In order to understand what this entails, the next section discusses what a political meme is and then presents the main features of our AI project.
What Is a Political Meme?
If the new media landscape is a battleground, then one must study how contemporary political movements communicate on the Internet in order to begin to formulate a response. Unlike in the past, when most people were largely consumers of professionally curated media content, anyone can now make and disseminate their own political messages to an audience of millions on the Internet. The widespread availability of powerful image editing tools has democratized digital content creation, allowing users with basic computer skills and time to produce custom images and videos. This content most often takes the form of a meme. Memes are cultural artifacts that evolve and spread like a biological organism, but are completely external to biology. On the Internet, memes consist of images, often humorous, that adhere to a set genre, which acts as a guideline for the creation of new instances. But these images are often more than just jokes. Memes have served as the impetus for political action in movements as diverse as the Arab Spring (York 2012), Occupy Wall Street (Know Your Meme 2020), and Black Lives Matter (Leach and Allen 2017) (Fig. 7.1). And they are now a significant resource
Fig. 7.1 A selection of political memes from the past decade, all exemplifying cultural remixing. Left: A meme that is critical of the Syrian regime's use of poison gas, in the style of the iconic Obama "Hope" poster. Center: A meme associated with the UC Davis Pepper Spray Incident during the Occupy Wall Street protests. Right: A Black Lives Matter meme where the raised fist is composed of the names of police victims. This meme also includes the movement's signature hashtag
for monitoring the pulse of an election, responses to international security incidents, or the viewpoints surrounding a domestic controversy. The popularity of memes continues to grow, and so does the scope of political messaging attached to them. Cases where political memes prefigured violence in some form are not difficult to come by on social media. Our own work has uncovered cases across the globe where memes have been used to spread antisocial messages, from discriminatory stereotypes to outright calls for violence. In Brazil, we found memes of right-wing President Jair Bolsonaro depicted as an action hero, ready to take on the country's drug traffickers (left panel of Fig. 7.2). This coincides with an escalation of the Brazilian drug war and violence against the poor, in which state security forces have been responsible for over a third of the violent deaths reported in Rio de Janeiro (Santoro 2019). In India, we witnessed misogynistic memes featuring a cheerful Prime Minister Narendra Modi overlaid with text containing degrading messages against women (center panel of Fig. 7.2), while the government continues to undermine legal protections for women (Human Rights Watch 2018). In Indonesia, we discovered images where a hammer and sickle were superimposed on the prayer mats of Muslim worshipers, meant to insinuate that they are crypto-communists (right panel of Fig. 7.2). These images were found shortly before the 2019 presidential election, which ended with violent street protests in Jakarta (Suhartono 2019; BBC 2019). The list goes on.
Fig. 7.2 A selection of political memes with disturbing messaging. Left: Brazilian President Jair Bolsonaro depicted as an action hero, ready to take on Brazil’s drug traffickers. Center: A misogynistic meme featuring Indian Prime Minister Narendra Modi. Right: Hammer and sickle superimposed on the prayer mats of Islamic worshipers in Indonesia
If AI is to be deployed as a mechanism to watch for political memes that may be used to incite large-scale violence in high-risk contexts, we require a rigorous definition of a political meme to operationalize the algorithms. Definitions like the one given above, referring to evolution and organism-like propagation, are typically attributed to the biologist Richard Dawkins (2016). However, Dawkins' thinking on the meme, whether intentionally or not, borrowed liberally from the notion of intertextuality in literary theory: the shaping of a text's meaning by other texts. Julia Kristeva, and Mikhail Bakhtin before her, suggested that the novel reworking and retransmission of information is fundamental to human communication (Kristeva 1986). In the Kristevan mode, intertextuality applies to all semiotic systems, including digital images and video. Intertextuality is a more useful framing when assessing content on the Internet, which can be collected and analyzed via automatic means to identify intertexts, those specific points of correspondence between artifacts (Forstall and Scheirer 2019). While other researchers have sought to define the meme in general terms (Shifman 2014), we are not interested in all memes for an early warning system for violence. Thus we offer the following definition for a political meme: a multimedia intertext meant to engage an in-group and/or antagonize an out-group. Given this definition, how can we operationalize it within the context of today's AI capabilities, giving an algorithm what it needs to automatically assess any potential threats of violence that political memes might pose? First, the source of the meme can be scrutinized. Where the meme was found on social media, and who posted it, can be diagnostic. For instance, if the source is known to be political, the meme might be as well. But the source is not essential for an observer to understand the message a meme conveys. More importantly, the content of the meme is composed of visual and textual cues that deliver the message. In many instances, decisions are made based purely on the visual style of an image. For example, does it look like something we have seen before, which is known to be political in some regard? If so, then an intertextual association has been made. If it is new, is there something in the visual content or text, if present, that gives us a clue as to whether or not it is political? Such clues can be the presence of political figures in the image, places associated with political history, symbols associated with political or religious groups, or objects with some political significance. Finally, not all political memes are something to worry about; thus, we need to separate the innocuous from the dangerous. This is done by establishing semantic links between
the visual content and text, as well as by assessing the sentiment of those elements. Recognizing all of this is a pattern recognition problem—an area where AI systems excel, because they can be trained to detect each of the cues described above. The challenge is making the system understand their relevance in a manner consistent with human observers.
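To make this operationalization concrete, the sketch below shows one way such signals might be combined into a review-priority score. The signal names, weights, and threshold are illustrative assumptions made for exposition; they are not taken from our system.

```python
from dataclasses import dataclass

@dataclass
class MemeSignals:
    """Signals assumed to be extracted upstream by vision and text models."""
    source_is_political: bool   # posting account known to be political
    matches_known_genre: bool   # intertextual match to a known political genre
    political_symbols: int      # count of detected political figures/symbols
    text_sentiment: float       # -1.0 (hostile) .. 1.0 (benign), from a text model
    dehumanizing_terms: int     # lexicon hits for dehumanizing language

def review_priority(s: MemeSignals) -> float:
    """Combine signals into a rough 0..1 'needs human review' score.
    Weights are illustrative placeholders, not calibrated values."""
    score = 0.2 * s.source_is_political
    score += 0.2 * s.matches_known_genre
    score += min(0.2, 0.05 * s.political_symbols)
    score += 0.2 * max(0.0, -s.text_sentiment)   # hostile text raises the score
    score += min(0.2, 0.1 * s.dehumanizing_terms)
    return score

meme = MemeSignals(True, True, 2, -0.7, 1)
needs_review = review_priority(meme) > 0.5  # review threshold is illustrative
```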
Technologies for an Early Warning System for Violence
If visual content found on social media is the object of focus for our early warning system, then what, exactly, are the technical requirements for understanding it? This task is fundamentally different from more traditional forms of early warning, where indicators are established using quantitative variables from the social science literature (e.g., datasets on protests, armed conflict onsets, coups, etc.) that can be processed using statistical techniques from data science and used to make general predictions. Here we bring to bear new methods from the areas of computer vision and media forensics, as well as established best practices from high-performance computing and web and social media studies. We do this in order to build a comprehensive system that proceeds from data ingestion to making predictions about content that may contain messaging intended to incite large-scale violence. In general, there are three basic required components of such a system (Fig. 7.3): (1) the data ingestion platform; (2) the AI analysis engine; and (3) the user interface (UI). In this section, we will detail these technical aspects of the system.
Fig. 7.3 The processing pipeline of the proposed early warning system for large-scale violence, composed of three basic required technology components
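As a purely structural sketch, the three components might be wired together as follows; the class and method names are our own illustrative choices rather than the prototype's actual interfaces.

```python
from typing import Dict, Iterable, List

class DataIngestion:
    """Component 1: harvest public posts (images plus meta-data) from social platforms."""
    def harvest(self, hashtags: List[str]) -> Iterable[Dict]:
        # Placeholder: a real implementation would call platform APIs or crawlers.
        return iter([])

class AnalysisEngine:
    """Component 2: index, retrieve, and cluster harvested images into content genres."""
    def analyze(self, records: Iterable[Dict]) -> List[List[Dict]]:
        # Placeholder: indexing, retrieval, and clustering happen here.
        return []

class UserInterface:
    """Component 3: present discovered genres and alerts to human analysts."""
    def render(self, genres: List[List[Dict]]) -> None:
        for i, genre in enumerate(genres):
            print(f"genre {i}: {len(genre)} images")

# The pipeline runs ingestion -> analysis -> presentation.
ui = UserInterface()
ui.render(AnalysisEngine().analyze(DataIngestion().harvest(["#example"])))
```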
Data ingestion is the most important part of the process. If we do not look in the right place to begin with, then the early warning system will not be useful. Given the consolidation of the Internet over the past decade (itself a contributing factor to the overall problem we are studying) into a handful of popular social networks (Internet Society 2019), data targeting is, in some sense, straightforward. However, as one dives into these social networks, their vast internal complexity becomes apparent, with many communities and subcommunities existing in complex webs on platforms like Reddit, 4chan, Twitter, Facebook, and Instagram. So, where to look? Social media platforms have always striven to add structuring elements to their posts, in order for users to better identify relevant topics and authors. Examples of these structuring elements include user-defined hashtags (the "#" symbol followed by a plaintext string) that are used as meta-data to tag posts with custom topics on most social networks, as well as account names (often indicated by the "@" symbol). However, the reliability of these elements can be questionable, and some platforms go out of their way to render them useless (most notably 4chan, where nearly every user is anonymous). Media objects such as images and videos also form structured elements in posts, and for our purposes, posts containing these media types are of primary interest. Accordingly, our attention will be on visual content for the rest of this chapter. Note, though, that it is possible to bring comments, shared or reposted content, and "likes" into the analysis as well. Further, when it comes to the wholesale data harvesting that is necessary to watch the Internet in a meaningful way in real time, there is a diversity of access control mechanisms in place that restrict direct access to the data. Some sites make this process easy, while others make it very difficult. For instance, Twitter provides an Application Programming Interface (API) (Twitter 2020) that makes all of its data very accessible to automated data ingestion. The main motivation for this is the development of third-party apps that make creative use of the data that appears on Twitter, but there is also some sympathy for academic work that looks at the social aspects of information exchange on the platform. Other sites, like Facebook, are far more closed, leading to the need for specialized content crawlers that can search for relevant content without the assistance of an open API. Rate-limiting procedures put in place on social networks are another confounding element, and a potentially dangerous roadblock for any early warning system that needs timely access to data. An example of this is Facebook, which uses dynamic links to content,
which change on a regular basis. These problems can be mitigated with a distributed system design, whereby many instances of the data ingestion component run from different parts of the Internet. A practical constraint of data ingestion is the scale of the data an early warning system must consider in order to be effective. In a targeted operation, where a particular region and/or set of actors is being monitored, we can operate over data on the order of millions of images within a period of weeks. This is the current upper bound for media forensics conducted at the academic level (NIST 2018). Beyond this, the cost of data storage and the time required to collect the data become prohibitive. However, there is a need for the system to scale to the order of billions of images per day (Eveleth 2015) for a truly comprehensive real-time capability. The ultimate goal is to be able to watch all of social media for emerging trends. In order to accomplish this goal, partnerships will need to be established between non-governmental organizations (NGOs), social media companies, and academics to provide access to the data at the source. The reluctance of social media companies to allow outsiders access to their internal data repositories is a serious stumbling block in this regard. We envision that such partnerships will lead to the transfer of early warning techniques that can be run internally at a social network when data access is problematic. Given existing constraints, we have found that an effective approach to data ingestion involves partnering with local experts who can provide relevant hashtags and accounts to review, thus mitigating the need to monitor everything on the Internet at once. While this approach may miss quite a bit of potentially threatening content, targeted sampling is still effective. For example, in our work on the 2019 Indonesian Presidential Election (Yankoski et al. 2020), we partnered with the local fact-checking organization Cekfakta (2020). By using local volunteers throughout Indonesia, Cekfakta is able to identify social media sources of concern. However, human monitoring of even a limited number of sources proved incapable of keeping up with the pace of content creation. From just 26 hashtags and eight users on Twitter and Instagram, we harvested over two million images for analysis (Theisen et al. 2020)—a staggering number that exceeds the human capacity to find patterns in unorganized data.
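For illustration, a minimal harvesting routine against the Twitter API might look like the sketch below, assuming the tweepy library, valid API v2 credentials, and an access tier that permits media queries; the hashtag list is a placeholder for the kind supplied by local partners.

```python
import tweepy  # assumes Twitter API v2 access and the tweepy library

BEARER_TOKEN = "..."         # credential placeholder
HASHTAGS = ["#exampletag"]   # illustrative; real lists come from local experts

client = tweepy.Client(bearer_token=BEARER_TOKEN)

def harvest_image_urls(hashtag: str, limit: int = 100) -> list:
    """Collect URLs of photos attached to recent public tweets for one hashtag."""
    resp = client.search_recent_tweets(
        query=f"{hashtag} has:images -is:retweet",
        expansions=["attachments.media_keys"],
        media_fields=["url"],
        max_results=min(limit, 100),
    )
    media = resp.includes.get("media", []) if resp.includes else []
    return [m.url for m in media if m.type == "photo"]

for tag in HASHTAGS:
    for url in harvest_image_urls(tag):
        print(url)  # a real pipeline would download and index each image
```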
The artificial intelligence component of the early warning system is designed to automate the analysis of the large collections of media content assembled at the data ingestion stage. A key to success in this regard is the use of state-of-the-art artificial perception methods. In the debates surrounding the use of artificial intelligence technology, there is a pervasive misunderstanding of the difference between perception and cognition. As we discussed at the beginning of this chapter, fears over AGI, or artificial general intelligence, have led to accusations that all AI technology represents an existential threat to humanity. Should AGI ever appear, it would embody a set of cognitive models meant to mimic the conscious mental actions of knowledge acquisition and reasoning in the human mind. This type of technology does not exist at the time of writing, and we are skeptical that it will emerge in the foreseeable future. The complexities of the human brain as a system are beyond the current understanding of science. We do not possess a model of computation for the brain, nor do we have explanatory models for complex phenomena such as conscious thought (Marcus and Davis 2019). Where AI has made inroads into modeling competencies of the brain is in the sensory systems (e.g., audition, olfaction, vision). Perception is the ability to take information from a sensory system and make decisions over it. This process mostly unfolds in an unconscious manner, but embodies a set of complex pattern recognition behaviors. The most studied and best modeled sensory modality is vision. The fields of computer vision and machine learning have taken direct inspiration from experimental observations in neuroscience and psychology (Goodfellow et al. 2016), leading to features (i.e., descriptions of the data) and classifiers (i.e., models that make decisions over features) that are, in some cases, at human- or superhuman-level performance (RichardWebster et al. 2018). These technologies can be used safely and effectively in the appropriate context. For instance, computer vision can be used to determine which images out of a large collection are similar in overall appearance, identify specific objects within images, and match common objects across images. Concern over manually and automatically generated fake content has driven advances in media forensics—a field within computer science that borrows heavily from the fields of computer vision and machine learning. For a violence early warning system, we need a way to characterize the images such that (1) from an initial collection, they can be placed into distinct genres, (2) new images can be placed into known genres or new genres where appropriate, and (3) semantic understanding of the visual content can be extracted, so that threatening messages can be identified. Select techniques from media forensics give us a path forward for each of these requirements.
For establishing connections between images, image provenance analysis provides a powerful framework (Moreira et al. 2018). Work in image manipulation detection has shown that it is possible to estimate, through image processing and computer vision techniques, the types and parameters of transformations that have been applied to the content of individual images to obtain new versions of those images (Rocha et al. 2011). Given a large corpus of images and a query image (i.e., an image we would like to use to find other related images), a useful further step is to retrieve the set of original images whose content is present in the query image, as well as the detailed sequences of transformations that yield the query image given the original images. This is known as image provenance analysis in the media forensics literature. The entire process is performed in an automated unsupervised manner, requiring no human intervention. Such a process can be used to trace the evolution of memes and other content, which is a piece of what we need for the early warning system. This process can also be used for fact checking and authorship verification. In general, provenance analysis consists of an image retrieval step followed by a graph building step that provides a temporal ordering of the images. In place of the latter, we suggest that an image clustering step is more useful for early warning analysis. We will, in broad strokes, explain how each of these steps works below. In order to find related images, each must first be indexed based on features that describe the style and content of the images, but in a compact way that reduces the amount of space needed to store the data. Such a representation of the data can be generated using techniques from the area of content-based image retrieval, which addresses the problem of matching millions of images based on visual appearance. In our prototype early warning system, we build an index of all images based on local features, instead of the entire global appearance of the images. This strategy allows us to find matching images based on small localized objects that they share. For distinct meme genres of diverse visual appearance, finding just one small shared object could be the link for establishing a valid relationship between images. This gives our technique excellent recall abilities over large collections of images. In order to scale to millions of images, we make use of SURF features (Bay et al. 2008) that can be computed quickly and stored efficiently. The index is an Inverted File (IVF) index trained via Optimized Product Quantization (OPQ) (Ge et al. 2013).
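The following is a rough sketch of this indexing strategy using the FAISS library, assuming an OpenCV build that includes the patented SURF feature (opencv-contrib); the factory string and parameters are illustrative choices, not the prototype's actual configuration.

```python
import cv2    # requires opencv-contrib-python for the patented SURF feature
import faiss
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-d local descriptors

def surf_descriptors(path: str) -> np.ndarray:
    """Extract local SURF descriptors for one image (one float32 row per keypoint)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = surf.detectAndCompute(img, None)
    return desc.astype("float32") if desc is not None else np.empty((0, 64), "float32")

# Pool descriptors from the collection (paths illustrative); training an IVF
# index needs a reasonably large descriptor pool in practice.
paths = ["a.jpg", "b.jpg"]
all_desc = np.vstack([surf_descriptors(p) for p in paths])

# An Inverted File index trained via Optimized Product Quantization, as the
# text describes; "OPQ8_64,IVF256,PQ8" is an illustrative setting.
index = faiss.index_factory(64, "OPQ8_64,IVF256,PQ8")
index.train(all_desc)  # learn the OPQ rotation, coarse centroids, and codebooks
index.add(all_desc)    # add every local descriptor to the index

# Querying: each local descriptor of a query image retrieves nearest neighbors;
# matches on small shared objects can link otherwise dissimilar images.
distances, ids = index.search(surf_descriptors("query.jpg"), 5)
```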
After the index is built, it can be used to find related images through a querying process. In a manner similar to traditional image provenance analysis, query images are chosen and matched against the images in the index to return the closest matches (the number returned is a user-defined parameter). The choice of query images could be random (i.e., randomly sampled from all of the available images) or determined through the use of image manipulation detectors that can identify suspicious images. Our prototype system defines a scoring system that relies on the quality of matches between individual objects in images, based on the correspondence between the pre-calculated features in the index and the query. The matching process for image retrieval results in collections of ranked lists (Fig. 7.4). This is somewhat useful, but what is ideally needed here is a data clustering approach that depicts how each image is visually situated with respect to other related images. Further, each cluster should represent a distinct genre of content that is evident to the human
Fig. 7.4 The output of the provenance filtering process to find related images in a large collection for three different meme genres from Indonesia. Each row depicts the best matches to a query image (the left-most image in these examples) in sorted order, where images ideally share some aspect of visual appearance. Scores reflect the quality of matches between individual objects in images. At the very end of the sorted list, we expect the weakest match, and the very low scores reflect that. These ranks form the input to the clustering step, which presents a better arrangement for human analysis
observer. In a meme context, this means that memes that humans have labeled (e.g., "Action Hero Bolsonaro," "Misogynistic Modi") should be found in a single coherent cluster. With respect to other content, derivatives of an ad, for instance, should likewise be grouped together. Our prototype system uses a spectral clustering algorithm to produce the final output of the system (Yu and Shi 2003).
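A minimal sketch of this clustering step, assuming scikit-learn and a precomputed affinity matrix derived from the retrieval scores (the toy matrix below stands in for a real one spanning millions of images):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy symmetric affinity matrix built from retrieval match scores
# (higher = more visually related); real matrices are vastly larger.
similarity = np.array([
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
])

# assign_labels="discretize" follows the multiclass spectral clustering of
# Yu and Shi (2003) cited in the text; n_clusters is illustrative here.
clusterer = SpectralClustering(n_clusters=2, affinity="precomputed",
                               assign_labels="discretize", random_state=0)
labels = clusterer.fit_predict(similarity)
print(labels)  # e.g., [0 0 1 1]; each label marks a candidate content genre
```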
Fig. 7.5 Selections from three different content genres from the total pool of 7,691 discovered by the prototype system
An important question is: how well does this prototype early warning system work? We validated the system on the dataset of two million images from the 2019 Indonesian Presidential Election referenced above. This case study is particularly salient for this work, in that the results of the election led to violent episodes in Jakarta, some of which were stoked by content found on social media (BBC 2019). Most critically, our validation sought to verify that the prototype system was able to detect useful meme genres from millions of images, as well as verify that the images contained within a genre are meaningful to human observers. In total, the system discovered 7,691 content genres out of the pool of roughly two million images. Some examples are shown in Fig. 7.5. Each of these genres was checked by a human observer to assess visual coherence and to tag the genre with a label that described its content. Roughly 75% of the images were placed into human-interpretable clusters. Further, controlled human perception experiments were conducted to verify that the genres were not simply the product of random chance. These showed that not only were human observers adept at perceiving a pattern in most presented clusters, but also that a majority of the detected genres had a cohesive-enough theme to be identifiable even in the presence
of an impostor image. In other words, the AI system is performing accurate pattern recognition and organizing data at a scale once impossible for human analysts. In addition to the AI algorithms, we must also consider how to make the information such a system generates accessible to end users. It is very likely that most of the target users of such a system will not be comfortable interacting with the command line of a computer system. Thus a user interface (UI) layer must be developed and tested, along with an alert system capable of notifying the right people with the right information when an imminent threat begins to trend. It is conceivable that policymakers would require one set of information from this system, while civil society actors would require a different set, and reporters still another. Working closely with these distinct communities of users is a critical aspect of ensuring this system's utility as an early warning system. Major considerations remain for the design of a UI that is accessible to users who are not computer scientists. As can be seen in the figures included in this section, the current UI is minimalist by design and can be further developed. At this point in our development, the UI primarily consists of a web interface that presents a list of clusters to users, which can be selected and visualized (Fig. 7.6). We envision that the next phase of UI development will present the user not only with genres of memes, but also with automatically derived meta-data that describes those genres. Moreover, given automatically derived threat markers from the content (Do the scenes depicted suggest violence? What are the messages found in the text? Does an image share a relationship with content already known to be problematic?), genres of content can be triaged appropriately in order to present the user with the material that requires the most urgent attention.
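As an indication of how thin this layer currently is, a toy version of such a web interface could be written with Flask as below; the routes and the in-memory genre store are illustrative stand-ins, not the prototype's actual implementation.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory store: genre id -> image records from the clustering step.
GENRES = {"0": ["img_001.jpg", "img_007.jpg"], "1": ["img_003.jpg"]}

@app.route("/genres")
def list_genres():
    """List discovered genres so an analyst can choose one to inspect."""
    return jsonify({gid: len(images) for gid, images in GENRES.items()})

@app.route("/genres/<gid>")
def show_genre(gid):
    """Return the images in one genre for visualization in the browser."""
    return jsonify(GENRES.get(gid, []))

if __name__ == "__main__":
    app.run(debug=True)
```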
Limitations of Existing Technologies and a Research Roadmap
Much work still needs to be done so that the prototype system can meaningfully contribute to violence early warning and risk assessment at an operational level. A laundry list of additional features that are still in development includes better data source targeting, data fusion capabilities, the linking of text and image content, natural language understanding, disinformation identification, and the assessment of messaging over time. A good portion of these are related to semantic understanding (i.e., meaning-making)—one growth edge of artificial intelligence research.
Fig. 7.6 Screenshot of the web-based UI of the prototype system
The high-level goal of our in-process research and development is to design systems capable not merely of categorizing political memes into particular genres, or of identifying individual media artifacts that have been manipulated or may be entirely fake, but of understanding when these items might be indicative of larger trends toward political violence, or when they are being deployed in coordinated ways to exploit underlying tensions in contexts that are already primed for violence. As discussed above, current technologies allow for the identification of manipulated media objects in isolation. What has not yet been built are sufficiently robust AI systems that can identify trends occurring across multiple media modalities simultaneously, targeted to incite violence within contexts that have already been identified as volatile or high risk. Consider a scenario wherein a bad actor is deploying a disinformation campaign with the intent to incite violence. In a robust campaign,
we would expect to see thematically resonant media artifacts emerging across modalities in close temporal proximity to one another: a news article here, a photograph there, plus a few videos and some well-sculpted memes designed to solicit a response. Of course, these media entities would all be shared and liked and cross-referenced across multiple platforms and outlets. This is all that is needed for a rapidly spreading disinformation campaign to emerge. But the ability to identify a targeted campaign across these modalities and platforms as a single, coordinated campaign is critical if campaigns intended to incite violence—as well as their individual components—are going to be identified. This task begins by detecting that a piece of media has been manipulated or faked, but expands into the broader task of semantic analysis and campaign identification across media modalities. Accomplishing this requires mapping the source, flow, spread, and corroboration dynamics as an additional layer on top of the baseline identification of the media object's constituent parts. The initial assumption here is that manipulated media items are a key signal for our system to find and analyze. Once these manipulated media objects have been detected, the second layer of analysis is attribution. Here our main task is to discover traces of the technological process by which the objects were created, which may help us identify when particular collections derive from the same source. This may be possible through simple fingerprinting of media files, including EXIF and other meta-data embedded in photographs, temporal mapping to identify when particular content is first posted or shared, or even the use of distinct stylistic tendencies in written pieces. Techniques to establish the digital pipeline at the origin of deep fakes and AI-generated media also belong to this category. The third and final goal we aim for beyond detection and attribution is the design and development of characterization methods to help us understand why a concentrated effort to generate disinformation might be initiated. In this regard, we aim to develop realistic cognitive models to study the effect that media manipulations and the use of fake media have on users, their intentions (malicious, playful, political), and also their provoked emotions. The ultimate goal is to design an AI early warning system capable of monitoring both traditional and social media platforms for trending content that may be part of an influence campaign intended to incite violence.
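As one concrete illustration of the attribution layer, EXIF meta-data can be fingerprinted to group images that plausibly share a creation pipeline. The sketch below uses the Pillow library; the particular tags chosen are an illustrative selection, not a validated fingerprinting scheme.

```python
from collections import defaultdict
from PIL import Image, ExifTags

# Illustrative tag selection: camera/software fields that hint at a shared origin.
FINGERPRINT_TAGS = {"Make", "Model", "Software"}

def exif_fingerprint(path: str) -> tuple:
    """Build a coarse fingerprint from selected EXIF fields (empty if stripped)."""
    exif = Image.open(path).getexif()
    named = {ExifTags.TAGS.get(tag_id, str(tag_id)): value
             for tag_id, value in exif.items()}
    return tuple(sorted((k, str(named[k])) for k in FINGERPRINT_TAGS if k in named))

# Group a collection by fingerprint; identical non-empty fingerprints suggest
# that images may derive from the same device or editing pipeline.
groups = defaultdict(list)
for path in ["a.jpg", "b.jpg"]:  # paths illustrative
    groups[exif_fingerprint(path)].append(path)
```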
Accomplishing this threefold task of detection, attribution, and characterization across multiple media modalities simultaneously is even more daunting than what the prototype system currently accomplishes, given the sheer volume of content created every second online. It is very unlikely that a team of human analysts would be able to sufficiently identify and analyze coordinated disinformation campaigns in real time. The simple fact is that humans are incapable of performing this task across all relevant media modalities at scale and at the speed required to identify a campaign aiming to incite near-term violence. At an algorithmic level, meanwhile, greater computational resources and further optimization of the algorithms will be required. But in contrast to human limitations, computing continues to grow faster and cheaper, making this feasible in the near term.
Ethical Considerations
Our discussion in this section is limited to the narrow scope of AI designed to function as a violence early warning system such as we have outlined above. Within this narrowed scope, there is an important set of ethical and policy questions that deserves attention throughout the design process of this system. While the space constraints of this chapter prevent a thorough treatment of each area of concern, we first offer a set of four guiding principles for this system, followed by a set of ethical questions for reflection. The principles are:
Aim: The fundamental aim of the system we propose is to assist efforts to prevent or lessen violence against civilians. Use of this system should only contribute toward that aim.
Transparency: It is crucial that the structure and operation of the system be presented in a way that the general public can understand, so as to allay concerns over the use of AI in this context. Thus, the main aspects of data collection and analysis will be clearly presented and publicly available, although the complete software system will remain proprietary. As explained below, all data sources are public in origin. We do not rely on surveilling private communications.
Accessibility: We will prioritize accessibility for actors whose work has a demonstrable commitment to advancing human rights and protecting civilians. This may include human rights organizations and civil society actors, think tanks and research institutes, journalists, global and
regional governance institutions such as the United Nations, and those parts of the scholarly research community working on peace and conflict, among others. In some instances, this may include certain offices or departments in governments. A structured, transparent committee that includes members with expertise in computational and social sciences, as well as human rights research and practice, should make recommendations on who has access to the system, with a clearly articulated appeals process for those who are denied access upon initial application.
Independence: The principal investigators are committed to evidence-based research for the common good. Thus, we endorse independence of analysis and objective reporting of results.
These principles will help frame our responses to emergent ethical challenges. In addition to the above principles, several pressing questions have emerged during our early development of this system: Is it ethically problematic to have an AI system "listening" in on Internet communications? What safeguards should be required to prevent the possibility of false positives? Is it possible to prevent bad actors from "gaming" the system? Let us address each question in turn. The idea of an AI used to "listen" for trends—even trends that threaten mass violence—in online communications may seem ethically problematic. Many observers are wary of using AI to monitor digital information, especially private posts, for fear that it can be used to expand the surveillance powers of states or corporations. We share these concerns, but believe we can address these issues for the purposes of this specific project. It is important to emphasize that all of the media instances that our system ingests are publicly available. While other developers, corporations, and even national intelligence agencies may seek to develop ways to eavesdrop on private communications, our ingest system simply "scrapes" publicly available communications that can be seen by anyone. The simplest way to distinguish between these is to think about the difference between reading a user's posts on Reddit versus reading that same user's private text messages; we focus on the former types of sources. A legal gray area exists in collections taking place on WhatsApp (Wang 2018) and other messaging services where one must be invited into a group chat to collect information. Nevertheless, from our perspective, we have more than enough public material to sift. There is no need to dig into private
data, especially given the potential harm that can be done to users if they believe a conversation is happening in confidence. The problem of a false positive would only be a serious concern if we were proposing a system to be used in isolation rather than as one piece of a robust violence early warning and forecasting model. We do not believe that this system on its own should serve as a single "trigger" for intervention in particular contexts. Rather, we envision this AI early warning system as one facet of the broader early warning forecasting systems already employed. This system would work in tandem with those systems and provide more real-time, granular data about what is happening in particular high-risk contexts. The appropriate interventions would then be coordinated by the various stakeholders in a manner that is appropriate to their particular situation, context, and capacities. It would be prudent to also consider how such a system might be "gamed" for nefarious purposes by bad actors. Manipulation of such a system might range from the harmless hacker to coordinated manipulation attempts made by state actors. Examples of such manipulation of larger systems abound: In early 2020 a performance artist put approximately 100 cell phones into a hand-pulled wagon and walked around the streets of Berlin in order to provide false information to Google Maps. Everywhere he walked, Google began reporting a major traffic jam and rerouting traffic around it (Barrett 2020). Consider also the Internet phenomenon known as "swatting," where a hoax "tip" is provided in order to lead a SWAT team to an innocent and unsuspecting person's home (Ellis 2019). It is easy to see that a system designed to detect short-term onset of violence will undoubtedly be targeted by bad actors. A bot army could be deployed to make it seem like mass violence is about to erupt in a country, even when there is little to no real-world movement. While this is an important concern, we emphasize that this system is not meant to stand in isolation but rather is intended to be a component of a larger early warning system. Social media is a crucial battleground in contemporary conflicts, but it is not the only one; it is important to corroborate the findings of any single early warning indicator system with other types of evidence from other realms of conflict analysis.
Policy Implications

One further question is how the early warning information provided by this system should be employed. From a policy perspective, there are at least three broad ways in which the data and analysis provided by our AI early warning system might be used: response, shaming, and accountability.
The primary contribution of this system is to response: it can enhance the ability of conflict prevention practitioners to respond to escalating violence by providing informed, real-time evidence of an emerging threat based on trending campaigns and communications on social media. This is especially important where information is otherwise lacking or limited, such as in places journalists and human rights monitors cannot reach due to physical danger. One important qualification is needed: because this system is just one tool within the larger toolkit available to conflict prevention practitioners, it should not be dispositive on its own for substantially coercive responses, such as deploying peacekeepers. It will, however, provide more granular and real-time insights, and can thus empower conflict prevention practitioners and, where relevant, peacekeeping missions to react swiftly to defuse an incident as it unfolds. Similarly, the system allows election monitors to better gauge the fairness and legitimacy of an election by providing insights into any influence campaigns or intimidation tactics being deployed on social media.

Second, this system may be used to strengthen efforts to publicly shame nation-states that deny employing repressive policies, by showing publicly and in near real time how they are in fact endorsing or even committing violence against civilians. The aim of shaming campaigns—a key component of much human rights advocacy work—is to change perpetrator behavior by imposing reputational costs that may be transformed into other, more robust costs, such as economic sanctions. The process of publicly shaming a government before the international community has been important in Myanmar, where external monitoring of extremist social media has confirmed what the government has long denied: that there is a widespread campaign to remove and even destroy the Rohingya population. Publicizing this information has placed increased pressure on the government and its allies to lessen the extent of repressive practices (Human Rights Watch 2019). Even where individual actors are difficult to identify because of obfuscated digital content streams, our system’s ability to identify a focused campaign may help human rights advocates understand when vulnerable groups are being targeted and provide additional contextual information as events unfold, which can aid pressure efforts. To be sure, we do not contend that such public pressure can change policies completely, but shaming has been shown to have some effect in lessening certain forms of repressive behavior, especially where governments or armed challengers have previously agreed to follow existing human rights and humanitarian law (Hafner-Burton 2008).
Finally, such a system can contribute to future accountability measures. Holding perpetrators accountable after major episodes of violence is often exceedingly difficult, not only because there may be little political will to prosecute, but also because it can be hard to obtain evidence of culpability that meets legal prosecutorial thresholds. While the original authors of media artifacts are often difficult to identify, our system helps map campaigns intended to prime populations for violence against vulnerable groups. The additional evidence offered by our AI early warning system can enhance investigations into human rights abuses and war crimes, thus increasing accountability.

In all of these facets—response, shaming, and accountability—our AI early warning system strengthens, rather than replaces, existing conflict prevention policies, initiatives, and programs. We underscore that our purpose is not to cast aside the accumulated knowledge and expertise of peacebuilding and human rights specialists, but to assist their work by providing more fine-tuned analyses of dynamic and shifting conflict situations in near real time.
Conclusion

In this chapter we have described our initial work on an AI early warning system for violence. Much remains to be done to maximize the potential of AI technologies to help prevent large-scale violence in distinct conflict contexts, but we have demonstrated one aspect of the value that niche-specific, targeted AI systems can add in novel domains of application: new AI systems and technologies can make distinct contributions to the capacities and efficacy of experts in particular fields. Our collaboration of researchers in peace studies, genocide and conflict studies, and artificial intelligence will continue to push the limits of available technologies, even as it helps scholars of peace studies and human rights practitioners understand more about how online campaigns can spark and shape on-the-ground conflict realities.
Acknowledgements

This article is based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173 for the Media Forensics (MediFor) program. Support was also given by USAID under agreement number 7200AA18CA00054.
Notes

1. Scholars frequently distinguish between negative and positive peace, the latter involving the dismantlement of the broad social, political, and economic structures that systematically marginalize people, and the creation of conditions necessary for human flourishing (Galtung 1969). This is the ultimate goal of peacebuilding, of course, but our project focuses on the often pressing and immediate need to prevent and end overt episodes of mass political violence.
2. These include the US government’s Political Instability Task Force (PITF); the high-level US Atrocity Early Warning Taskforce; the United Nations (UN) Office on Genocide Prevention and the Responsibility to Protect; various regional efforts by international organizations like the European Union, African Union, and Organization of American States; and increasingly sophisticated early warning and watch lists by non-governmental organizations. See Verdeja (2016).
References

Azeem, Ibrahim. The Rohingyas: Inside Myanmar’s Genocide. London: Hurst, 2018.
Barrett, Brian. “An Artist Used 99 Phones to Fake a Google Maps Traffic Jam.” Wired. February 20, 2020. Accessed March 25, 2020. https://www.wired.com/story/99-phones-fake-google-maps-traffic-jam/.
Bay, Herbert, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. “Speeded-up Robust Features (SURF).” Computer Vision and Image Understanding vol. 110, no. 3 (2008): 346–359.
BBC. 2019. “Indonesia Post-Election Protests Leave Six Dead in Jakarta.” Accessed March 25, 2020. https://www.bbc.com/news/world-asia-48361782.
BBC. 2014. “Stephen Hawking Warns Artificial Intelligence Could End Mankind.” Accessed March 25, 2020. https://www.bbc.com/news/technology-30290540.
BSR. Human Rights Impact Assessment: Facebook in Myanmar. October 2018. Accessed March 24, 2020. https://about.fb.com/wp-content/uploads/2018/11/bsr-facebook-myanmar-hria_final.pdf.
Cekfakta. 2020. Accessed March 25, 2020. https://cekfakta.com/.
Chirot, Daniel and Clark McCauley. Why Not Kill Them All? The Logic and Prevention of Mass Political Murder. Princeton: Princeton University Press, 2010.
Clifford, Catherine. “Elon Musk: Mark My Words, AI Is More Dangerous Than Nukes.” CNBC. March 13, 2018. Accessed March 25, 2020. https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html.
Dawkins, Richard. The Selfish Gene: 40th Anniversary Edition. Oxford: Oxford University Press, 2016.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge: MIT Press, 1997.
Ellis, Emma Grey. “Swatting Is a Deadly Problem—Here’s the Solution.” Wired. August 8, 2019. Accessed March 25, 2020. https://www.wired.com/story/how-to-stop-swatting-before-it-happens-seattle/.
Eveleth, Rose. 2015. “How Many Photographs of You Are Out There in the World?” Accessed March 25, 2020. https://www.theatlantic.com/technology/archive/2015/11/how-many-photographs-of-you-are-out-there-in-the-world/413389/.
Fein, Helen. Human Rights and Wrongs. Boulder, CO: Paradigm Publishers, 2007.
Forstall, Christopher W., and Walter J. Scheirer. Quantitative Intertextuality. Cham: Springer, 2019.
Galtung, Johan. “Violence, Peace, and Peace Research.” Journal of Peace Research vol. 6, no. 3 (1969): 167–191.
Ge, Tiezheng, Kaiming He, Qifa Ke, and Jian Sun. “Optimized Product Quantization for Approximate Nearest Neighbor Search.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2946–2953, 2013.
Goldsmith, Benjamin, Charles Butcher, Arcot Sowmya, and Dimitri Semenovich. “Forecasting the Onset of Genocide and Politicide: Annual Out-of-sample Forecasts on a Global Dataset, 1988–2003.” Journal of Peace Research vol. 50, no. 4 (2013): 437–452.
Goldstone, Jack, Robert H. Bates, David L. Epstein, Ted Robert Gurr, Michael B. Lustick, Monty G. Marshall, Jay Ulfelder, and Mark Woodward. “A Global Model for Forecasting Political Instability.” American Journal of Political Science vol. 54, no. 1 (2010): 190–208.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge, MA: MIT Press, 2016.
Hafner-Burton, Emilie. “Sticks and Stones: Naming and Shaming the Human Rights Enforcement Problem.” International Organization vol. 62, no. 4 (2008): 689–716.
Harff, Barbara. “No Lessons Learned from the Holocaust? Assessing Risks of Genocide and Political Mass Murder Since 1955.” American Political Science Review vol. 97, no. 1 (2003): 57–73.
Heldt, Birger. “Mass Atrocities Early Warning Systems: Data Gathering, Data Verification, and Other Challenges.” Guiding Principles of the Emerging Architecture Aiming at the Prevention of Genocide, War Crimes, and Crimes Against Humanity, 2012. http://dx.doi.org/10.2139/ssrn.2028534. Accessed March 25, 2020.
Human Rights Watch. 2018. “India: Events of 2018.” Accessed March 25, 2020. https://www.hrw.org/world-report/2019/country-chapters/india.
Human Rights Watch. 2019. “World Report 2018: Myanmar.” Accessed March 25, 2020. https://www.hrw.org/world-report/2019/country-chapters/burma.
Internet Society. 2019. “Consolidation in the Internet Economy.” Accessed March 25, 2020. https://future.internetsociety.org/2019/consolidation-in-the-internet-economy/.
Kiernan, Ben. “Twentieth-Century Genocides: Underlying Ideological Themes from Armenia to East Timor.” In The Specter of Genocide: Mass Murder in Historical Perspective. Edited by Robert Gellately and Ben Kiernan. Cambridge: Cambridge University Press, 2003.
Know Your Meme. 2020. “Occupy Wall Street.” Accessed March 25, 2020. https://knowyourmeme.com/memes/events/occupy-wall-street.
Koonz, Claudia. The Nazi Conscience. Cambridge, MA: Harvard University Press, 2003.
Kristeva, Julia. “Word, Dialogue and Novel.” The Kristeva Reader. Edited by Toril Moi. Oxford: Basil Blackwell, 1986.
Leach, Colin Wayne, and Aerielle M. Allen. “The Social Psychology of the Black Lives Matter Meme and Movement.” Current Directions in Psychological Science vol. 26, no. 6 (2017): 543–547.
Lupel, Adam and Ernesto Verdeja. “Developing the Political Will to Respond.” In Responding to Genocide: The Politics of International Action. Edited by Adam Lupel and Ernesto Verdeja. Boulder, CO: Lynne Rienner, 2013. pp. 241–257.
Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. New York: Pantheon, 2019.
Midlarsky, Manus. The Killing Trap: Genocide in the Twentieth Century. Cambridge: Cambridge University Press, 2005.
Moreira, Daniel, Aparna Bharati, Joel Brogan, Allan Pinto, Michael Parowski, Kevin W. Bowyer, Patrick J. Flynn, Anderson Rocha, and Walter J. Scheirer. “Image Provenance Analysis at Scale.” IEEE Transactions on Image Processing vol. 27, no. 12 (2018): 6109–6123.
NIST. 2018. “2018 Medifor Challenge.” Accessed March 25, 2020. https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=928264.
Poundstone, William. Prisoner’s Dilemma: John Von Neumann, Game Theory and the Puzzle of the Bomb. New York: Anchor Books, 1992.
RichardWebster, Brandon, So Yon Kwon, Christopher Clarizio, Samuel E. Anthony, and Walter J. Scheirer. “Visual Psychophysics for Making Face Recognition Algorithms More Explainable.” In Proceedings of the European Conference on Computer Vision (ECCV), pp. 252–270, 2018.
Robinson, Geoffrey. The Killing Season: A History of the Indonesian Massacres, 1965–66. Princeton: Princeton University Press, 2018.
Rocha, Anderson, Walter Scheirer, Terrance Boult, and Siome Goldenstein. “Vision of the Unseen: Current Trends and Challenges in Digital Image and Video Forensics.” ACM Computing Surveys (CSUR) vol. 43, no. 4 (2011): 1–42.
Roff, Heather M. “The Frame Problem: The AI ‘Arms Race’ Isn’t One.” Bulletin of the Atomic Scientists vol. 75, no. 3 (2019): 95–98. https://doi.org/10.1080/00963402.2019.1604836.
Santoro, Maurício. 2019. “The Brutal Politics of Brazil’s Drug War.” Accessed March 25, 2020. https://www.nytimes.com/2019/10/28/opinion/brazil-war-on-poor.html.
Secretary General of the United Nations. Early Warning Systems. New York: United Nations, 2006.
Sémelin, Jacques. Purify and Destroy: The Political Uses of Massacre and Genocide. London: Hurst & Company, 2005.
Shifman, Limor. Memes in Digital Culture. Cambridge: MIT Press, 2014.
Stella X. Yu, and Jianbo Shi. “Multiclass Spectral Clustering.” In Proceedings of the International Conference on Computer Vision (ICCV), p. 313. 2003.
Stewart, Frances. “The Causes of Civil War and Genocide: A Comparison.” In Responding to Genocide: The Politics of International Action. Edited by Adam Lupel and Ernesto Verdeja. Boulder, CO: Lynne Rienner, 2013. pp. 47–84.
Suhartono, Muktita and Daniel Victor. “Violence Erupts in Indonesia’s Capital in Wake of Presidential Election Results.” New York Times, May 22, 2019. Accessed March 30, 2020. https://www.nytimes.com/2019/05/22/world/asia/indonesia-election-riots.html.
Theisen, William, Joel Brogan, Pamela Bilo Thomas, Daniel Moreira, Pascal Phoa, Tim Weninger, and Walter Scheirer. “Automatic Discovery of Political Meme Genres with Diverse Appearances.” arXiv preprint arXiv:2001.06122 (2020).
Twitter. 2020. “Developer Documentation.” Accessed March 25, 2020. https://developer.twitter.com/en/docs.
Verdeja, Ernesto. “Predicting Genocide and Mass Atrocities.” Genocide Studies and Prevention vol. 9, no. 3 (2016). http://dx.doi.org/10.5038/1911-9933.9.3.1314.
Waller, James. Confronting Evil: Engaging Our Responsibility to Protect. Oxford: Oxford University Press, 2016.
Wang, Shan. “WhatsApp Is a Black Box of Viral Misinformation—But in Brazil, 24 Newsrooms are Teaming Up to Fact-Check It.” Nieman Lab. August 6, 2018. Accessed March 25, 2020. https://www.niemanlab.org/2018/08/whatsapp-is-a-black-box-of-viral-misinformation-but-in-brazil-24-newsrooms-are-teaming-up-to-fact-check-it/.
Weiss, Thomas G. What’s Wrong with the United Nations and How to Fix It. London: Polity, 2016.
Weitz, Eric D. A Century of Genocide: Utopias of Race and Nation. Princeton: Princeton University Press, 2003.
Yankoski, Michael, Tim Weninger, and Walter Scheirer. “An AI Early Warning System to Monitor Online Disinformation, Stop Violence, and Protect Elections.” Bulletin of the Atomic Scientists (2020): 1–6. https://thebulletin.org/2020/03/an-ai-early-warning-system-to-monitor-online-disinformation-stop-violence-and-protect-elections/.
York, Jillian. 2012. “Middle East Memes, a Guide.” Accessed March 25, 2020. https://www.theguardian.com/commentisfree/2012/apr/20/middle-east-memes-guide.
CHAPTER 8
Between Scylla and Charybdis: The Threat of Democratized Artificial Intelligence

Ori Swed and Kerry Chávez
Introduction

Droids with luminous white carapaces stand at attention in droves. A rogue robot shifts its gaze out of turn, foreshadowing the impending uprising. Fast-forward a few scenes and they are pulsing red, crawling like arachnids across their mainframe “brain” to destroy the rest of humanity standing in their way (I, Robot 2004). At least this is the mental image many conjure when artificial intelligence (AI) is mentioned. Perhaps one’s personal Platonic form is the Terminator, with chinks in his artificial skin exposing the metal understructure and glowing eye (The Terminator 1984). Alternatively, it is the alluring Ava, clad in cybernetic chainmail until she disguises it after outsmarting her maker (Ex Machina 2014). As Thiele also observes in this collection, these analogies embody fears of technology, indicative of alarmists who foresee a technological singularity in which AI outstrips human intelligence (Szollosy 2017).
O. Swed (B) Department of Sociology, Texas Tech University, Lubbock, TX, USA e-mail: [email protected] K. Chávez Department of Political Science, Texas Tech University, Lubbock, TX, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Keskin and R. D. Kiggins (eds.), Towards an International Political Economy of Artificial Intelligence, International Political Economy Series, https://doi.org/10.1007/978-3-030-74420-5_8
Their focus on locating the event horizon, where tools designed to bring a better future turn against us, precludes the ability to identify tangible, relevant threats. This is not exclusive to filmography. In popular science, the eminent Elon Musk identifies AI as humanity’s greatest existential threat, warning that “we do not have long to act. Once this Pandora’s box is opened, it will be hard to close” (Hern 2017). More scientifically, scholars warn of cyber insecurities arising from AI in settings ranging from the workplace (Huang and Rust 2018; McClure 2018) to health care (Sparrow 2016; Pepito and Locsin 2019), the home (Heartfield et al. 2018), and transportation (Ionita 2017). The misconceptions, speculations, and fear that inflate doomsday scripts represent one common approach to AI.

The other common response is underestimation of AI’s implications, a phenomenon known as cultural lag: the deliberate pace of social and institutional adaptation falls out of sync with the rapid pace of technological innovation. Ogburn (1922) developed the theory of cultural lag to explain periods of maladjustment between parts of modern culture that change at variable rates. Dividing culture into material and non-material elements, he notes that material advancements tend to arrive first as a function of discovery or invention; the aspects of society correlated with that material progress are compelled to adjust in the aftermath. Depending on the nature of the change, the heterogeneity of response, and the inertia of the status quo,1 periods of maladjustment (or lag) can be protracted and tumultuous. The compounding rate of change can further complicate an otherwise timely transition. Brinkman and Brinkman (1997, 3) observe that “cultures in an advanced stage of economic development…tend to experience an exponential accumulation of material culture given the dynamics of a science-fed technology.” The pace and reach of AI’s expansion exemplifies the problem of cultural lag, especially on legal and analytical fronts (Tschider 2018; O’Sullivan et al. 2019). Lacking a proper understanding of the scope, context, and content of the AI applications penetrating several industries, legislation and regulation struggle to keep pace, much less anticipate problems. Cultural dissonance may be uncomfortable, but the consequences of security oversights are more dire.

Steering between the Scylla of doomsday scripts and the Charybdis of cultural lag, we focus on the particular threat of violent nonstate actors (VNSAs) exploiting AI to commit terrorist attacks, a topic that Agarwala and Chaudhary treat in their extensive survey of AI’s impact on international security in this volume.
This chapter draws on the established literature on the threat of nuclear terrorism (Bunn et al. 2016) to explore and contextualize the threat of high-end AI technology in the wrong hands. We posit that VNSA innovation is determined by the confluence of advanced and democratized technologies. The former feature mitigates the necessity of complex engineering efforts and lowers the technical capacity threshold; the latter introduces the dual-use dilemma, removing the barriers to access that regulation places around other technologies. Using these two elements as predictive crosshairs, we shift attention from fantastical fears to grounded scenarios. We investigate practical and salient threats in order to aid practitioners (legal scholars, legislators, security experts) in responsibly actualizing AI and in balancing the dual pressures of Scylla and Charybdis. We analyze three potential dangers becoming available to VNSAs as AI evolves, all based on broadly available technology: self-driving cars, Internet bots, and 3D printing. Our study contributes a sensible and shrewd approach to threat analysis in the dynamic world of AI development.
Theory: Advanced and Democratized

As a general rule, VNSAs are weak and resource-constrained relative to states. This compels them to adopt unconventional strategies that deflect brute force, attrite the enemy’s political will, and control populations through fear or appeal (Mack 1975; Arreguín-Toft 2001). It also puts a premium on innovation, as nonstate groups aim to offset asymmetries in capability or effectiveness in any way possible. The criteria determining whether a VNSA adopts an innovation are desire, capacity, and capability (Horowitz 2010). Desire is straightforward. Capacity refers to the technical competence to master and maintain a given platform. Ackerman (2016) traces the determinants of terrorist groups’ engagement in complex engineering efforts, arguing that emerging technologies are making such efforts easier, less costly, and safer. Where before many components, steps, or applications were complex and manually executed, technological progress is producing packages that self-integrate, calibrate, and harmonize. This lowers the technical capacity required to adopt increasingly advanced approaches to political violence. Though the front end of engineering AI poses a capacity problem for VNSAs, and even for many states, once AI is embedded in commonplace devices it becomes operable and exploitable.
To use Alexander’s approach in this collection, AI can augment terrorists’ technique in executing their violent agendas. We focus on this type of AI, integrated in accessible, ordinary technologies. As an innovation venue for VNSAs, AI is a game-changer: it infuses not only an unprecedented level of sophistication into myriad applications, but a stunning pace of development and (self-)learning. In addition to decreasing the entry capacity threshold for VNSAs to utilize advanced tech, it will enable ongoing innovation as it optimizes.

While advanced technologies can be expected to enhance a group’s fighting capacity, not all are equally affordable, available, or navigable. Capability, the third criterion for VNSA adoption, refers to the logistical ability to procure the materials, personnel, and knowledge for an innovation (Horowitz 2010). Advanced technologies that remain cost-prohibitive and/or highly regulated are unlikely to systematically proliferate to resource-constrained violent groups. Indeed, despite the considerable attention paid to the danger of terrorists deploying weapons of mass destruction (WMDs), it has not come to fruition: WMDs are both expensive and tightly controlled, constraining VNSAs from enfolding them in their arsenals. Even capital-rich nations with WMD capabilities invest substantial resources and harness vast knowledge merely to maintain them (Schwartz 1997; Rosner and Eden 2019). We therefore focus on more mundane applications of AI as they are assimilated into widespread, day-to-day functions.

We argue that technologies must be democratized, or diffusely and easily available to the average person, in order to be feasibly adopted. First, to be operable by the average user, advanced hardware or software must become internally “smart” enough that, despite design complexity, its interface is simple. Second, for technologies that can serve both benign and malevolent agendas, democratization problematizes regulation. The dual-use dilemma, or the difficulty of curtailing positive and neutral innovations for fear that they will be misused, makes it easier for VNSAs to obtain and retool technologies for their violent agendas (Rath et al. 2014; Rychnovská 2016; Schulzke 2019). Drawing on work with ten insurgent groups in the 1980s, Hammes (2019) affirms that such groups are biased toward democratized tech for reasons of familiarity, confidence, and the widespread availability of maintenance.

We assert that it is the confluence of advanced and democratized technologies that most appeals, and is most amenable, to VNSAs. These technologies provide significant augmentation, making them desirable in the context of asymmetry and resource constraints (Wallace and Reeves 2013). Their innate “smartness” lowers the technical capacity required to field sophisticated platforms.
Finally, they are widely available and affordable, bypassing regulation and cost prohibitions. To illustrate this argument, the next section discusses three confirming cases of VNSA misuse of advanced and democratized technologies that validate our theoretical framework: mapping technologies, social media, and civilian drones. We then apply the framework to the field of AI to identify predictive cases, exploring the probability that VNSAs will exploit self-driving cars, Internet bots, and 3D printing as AI stabilizes in these areas.
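Because the argument reduces to a conjunction of criteria, it can be stated almost mechanically. The toy formalization below encodes desire, capacity, and capability (following Horowitz 2010) as boolean tests; the attribute names and the example classifications are a schematic reading of the argument, not measurements from the chapter.

```python
# A toy formalization of the adoption framework (desire, capacity,
# capability); the attributes and cutoffs are schematic assumptions.
from dataclasses import dataclass

@dataclass
class Technology:
    advanced: bool      # "smart" enough that the interface hides complexity
    democratized: bool  # diffusely available and affordable to average users
    regulated: bool     # tightly controlled, as with WMD precursors

def adoption_plausible(tech: Technology, group_desires: bool) -> bool:
    """A VNSA plausibly adopts a platform only when all three criteria hold."""
    capacity_ok = tech.advanced                       # complexity handled internally
    capability_ok = tech.democratized and not tech.regulated
    return group_desires and capacity_ok and capability_ok

cases = {
    "consumer_mapping": Technology(advanced=True, democratized=True, regulated=False),
    "wmd": Technology(advanced=True, democratized=False, regulated=True),
}
for name, tech in cases.items():
    print(name, adoption_plausible(tech, group_desires=True))  # True, then False
```

The confirming cases that follow all sit in the first category: advanced, democratized, and effectively unregulated.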
Confirming Cases

Google Maps

Many take for granted the convenience of real-time map guidance at the tap of a button. Originally developed for advanced militaries, modern mapping utilizes Global Positioning System (GPS) infrastructure to pinpoint locations and routing options. Advances in the proliferation and precision of satellite technologies have extended platforms like Google Maps into Google Earth, which provides detailed aerial imagery of the entire planet. High-end technology once exclusive to superpowers with national satellite arrays became available to everyone: it is now open-source and free to anyone with a device and even intermittent Internet access. This includes VNSAs.

Shortly after Google Maps launched in 2005, violent groups began to exploit the new capabilities. Following raids on insurgent homes in Iraq in 2007, British troops uncovered evidence that terrorists were using satellite imagery from Google Earth to improve targeting in assaults on allied bases in Basra (Harding 2007). Terrorists used Google Earth to navigate locations in an attack on south Mumbai (Ribeiro 2008). Al-Qaeda is known to use Google Maps to plan attacks across the Middle East (Kredo 2018). In the wake of a plot, which relied heavily on satellite imagery, to explode jet fuel tanks and a pipeline at JFK International Airport, a New York legislator urged Google to blur potential targets. By this point, he was merely “adding to the chorus of critics who say detailed images on Google Earth can aid and abet terrorists and snoops” (Goodin 2007). Google excused itself on the grounds that its data are available on other platforms or can be purchased from commercial entities, highlighting the dual-use problems of regulation.2
Spain’s Centre Against Terrorism and Organised Crime [sic] identifies Google Maps as one of three primary vulnerabilities in aviation security, noting that maps enable terrorists to do a significant portion of their planning with accuracy, minimal exposure or opportunity for interception, and low-cost tools (Flood 2016). Waters (2018) encourages even strong states to tap open-source maps when other reliable intelligence is lacking, spotlighting the value of these tools for VNSAs that lack intelligence apparatuses of their own. Being advanced and openly available, mapping services have long been, and will likely continue to be, a regular go-to in the terrorist toolkit.

Social Media

Social media is meant to turn distance into digital nearness, reduce the costs of interaction, and expand social networks. Largely the domain of college students when it debuted in the early 2000s, it has exploded into a pervasive social force. As with open-source mapping, all VNSAs need to expand their reach and efficacy of influence is Internet access and a simple computing device. Traditionally, violent groups relied on mass media for attention to and distribution of their messages and wider agenda. Social media eliminates this dependency, allowing VNSAs to initiate, craft, and disseminate information at will (Klausen 2015; Melki and Jabado 2016). The breadth and variety of platforms—Facebook, Instagram, Twitter, YouTube, WhatsApp, Ask.fm, Kik, Viber, Tumblr, blogs, dedicated Web sites, even online gaming portals—enable access to audiences of massively greater size and diversity (Conway 2012; Huey 2015; Klausen 2015). Suitable for communication, planning, propaganda, fundraising, and recruiting, social media delivers versatile utility at low cost and with easy access (Dean et al. 2012; Awan 2017). The Islamic State in Iraq and the Levant (ISIL), for instance, heavily utilized social media platforms to build its brand. Producing visual spectacles of drama and horror, the group captured attention and spread fear to millions all over the world. Though ISIL is exemplary in its recruitment and radicalization through social media, other VNSAs also engage in “electronic Jihad” (Bloom 2017; Rudner 2016). Of particular popularity, live-stream functionality allows violent groups to broadcast attacks in real time, an exploit of unrestrained and adrenal exposure (Mair 2017). At the same time, the decentralized and horizontal structure of these platforms makes policing difficult (Markon 2016; Melki and Jabado 2016). By virtue of being advanced, user-friendly, and globally democratized, social media has become a primary radicalizing milieu for VNSAs (Huey 2015).
Civilian Drones

Industry and intelligence officials have long warned of the dangers of commercial off-the-shelf drones, alongside public reports of their use for narco-terrorism, illicit reconnaissance, and violent attacks. Given their low cost, flexibility, and increasing sophistication driven by commercial competition and demand, they constitute an ideal platform for militant groups (Finisterre and Sen 2016). As early as 2005, analysts recognized that simple drones provide VNSAs with advanced, force-multiplying capabilities (Mandelbaum et al. 2005). Civilian drone technology has only advanced since, becoming increasingly stable and efficient (Friese et al. 2016; Rassler 2016). In line with our theory, these developments lower the technical capacity needed to field drone programs. For example, Libyan fighters in the 2011 march on Tripoli legally procured a user-friendly minidrone with night vision from Aeryon Labs. The company’s Vice President of Business Development remarked that “the rebels barely needed a day of training to use a technology that many national armies would love to acquire. We like to joke that it’s designed for people who are not that bright, have fat fingers, and break things” (Ackerman 2011). Notably, it is democratized drones that VNSAs exploit, not high-cost, high-tech, and highly regulated military-grade models. Civilian models are broadly accessible and affordable; in fact, their sophistication is increasing at the same rate that costs are decreasing, making drones ever more desirable for resource-constrained actors (Ball 2017). The consensus in academic and policy circles is that the nature of the militant drone threat will track commercial advancements (Rassler 2016; Barsade and Horowitz 2017). This confirming case segues naturally into the predictive cases suffused with AI, because analysts foresee drones attaining full autonomy, networked swarming, and emergent artificial intelligence in the near future (Lachow 2017).
Predictive Cases

Artificial intelligence, in its simulation and augmentation of human intelligence, radically simplifies the use of advanced technologies: essentially, it does the complex portions of the work for human operators. We focus on AI technology that brings computational power and self-learning into commonplace products. Layering AI onto the theory that VNSAs will exploit advanced and democratized technologies only exacerbates the draw, and the danger, of the newfound capacities.
The three predictive cases share the characteristics of the confirming cases, being advanced and either already democratized or predicted to become so, in a dual-use dilemma format. They differ in the timelines of their debut on civilian markets and in the degree to which AI can amplify VNSA agendas through them.

Self-driving Cars

Even a skeptic would likely acknowledge the benefits of autonomous vehicles (AVs): intrinsic adherence to traffic laws, dramatic decreases in accidents (most are caused by human error and judgment), and improved mobility for those unable to drive (Grigorescu et al. 2018; Miller 2018; Rao and Frtunikj 2018). The technology encompasses several AI components, including camera-based machine vision systems, radar-based detection units, driver-condition evaluation, and sensor-fusion engine control units, among others (Gadam 2018; O’Flaherty 2018). Unlike GPS and social media, which began as exclusive products for exclusive markets, AVs are being engineered from the outset for use by the general population. As development drives forward, the prospect of misuse by VNSAs lingers in the back of many minds. In 2015, a research team hacked the Lidar system of a self-driving prototype using a simple laser pointer, warning that cars can be fooled into slowing down or stopping to avoid collisions with phantom obstacles or “ghost pedestrians.” This followed a 2013 report that other researchers had infected AVs with computer viruses, causing them to crash by killing the lights and engines and slamming on the brakes (Curtis 2015). Separately, a senior scientist at the RAND Corporation testifies that hacking, especially as an avenue for terrorism, constitutes a significant threat in AVs riddled with cybernetics. She argues that AVs decrease the costs of terrorism, since groups could commit bombing attacks without sending allegiants to their deaths in suicide attacks (Ravindranath 2017). Australian law enforcement authorities made news by warning that self-driving cars can be weaponized for terrorism in even simpler ways requiring no hacking or cyber capacity: a rented AV could be packed with explosives, remotely driven to a target site, and detonated from a safe location (Sekhose 2016). Cultural lag comes to the forefront in this technology because the danger of weaponization has nothing to do with any particular government, but with the nature of the technology itself (Bradshaw 2018). Because the technology advances faster than legislators, regulators, and law enforcement can keep up, it will be increasingly difficult to police VNSA uses as it matures and goes global.
Internet Bots

Social bots are automated social media accounts that algorithmically emulate online human activity well enough to resist detection (Bessi and Ferrara 2016). They are distinct from the Internet bots that infect computers with malware for remote control: in computer parlance, we speak of sybils (Ferrara et al. 2016), not zombies (Lee et al. 2008). The bedrock of bots is their AI (Shawar and Atwell 2005; Gentsch 2018). To be effective and convincing, they rely on machine learning to adapt to and emulate online human behavior. Their purpose is to drive online traffic and manipulate political discourse, opinion, and behavior (McKelvey and Dubois 2017; Santini et al. 2018). The direction depends on the algorithm; thus, the question behind bots is who the bot masters are. Lin and Kerr (2019) underscore their potency as political instruments, emphasizing influence warfare and manipulation by users ranging from individuals to political parties to state apparatuses, not to mention terrorist groups. Recognizing the breadth of organizations running Internet bot campaigns, the Defense Advanced Research Projects Agency (DARPA) held a bot-detection competition, in which participating teams reported that 8.5% of all Twitter users are bots (Subrahmanian et al. 2016). Bessi and Ferrara (2016) determined that one-fifth of the entire Twitter conversation about the candidates in the 2016 US presidential election was generated by bots.

Lest critics object that coding such algorithms demands too much capacity of VNSAs, social bots have become sufficiently advanced and democratized for the average person to use. Certain blogs even offer ready-made tools and tutorials for customizing a bot with no previous coding skills or experience (Bessi and Ferrara 2016). This applies even to current-generation bots that convincingly mimic human behavior: they can search the web for information to populate profiles, emulate human circadian rhythms and temporal spikes in content generation and consumption, and engage in complex interactions such as entertaining conversations, commenting, and answering questions (Ferrara et al. 2016; Arnaudo 2017; Marr 2018). Given the already profound capacity social media has bestowed on VNSAs, intensifying it with AI agents that outperform human productivity will further advance their goals. Experts worry that this inexpensive, versatile, and evolving niche of AI might “shape or reshape communities on a very large scale, in what we might call ‘social architecting’” (Hwang et al. 2012, 45). Imagine if VNSAs are the architects.
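One behavioral signature mentioned above, circadian mimicry, also suggests how detection works and why it is hard. The sketch below scores an account by how far its posting-time profile departs from a human-like daily rhythm; the template and the use of total-variation distance are illustrative assumptions, and fielded detectors (such as those in the DARPA challenge) combine many features precisely because a sophisticated bot can emulate any single one.

```python
# Illustrative single-feature bot heuristic; the circadian template below is
# a hypothetical assumption, not an empirical human profile.
def hourly_histogram(post_hours):
    """Normalized distribution of an account's posts over 24 hours."""
    counts = [0] * 24
    for h in post_hours:
        counts[h % 24] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def bot_likeness(post_hours, human_template):
    """Total-variation distance from a human-like circadian template.
    Round-the-clock uniform posting scores high; a bot that emulates the
    template scores near zero, evading this single feature."""
    hist = hourly_histogram(post_hours)
    return 0.5 * sum(abs(p - q) for p, q in zip(hist, human_template))

# Hypothetical template: overnight trough, midday and evening peaks.
HUMAN_TEMPLATE = [0.01] * 7 + [0.05] * 5 + [0.07] * 6 + [0.05] * 5 + [0.01]

round_the_clock = list(range(24)) * 10  # uniform, naive-bot schedule
print(bot_likeness(round_the_clock, HUMAN_TEMPLATE))  # ~0.25, far from human
```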
3D Printing

Additive manufacturing, informally called 3D printing, converts a three-dimensional computer model into a solid object. Unlike the previous predictive cases, AI is not native to 3D printing; the prospect of adding it to optimize processes and outcomes, however, is already trending in the industry (Lin et al. 2015). Current models are compact, affordable, and rapidly advancing. Theoretically, one could print an infinite number of things to an infinite degree of customization (Blackman 2014). For the most part, the 3D printing industry explores new frontiers in manufacturing food, medical innovations, vehicles, and other constructive products. Nonetheless, it is equally able to create destructive technologies. The chief fear to date is the prospect of criminals printing guns at will (Little 2014; Walther 2015), but guns are only the beginning of a more imaginative threat analysis. 3D printers can be used to manufacture landmines, improvised explosive devices, small drones, bomblets, and munitions; just as importantly, they can print components for modifications and repairs that maximize a resource-constrained arsenal. AI moves these capabilities forward by leaps and bounds, yielding advanced additive manufacturing. First, it increases the speed, scale, and quality of printing by applying deep learning to recognize patterns, detect defects, and improve prefabrication iterations (McCaney 2018; Bharadwaj 2019; Hammes 2019). Second, it improves the design and selection of material properties suitable for a given printing project, serving as an engineering shortcut. LePain (2018) explains that “designers simply enter the desired properties into a program and algorithms predict which chemical building blocks can be combined at a micro level to create a structure with the desired functions and properties.” Third, it handles the volume and complexity occurring under the surface of a printing project, such that an inexperienced user can generate accurate and optimized products with little trial and error (Bharadwaj 2019). Finally, emerging applications of AI in 3D printing promise to construct computer models from 2D images, bypassing tedious engineering and coding processes and thereby significantly lowering the technical capacity required.
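The “engineering shortcut” LePain describes can be illustrated with a toy parameter search: desired properties go in, predicted-feasible print settings come out. The surrogate property model and the parameter grid below are invented placeholders rather than real printer physics, but they capture how AI-style optimization spares the user trial and error.

```python
# Toy property-driven print-parameter search; the surrogate model and
# grid values are invented placeholders, not real materials data.
from itertools import product

def predicted_strength(layer_height_mm, infill_pct):
    """Stand-in surrogate model: denser infill and thinner layers yield
    higher predicted tensile strength (arbitrary units)."""
    return 40 * (infill_pct / 100) + 10 * (0.3 - layer_height_mm) / 0.3

def recommend(target_strength):
    """Return the fastest-printing settings predicted to meet the target,
    mimicking 'enter desired properties, get a design back'."""
    candidates = product([0.1, 0.2, 0.3], [20, 50, 80])  # layer heights, infill %
    feasible = [(h, fill) for h, fill in candidates
                if predicted_strength(h, fill) >= target_strength]
    # Thicker layers and sparser infill print faster, so prefer both.
    return max(feasible, key=lambda hf: (hf[0], -hf[1]), default=None)

print(recommend(target_strength=30))  # -> (0.3, 80) under this toy model
```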
Conclusion

The question of how new technologies problematize security is not new. One policymaker remarked that “every major technology—metallurgy, explosives, internal combustion, aviation, electronics, nuclear energy—has been intensively exploited not only for peaceful purposes but also for hostile ones” (Walther 2015). Still, AI bears a magnitude that merits pause. It is undoubtedly a major technology, and one on the cusp of cascading into most facets of society. We set out to execute a sensible and shrewd threat analysis at the intersection of AI and international security. On one side, we have endeavored to avoid sensationalist speculations shallowly rooted in context; on the other, we acknowledge the inertia of cultural lag constraining timely scholarship and policy prescription. To strike a balance, we have asserted a theoretical framework that brackets the elements of innovation: desire, capacity, and capability. We have argued that VNSAs are most likely to adopt platforms that are both advanced, being sophisticated enough to lower the technical capacity required yet potent, and democratized, being commonplace and affordable for resource-constrained actors. We demonstrated the logic with three confirming cases: mapping technologies, social media, and civilian drones. We then allowed in a measure of imagination to predict likely VNSA exploitations of AI in the near future.

Going forward, we recognize that threat identification is only the first step. Responsible rollouts of AI will need to consider the right governance approach, apportioned among engineers at the front end, legislators and regulators in the middle, and law enforcement professionals at the end. Given the immeasurable potential and utility incubating in AI applications, its advancement is inevitable. What remains is mindful and timely treatment of externalities, unintended consequences, and misuses. May this be one of them, and may others follow to guide stakeholders in AI’s unfolding.
Notes

1. Cultural inertia can derive from social habits, the binding power of tradition, vested interests, bureaucratic obstinance, and even fear-based conformity to the past (Brinkman and Brinkman 1997).
2. For instance, Planet sells sub-meter-resolution images online, taken daily by cube satellites covering the entire globe. SpyMeSat is a mobile app that offers on-demand access to high-resolution satellite imagery (Hammes 2019).
References

Ackerman, Gary A. 2016. “‘Designing Danger’: Complex Engineering by Violent Non-state Actors.” Journal of Strategic Security 9, no. 1: 1–11. https://doi.org/10.5038/1944-0472.9.1.1502.
Ackerman, Spencer. 2011. “Libyan Rebels Are Flying Their Own Minidrone.” Wired, August 23. https://www.wired.com/2011/08/libyan-rebels-are-flying-their-own-mini-drone/.
Arnaudo, Dan. 2017. “Computational Propaganda in Brazil: Social Bots During Elections.” Computational Propaganda Research Project, Working Paper No. 2017.8. http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-Brazil-1.pdf.
Arreguín-Toft, Ivan. 2001. “How the Weak Win Wars: A Theory of Asymmetric Conflict.” International Security 26, no. 1: 93–128. https://doi.org/10.1162/016228801753212868.
Awan, Imran. 2017. “Cyber-Extremism: ISIS and the Power of Social Media.” Society 54, no. 2: 138–149. https://doi.org/10.1007/s12115-017-0114-0.
Ball, Ryan Jokl. 2017. The Proliferation of Unmanned Aerial Vehicles: Terrorist Use, Capability, and Strategic Implications. Livermore, CA: Lawrence Livermore National Laboratory.
Barsade, Itai and Michael C. Horowitz. 2017. “Militant Groups Have Drones. Now What?” Bulletin of the Atomic Scientists, September 7. https://thebulletin.org/2017/09/militant-groups-have-drones-now-what/.
Bessi, Alessandro and Emilio Ferrara. 2016. “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion.” First Monday 21, no. 11: 1–14. https://doi.org/10.5210/fm.v21i11.7090.
Bharadwaj, Raghav. 2019. “Artificial Intelligence Applications in Additive Manufacturing (3D Printing).” Emerj, February 12. https://emerj.com/ai-sector-overviews/artificial-intelligence-applications-additive-manufacturing-3d-printing/.
Blackman, Josh. 2014. “The 1st Amendment, 2nd Amendment, and 3D Printed Guns.” Tennessee Law Review 81: 479–538.
Bloom, Mia. 2017. “Constructing Expertise: Terrorist Recruitment and ‘Talent Spotting’ in the PIRA, Al Qaeda, and ISIS.” Studies in Conflict & Terrorism 40, no. 7: 603–623. https://doi.org/10.1080/1057610x.2016.1237219.
Bradshaw, Tim. 2018. “Self-driving Cars Raise Fears Over ‘Weaponisation’.” Financial Times, January 14. https://www.ft.com/content/a8dbd4e0-f807-11e7-88f7-5465a6ce1a00.
Brinkman, Richard L. and June E. Brinkman. 1997. “Cultural Lag: Conception and Theory.” International Journal of Social Economics 24, no. 6: 609–627. https://doi.org/10.1108/03068299710179026.
Bunn, Matthew, William H. Tobey, Martin B. Malin, and Nickolas Roth. 2016. Preventing Nuclear Terrorism: Continuous Improvement or Dangerous Decline? Cambridge, MA: Project on Managing the Atom, Belfer Center for Science and International Affairs, Harvard Kennedy School.
Conway, Maura. 2012. “From al-Zarqawi to al-Awlaki: The Emergence of the Internet as a New Form of Violent Radical Milieu.” CTX: Combating Terrorism Exchange 2, no. 4: 12–22.
Curtis, Sophie. 2015. “Self-driving Cars Can be Hacked Using a Laser Pointer.” The Telegraph, September 7. https://www.telegraph.co.uk/technology/news/11850373/Self-driving-cars-can-be-hacked-using-a-laser-pointer.html.
Dean, Geoff, Peter Bell, and Jack Newman. 2012. “The Dark Side of Social Media: Review of Online Terrorism.” Pakistan Journal of Criminology 4, no. 3: 103–122.
Ex Machina. 2014. Directed by Alex Garland. Universal City, CA: Universal Studios.
Ferrara, Emilio, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. “The Rise of Social Bots.” Communications of the ACM 59, no. 7: 96–104. https://doi.org/10.1145/2818717.
Finisterre, Ken, and Robi Sen. 2016. “The Unrealized Threat of Weaponized Drones.” Morning Consult, October 24. https://morningconsult.com/opinions/unrealized-threat-weaponized-drones/.
Flood, Rebecca. 2016. “ISIS using GOOGLE MAPS to Plot Airport and Plane Terror Attacks, Report Warns.” The Express, September 27. https://www.express.co.uk/news/world/715011/Isis-airplane-attack-terror-bomb-google-map-report-airport.
Friese, Larry, N. R. Jenzen-Jones, and Michael Smallwood. 2016. “Emerging Unmanned Threats: The Use of Commercially Available UAVs by Armed Non-state Actors.” Armament Research Services. http://armamentresearch.com/wp-content/uploads/2016/02/ARES-Special-Report-No.-2-Emerging-Unmanned-Threats.pdf.
Gadam, Suhasini. 2018. “Artificial Intelligence and Autonomous Vehicles.” Medium. https://medium.com/datadriveninvestor/artificial-intelligence-and-autonomous-vehicles-ae877feb6cd2.
Gentsch, Peter. 2018. AI in Marketing, Sales and Service: How Marketers Without a Data Science Degree Can Use AI, Big Data and Bots. New York, NY: Springer.
Goodin, Dan. 2007. “Google Maps Aids Terrorists, NY Lawmaker Warns.” The Register, June 11. https://www.theregister.co.uk/2007/06/11/google_maps_aids_terrorists/.
Grigorescu, Sorin Mihai, Markus Glaab, and André Roßbach. 2018. “From Logistics Regression to Self-driving Cars: Chances and Challenges for Machine Learning in Highly Automated Driving.” Embedded Computing, July 30. http://www.embedded-computing.com/automotive/from-logistics-regression-to-self-driving-cars-chances-and-challenges-for-machine-learning-in-highly-automated-driving.
Hammes, T. X. 2019. “Technology Converges; Non-state Actors Benefit.” Hoover Institution, February 25. https://www.hoover.org/research/technology-converges-non-state-actors-benefit.
Harding, Thomas. 2007. “Terrorists Use Google Maps to Hit UK Troops.” Telegraph News, January 13. https://www.telegraph.co.uk/news/worldnews/1539401/Terrorists-use-Google-maps-to-hit-UK-troops.html.
Heartfield, Ryan, George Loukas, Sanja Budimir, Anatolij Bezemskij, Johnny R. J. Fontaine, Avgoustinos Filippoupolitis, and Etienne Roesch. 2018. “A Taxonomy of Cyber-physical Threats and Impact in the Smart Home.” Computers & Security 78: 398–428. https://doi.org/10.1016/j.cose.2018.07.011.
Hern, Alex. 2017. “Elon Musk Says AI Could Lead to Third World War.” The Guardian, September 4. https://www.theguardian.com/technology/2017/sep/04/elon-musk-ai-third-world-war-vladimir-putin.
Horowitz, Michael C. 2010. “Nonstate Actors and the Diffusion of Innovations: The Case of Suicide Terrorism.” International Organization 64, no. 1: 33–64. https://doi.org/10.1017/s0020818309990233.
Huang, Ming-Hui, and Roland T. Rust. 2018. “Artificial Intelligence in Service.” Journal of Service Research 21, no. 2: 155–172. https://doi.org/10.1177/1094670517752459.
Huey, Laura. 2015. “This Is Not Your Mother’s Terrorism: Social Media, Online Radicalization and the Practice of Political Jamming.” Journal of Terrorism Research 6, no. 2: 1–16. https://doi.org/10.15664/jtr.1159.
Hwang, Tim, Ian Pearce, and Max Nanis. 2012. “Socialbots: Voices from the Fronts.” Interactions 19, no. 2: 38–45. https://doi.org/10.1145/2090150.2090161.
Ionita, Silviu. 2017. “Autonomous Vehicles: From Paradigms to Technology.” IOP Conference Series: Materials Science and Engineering 252: 1–8.
I, Robot. 2004. Directed by Alex Proyas. Los Angeles, CA: 20th Century Fox.
Klausen, Jytte. 2015. “Tweeting the Jihad: Social Media Networks of Western Foreign Fighters in Syria and Iraq.” Studies in Conflict & Terrorism 38, no. 1: 1–22. https://doi.org/10.1080/1057610x.2014.974948.
Kredo, Adam. 2018. “Al Qaeda Using Google Maps to Plan Jihadist Attacks.” Washington Free Beacon, April 20. https://freebeacon.com/national-security/al-qaeda-using-google-maps-plan-jihadist-attacks/.
Lachow, Irving. 2017. “The Upside and Downside of Swarming Drones.” Bulletin of the Atomic Scientists 73, no. 2: 96–101. https://doi.org/10.1080/00963402.2017.1290879.
Lee, Wenke, Cliff Wang, and David Dagon. 2008. Botnet Detection: Countering the Largest Security Threat. New York, NY: Springer.
LePain, Andrea. 2018. “How AI and 3D Printing are Revolutionizing Materials Design.” R&D World, December 11. https://www.rdworldonline.com/how-ai-and-3d-printing-are-revolutionizing-materials-design/.
Lin, Herbert and Jaclyn Kerr. 2019. “On Cyber-enabled Information/Influence Warfare and Manipulation.” In Oxford Handbook of Cybersecurity (forthcoming).
Lin, Pierre Pascal Anatole, Karl Willis, Eric Jamesson Wilhelm, and Arian Aziz Aghababaie. 2015. “Intelligent 3D Printing Through Optimization of 3D Print Parameters.” U.S. Patent 14/711714, November 19.
Little, Rory K. 2014. “Guns Don’t Kill People, 3D Printing Does? Why the Technology Is a Distraction from Effective Gun Controls.” Hastings Law Journal 65, no. 6: 1505–1513.
Mack, Andrew. 1975. “Why Big Nations Lose Small Wars: The Politics of Asymmetric Conflict.” World Politics 27, no. 2: 175–200. https://doi.org/10.2307/2009880.
Mair, David. 2017. “#Westgate: A Case Study–How al-Shabaab Used Twitter During an Ongoing Attack.” Studies in Conflict & Terrorism 40, no. 1: 24–43. https://doi.org/10.1080/1057610x.2016.1157404.
Mandelbaum, Jay, James Ralston, Ivars Gutmanis, Andrew Hull, and Christopher Martin. 2005. “Terrorist Use of Improvised or Commercially Available Precision-guided UAVs at Stand-off Ranges: An Approach for Formulating Mitigation Considerations.” Institute for Defense Analyses. http://www.dtic.mil/dtic/tr/fulltext/u2/a460419.pdf.
Markon, Jerry. 2016. “As Concern Grows Over Terrorists on Social Media, Senate Bill Calls for National Strategy.” The Washington Post, February 10. https://www.washingtonpost.com/news/federal-eye/wp/2016/02/10/as-concern-grows-over-terrorists-on-social-media-senate-bill-calls-for-national-strategy/?utm_term=.43bb08a586cc.
Marr, Bernard. 2018. “How Artificial Intelligence Is Making Chatbots Better for Businesses.” Forbes, May 18. https://www.forbes.com/sites/bernardmarr/2018/05/18/how-artificial-intelligence-is-making-chatbots-better-for-businesses/#396362d94e72.
McCaney, Kevin. 2018. “AI Fueling Next Wave of 3D Printing and Robotics.” Government CIO, August 17. https://governmentciomedia.com/ai-3d-printing-robotics.
McClure, Paul K. 2018. “‘You’re Fired,’ Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment.” Social Science Computer Review 36, no. 2: 139–156. https://doi.org/10.1177/0894439317698637.
McKelvey, Fenwick and Elizabeth Dubois. 2017. “Computational Propaganda in Canada: The Use of Political Bots.” Computational Propaganda Research Project, Working Paper No. 2017.6. http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-Canada.pdf.
Melki, Jad and May Jabado. 2016. “Mediated Public Diplomacy of the Islamic State in Iraq and Syria: The Synergistic Use of Terrorism, Social Media and Branding.” Media and Communication 4, no. 2: 92–103. https://doi.org/10.17645/mac.v4i2.432.
Miller, Maggie. 2018. “Consumer Groups Say Senate’s Revamped Self-driving Car Bill Fails to Resolve Cyber, Safety Concerns.” Inside Cyber Security, December 11. https://insidecybersecurity.com/daily-news/consumer-groups-say-senates-revamped-self-driving-car-bill-fails-resolve-cyber-safety.
Ogburn, William F. 1922. Social Change with Respect to Culture and Original Nature. New York, NY: B. W. Huebsch, Inc.
O’Flaherty, Douglas. 2018. “AI in Action: Autonomous Vehicles.” IBM IT Infrastructure Blog. https://www.ibm.com/blogs/systems/ai-in-action-autonomous-vehicles/.
O’Sullivan, Shane, Nathalie Nevejans, Colin Allen, Andrew Blyth, Simon Leonard, Ugo Pagallo, Katharina Holzinger, Andreas Holzinger, Mohammed Imran Sajid, and Hutan Ashrafian. 2019. “Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence (AI) and Autonomous Robotic Surgery.” The International Journal of Medical Robotics and Computer Assisted Surgery 15, no. 1: e1968. https://doi.org/10.1002/rcs.1968.
Pepito, Joseph Andrew and Rozzano Locsin. 2019. “Can Nurses Remain Relevant in a Technologically Advanced Future?” International Journal of Nursing Sciences 6, no. 1: 106–110. https://doi.org/10.1016/j.ijnss.2018.09.013.
Rao, Qing and Jelena Frtunikj. 2018. “Deep Learning for Self-driving Cars: Chances and Challenges.” SEFAIS ’18: Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems: 35–38. https://doi.org/10.1145/3194085.3194087.
Rassler, Don. 2016. “Remotely Piloted Innovation: Terrorism, Drones, and Supportive Technology.” USMA Combating Terrorism Center. http://www.dtic.mil/dtic/tr/fulltext/u2/1019773.pdf.
Rath, Johannes, Monique Ischi, and Dana Perkins. 2014. “Evolution of Different Dual-use Concepts in International and National Law and Its Implications on Research Ethics and Governance.” Science & Engineering Ethics 20, no. 3: 769–790. https://doi.org/10.1007/s11948-014-9519-y.
Ravindranath, Mohana. 2017. “Lawmakers Want Self-driving Cars to Thrive But Still Fear Hacks.” NextGov, February 15. https://www.nextgov.com/emerging-tech/2017/02/lawmakers-want-self-driving-cars-thrive-still-fear-cyber-hacks/135437/.
Ribeiro, John. 2008. “Google Earth Used by Terrorists in India Attacks.” PC World, November 30. https://www.pcworld.com/article/154684/article.html.
Rosner, Robert and Lynn Eden. 2019. “Rebuilding an Aging Nuclear Weapons Complex: What Should the United States Do, and Not Do? An Overview.” Bulletin of the Atomic Scientists 75, no. 1: 3–8. https://doi.org/10.1080/00963402.2019.1555977.
Rudner, Martin. 2016. “‘Electronic Jihad’: The Internet as al-Qaeda’s Catalyst for Global Terror.” Studies in Conflict & Terrorism 40, no. 1: 10–23. https://doi.org/10.1080/1057610x.2016.1157403.
Rychnovská, Dagmar. 2016. “Governing Dual-use Knowledge: From the Politics of Responsible Science to the Ethicalization of Security.” Security Dialogue 47, no. 4: 310–328. https://doi.org/10.1177/0967010616658848.
Santini, Rose Marie, Larissa Agostini, Carlos Eduardo Barros, Danilo Carvalho, Rafael Centeno de Centeno, Debora G. Salles, Kenzo Seto, Camyla Terra, and Giulia Tucci. 2018. “Software Power as Soft Power: A Literature Review on Computational Propaganda Effects in Public Opinion and Political Process.” Partecipazione e Conflitto 11, no. 2: 332–360. https://doi.org/10.1285/i20356609v11i2p332.
Schulzke, Marcus. 2019. “Drone Proliferation and the Challenge of Regulating Dual-Use Technologies.” International Studies Review 21, no. 3: 497–517. https://doi.org/10.1093/isr/viy047.
Schwartz, Stephen I. 1997. “Maintaining Our Nuclear Arsenal is Expensive.” Brookings Institute, March 26. https://www.brookings.edu/opinions/maintaining-our-nuclear-arsenal-is-expensive/.
Sekhose, Marcia. 2016. “Australian Authorities Fear Self-driving Cars Could be Used for Terrorism.” BGR Media, November 9. https://www.bgr.in/news/australian-authorities-fear-self-driving-cars-could-be-used-for-terrorism/.
Shawar, Bayan Abu and Eric Steven Atwell. 2005. “Using Corpora in Machine-learning Chatbot Systems.” International Journal of Corpus Linguistics 10, no. 4: 489–516. https://doi.org/10.1075/ijcl.10.4.06sha.
Sparrow, Robert. 2016. “Robots in Aged Care: A Dystopian Future?” AI & Society 31, no. 4: 445–454. https://doi.org/10.1007/s00146-015-0625-4.
Subrahmanian, V. S., Amos Azaria, Skylar Durst, Vadim Kagan, Aram Galstyan, Kristina Lerman, Linhong Zhu, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer. 2016. “The DARPA Twitter Bot Challenge.” Computer 49, no. 6: 38–46. https://doi.org/10.1109/mc.2016.183.
Szollosy, Michael. 2017. “Freud, Frankenstein and Our Fear of Robots: Projection in Our Cultural Perception of Technology.” AI & Society 32, no. 3: 433–439. https://doi.org/10.1007/s00146-016-0654-7.
The Terminator. 1984. Directed by James Cameron. Los Angeles, CA: Hemdale Film Corporation.
194
O. SWED AND K. CHÁVEZ
Tschider, Charlotte A. 2018. “Regulating the Internet of Things: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age.” Denver Law Review 96: 87. https://doi.org/10.2139/ssrn.3129557. Wallace, Dave and Shane Reeves. 2013. “Non-state Armed Groups and Technology: The Humanitarian Tragedy at our Doorstep?” University of Miami National Security & Armed Conflict Law Review 3, no. 1: 26–45. Walther, Gerald. 2015. “Printing Insecurity? The Security Implications of 3DPrinting of Weapons.” Science & Engineering Ethics 21, no. 6: 1435–1445. https://doi.org/10.1007/s11948-014-9617-x. Waters, Nick. 2018. “Google Maps Is a Better Spy than James Bond.” Foreign Policy, September 25. https://foreignpolicy.com/2018/09/25/goo gle-maps-is-a-better-spy-than-james-bond/.
CHAPTER 9
Comparison of National Artificial Intelligence (AI) Strategic Policies and Priorities

Sdenka Zobeida Salas-Pilco
Introduction

Artificial intelligence (AI) research started in the 1950s. Since then, there have been many 'AI springs' and 'AI winters,' in which public, academic, and government interest in AI rose and later waned. Over the last decade, however, AI research has grown steadily and achieved important breakthroughs. AI is currently one of the most transformative forces of the twenty-first century and is profoundly changing the world as we know it. It has multiple benefits, but it also brings many challenges and disruptions to governments, businesses, and society. Thus, many governments have proposed national AI strategic policies to lead the development of AI in their countries. These policies set out guidelines, regulations, and priorities according to each country's context.

AI has been defined in various ways according to the different stages of its evolution. For example, John McCarthy (2007), who coined the term artificial intelligence, defined AI as "the science and engineering of
making intelligent machines"; however, this study uses the Oxford Dictionary's definition: "AI is the theory and development of computer systems able to perform tasks normally requiring human intelligence." AI policy, in turn, is defined as the decisions that governments make regarding AI-related topics. Owing to its rapid development over the past decade, AI has come to play such a large role in society that governments have become aware of its importance at the economic, technological, social, and political levels. AI governance is therefore now part of the natural development of policy, creating new regulations and standards aimed at benefiting society and avoiding AI's negative impacts. In recent years, many governments around the world have released AI policies and started developing national AI strategies. Accordingly, the following research questions are addressed in the present work: What are the similarities and differences between various national AI strategies? What are the priorities of AI strategies? How much do governments spend on AI strategies?
Methodology

This study is exploratory and intends to understand the AI strategies developed by several countries. Although numerous countries have issued some regulations, the present study selected six countries: Canada, China, France, Japan, the Republic of Korea, and the United States. This selection was made through purposeful sampling, considering that the selected countries were the first to develop AI policies and to show their commitment to including AI in their strategic plans and budgets.
National AI Policies and Strategies

In the following pages, the countries mentioned in the previous section are presented in alphabetical order. Their AI policies, budgets, priorities, achievements, and challenges are discussed in order to understand their present situation in this area of development. These are also summarized in Table 9.2 to provide a clearer analysis and comparison of their AI strategies.
Canada

The Government of Canada released its Pan-Canadian Artificial Intelligence Strategy in March 2017, detailing a five-year strategic plan (2017–2018 to 2021–2022) with four major goals: (a) increase the number of outstanding AI researchers and graduates in Canada; (b) establish three major interconnected centers for AI in Edmonton, Montreal, and Toronto; (c) develop global thought leadership on the economic, ethical, and legal implications of AI advancements; and (d) support a national AI research community. These goals are intended to position Canada as a world-leading country in terms of AI and innovation, as mentioned by the Canadian Institute for Advanced Research (CIFAR 2017).

Priorities
The Canadian strategic policies mainly prioritize research and talent development in order to position Canada as a world-leading destination for companies seeking to invest in AI and innovation.

Budget
In its 2017 budget, the Government of Canada planned to invest CAD 125 million (USD 95 million) over five years, from 2017–2018 until 2021–2022. The Canadian Institute for Advanced Research (CIFAR), which receives CAD 35 million (USD 23 million) of this funding, is responsible for administering it in collaboration with three other AI centers, which received the following amounts: the Montreal Institute for Learning Algorithms (MILA) in Montreal received CAD 40 million (USD 30 million), the Vector Institute in Toronto-Waterloo received CAD 40 million (USD 30 million), and the Alberta Machine Intelligence Institute (AMII) in Edmonton received CAD 25 million (USD 19 million) (Government of Canada 2017, 103–104; Villani et al. 2018).

Achievements
From the initial plan, the following developments have occurred: (a) the AI Futures Policy Lab, a partnership between CIFAR and the Brookfield Institute for Innovation and Entrepreneurship, was set up; (b) 29 researchers were announced as Canada CIFAR Artificial Intelligence (CCAI) chairs and are to become the backbone of AI research; (c) the Royal Bank of Canada (RBC) Foundation will donate CAD 1 million over the next three years to CIFAR to support ethical AI; and
(d) in December 2018, the University of Montreal published its initiative, the Montréal Declaration for Responsible Development of Artificial Intelligence, which Canadian citizens reviewed and provided feedback on (Brookfield Institute 2018; PressReleasePoint 2018a, b; Université de Montréal 2018).

Challenges
Although Canada is leading in supporting R&D in AI centers and AI ethics in universities, it appears to lack partnerships with the AI industry.

China

In July 2017, China's State Council released its New Generation Artificial Intelligence Development Plan (AIDP) (新一代人工智能发展规划), a very detailed plan that describes various chronological stages and corresponding budgets. The six main AI tasks are to (a) develop an open and cooperative AI technology system, (b) build an efficient AI economy, (c) cultivate a safe AI society, (d) strengthen AI in the field of military-civilian integration, (e) build an efficient AI infrastructure system, and (f) lay out major AI science and technology projects (P.R. China. Information Office of the State Council 2017; English translation [Foundation for Law & International Affairs 2017]).

Also, in December 2017, the Ministry of Industry and Information Technology (MIIT) released the Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry (2018–2020) (促进新一代人工智能产业发展三年行动计划). This plan intends to (a) foster the development of smart products; (b) accelerate R&D and make breakthroughs in core foundations; (c) deepen the development of AI manufacturing; (d) build AI support systems; and (e) safeguard the implementation, support, innovation, training, and optimization of the AI development environment (P.R. China. MIIT 2017; English translation [New America 2018a]).

Priorities
According to the AI strategies highlighted in the documents, China's top priorities are to strengthen the economic and industrial development of AI (smart economy and industrial clusters) and support R&D for building a knowledge and talent system.
Budget
No explicit amount of investment in AI has been disclosed by the Chinese government, but there is official data on China's R&D investment as a percentage of the gross domestic product (GDP). Thus, in 2016, China's R&D investment was RMB 1,567,700 million (USD 234,000 million), accounting for 2.11% of the country's GDP. In 2017, it was RMB 1,760,600 million (USD 263,000 million), corresponding to 2.13% of the GDP; in 2018, it was RMB 1,965,700 million (USD 293,600 million), accounting for 2.18% of the GDP (National Bureau of Statistics of China 2019, Fig. 20; World Bank 2019).

However, through the New Generation Artificial Intelligence Development Plan, China intends to increase the worth of its AI industry in three stages. First, by 2020, the country hopes to catch up with global AI development and become competitive, developing an AI industry worth more than RMB 150,000 million (USD 22,000 million). Then, by 2025, AI will be the main driving force behind Chinese industry and economy, and the AI industry will be worth more than RMB 400,000 million (USD 60,000 million). Finally, by 2030, China aims to become the world's AI leader, and its AI industry will be worth more than RMB 1,000,000 million (USD 150,000 million) (P.R. China. Information Office of the State Council 2017).

Achievements
Allen (2019) highlighted China's AI achievements; for example, globally, China is number one in (a) total number of AI research papers, (b) highly cited AI papers, (c) AI patents, and (d) AI venture capital investment. It is number two in (e) the number of AI companies and (f) the AI talent pool, based on the study presented by the China Institute for Science and Technology Policy at Tsinghua University (2018). Moreover, the American National Science Foundation (U.S. NSF 2018) expected "China to pass the United States in R&D investments by the end of 2018."

China is also aware of the importance of the social impact of AI; thus, in October 2018, a lecture at the 13th National People's Congress Standing Committee Speeches on Special Topics, titled Innovative Development and Social Impact of Artificial Intelligence, highlighted the following key topics for AI development in China: (a) establish pragmatic development, (b) strengthen basic research, (c) build an independent and controllable ecosystem, (d) establish an efficient innovation system, (e) accelerate the cultivation of talent, (f) promote shared global governance, (g) formulate laws and regulations, and (h) strengthen AI social research (Tan 2018; English translation [New America 2018b]).
Challenges
China faces some challenges in the field of software frameworks and platforms, as well as in the field of semiconductors (Allen 2019).

France

In March 2018, President Emmanuel Macron presented France's AI strategic plan, AI for Humanity: French Strategy for Artificial Intelligence (Villani Mission on AI 2018), which was based on Villani et al.'s (2018) report, For a Meaningful Artificial Intelligence: Towards a French and European Strategy. This AI strategic plan has four major components: (a) reinforcing the AI ecosystem to attract the best talent; (b) developing an open data policy in sectors where France already has the potential for excellence, such as healthcare; (c) creating a regulatory and financial framework favoring the emergence of "AI champions," thus supporting AI research projects and start-ups; and (d) implementing AI regulations and ethics to ensure that the best standards of acceptability are in place for citizens.

Also, in November 2018, the Ministry of Higher Education, Research and Innovation (France. MESRI 2018) released the National Strategy of Research on AI, which showcases the country's massive efforts in research, training, and innovation to consolidate France's world-class expertise and attract the best talent. This policy supports projects in health, transport, environment, and security to strengthen France's AI ecosystem.

Priorities
The French government's main AI priority is to provide an AI environment through R&D and the participation of private investment, as well as through AI regulations and ethics.

Budget
In March 2018, the French president announced the allocation of €1,500 million (USD 1,850 million) over five years (2018–2022) to turn France into a world leader in AI research and innovation (France. MESRI 2018). The funding was disclosed as follows: €665 million (USD 756 million) from the French government and €358 million (USD 407 million) from industrial partners through matching funds, yielding a subtotal of €1,023 million (USD 1,163 million) (see Table 9.1). Another €500 million
Table 9.1  French government's resource allocation for AI development

  Category                                    Total (million €)      French government (million €)
  Creation of 3IA networks                    300 (USD 342 M)        200 (USD 228 M)
  Program to attract talents                  80 (USD 91 M)          70 (USD 80 M)
  Strategic cooperation with Germany          115 (USD 130 M)        115 (USD 130 M)
  Investment in a supercomputer               198 (USD 224 M)        115 (USD 130 M)
  Strengthening research partnerships         130 (USD 148 M)        65 (USD 74 M)
  Launch of "AI challenges" for start-ups     200 (USD 228 M)        100 (USD 114 M)
  Total                                       1,023 (USD 1,163 M)    665 (USD 756 M)

Source: Reproduced by permission and translated from the National Strategy of Research on AI (France. MESRI 2018, 15)
(USD 575 million) was attracted from companies in the private sector, bringing the total funding for AI development to €1,500 million.

Achievements
In November 2018, the Direction Interministérielle de la Transformation Publique (France. DITP 2018) announced the creation of four interdisciplinary AI Institutes (3IA) in Paris (PRAIRIE), Toulouse (ANITI), Grenoble (MIAI@Grenoble-Alpes), and Nice (3IA Côte d'Azur), where academia and industry will work together on AI projects under the coordination of the National Institute for Research in Computer Science (INRIA, by its French acronym). Regarding international partnerships, in June 2018 a French-Canadian cooperation was organized to create an international study group on inclusive and ethical AI (Gouvernement de la République Française 2018). In March 2019, a committee was set up for research cooperation between France and Singapore in the areas of science and technology, including AI development (Seow 2019).

Challenges
Villani mentioned that the limited budget allocated by France reduces its competitiveness when compared with the regional budgets of the US or China. Thus, "France may not become the AI leader, but it can become an AI leader." The biggest challenge is not related to the budget or the
international competition, but rather "changing the cultural mindset" or cultural adaptation (Bock 2019).

Japan

In March 2017, Japan released the Artificial Intelligence Technology Strategy, which was formulated by a council (Japan. Artificial Intelligence Technology Strategy Council 2017). This document outlines the country's industrialization roadmap in three phases. The first phase is the utilization and application of data-driven AI developed in various domains (until approximately 2020). The second phase is the public use of AI and data developed across various domains (until approximately 2025~2030). The third phase is the creation of ecosystems built by connecting multiple domains (after approximately 2025~2030) (see Fig. 9.1).
Fig. 9.1 Japan’s AI development phases (Japan. Artificial Intelligence Technology Strategy Council 2017, 5) (Source Reproduced by permission of the New Energy and Industrial Technology Development Organization [NEDO]. Artificial Intelligence Technology Strategy)
Japan's AI technology strategy has the following goals: (a) to promote R&D projects based on industry-academia-government collaboration; (b) to foster human resources; (c) to provide environmental maintenance for data and tools owned by industry, academia, and the government; (d) to offer start-up support; and (e) to promote an understanding of the development of AI technology among providers and users. It also has four priority areas: (1) productivity; (2) health, medical care, and welfare; (3) mobility (transportation); and (4) information security (a cross-sectional area). These areas were selected based on the urgency of the social issues to be solved, their contributions to the economy, and expectations for AI's contribution (Japan. Artificial Intelligence Technology Strategy Council 2017).

Priority
The main AI priority of the Japanese government is R&D aimed at AI industrialization, which is to become the pillar of the "productivity revolution" (Prime Minister of Japan and His Cabinet 2017).

Budget
According to the Japanese government, the AI budget in Fiscal Year 2016 (FY2016) was JPY 42,030 million (USD 378 million). In the FY2017 budget plan, the total AI budget was JPY 57,550 million (USD 514 million). In the FY2018 budget plan, this amount was increased to JPY 77,040 million (USD 703 million). For the FY2019 budget plan, the budget was expected to reach about JPY 120,000 million (USD 1,072 million), meaning the AI budget will have increased by almost 1.5 times (Japan. Cabinet Office 2018a, 9; Sankei News 2019).

Achievements
The Artificial Intelligence Technology Strategy Council established in 2016 is coordinating with three Japanese research centers to promote R&D on AI: (1) the Center for Information and Neural Networks (CiNet) and the Universal Communication Research Institute (UCRI) at the National Institute of Information and Communications Technology (NICT), (2) the RIKEN Center for Advanced Intelligence Project (AIP), and (3) the Artificial Intelligence Research Center (AIRC) at the National Institute of Advanced Industrial Science and Technology (AIST) (Japan. Artificial Intelligence Technology Strategy Council 2017, 3).
Japan took the initiative to lead the international discussion about AI R&D. First, it suggested that the G7 countries establish international AI rules, providing the Draft AI R&D Guidelines for International Discussions and the Draft AI Utilization Principles (Japan. MIC 2017, 2018). Furthermore, it led a global discussion on the impact of AI on human society through the Report on Artificial Intelligence and Human Society (Japan. Cabinet Office 2017) and the proposal named Human-centered AI Social Principles (Japan. Cabinet Office 2018b), with which to discuss AI development and its implications for society and the global community. Thus, with support from the Japanese government, an advisory committee meeting regarding the ethics of AI was held at the UNESCO headquarters in Paris (Permanent Delegation of Japan to UNESCO 2019; UNESCO 2019).

Challenges
Although Japan is strong in terms of hardware development and data-driven research, it is still relatively weak in software development and algorithms. Therefore, Japan is trying to catch up to the US and China and avoid being left behind.

Republic of Korea

In December 2016, the Korean Ministry of Science, ICT, and Future Planning (Korea. MSIP 2016) released a master plan titled Mid- to Long-term Master Plan in Preparation for the Intelligent Information Society: Managing the Fourth Industrial Revolution. This document outlines the Korean government's vision of "realizing a Human-Centered Intelligent Information Society." Its main responsibilities are (a) fostering a healthy ecosystem of competition over innovative, intelligent IT and services; (b) enhancing creativity, the understanding of intelligent IT, and other core capabilities necessary to lead society into the future; (c) supporting the development of technologies and human resources; and (d) developing infrastructure to support entrepreneurial endeavors and private investment by applying intelligent IT to public services.

Also, in May 2018, the Artificial Intelligence (AI) R&D Strategy for realizing I-Korea 4.0 (Korea. MSIT 2018a) was released. This strategy highlighted three goals to be achieved over the following five years (2018–2022): (a) provide world-class AI technology, (b) nurture the best AI talents, and (c) establish an open and innovative AI infrastructure (AI hub).
Priorities
The initial priority for the Korean government was industry, but the emphasis was later placed on universities, thus cultivating AI talent.

Budget
According to the Ministry of Science and ICT (Korea. MSIT 2018a, 12), the Korean government's investment in AI R&D in 2016 was KRW 1,387,000 million (USD 1,221 million). This investment was mostly focused on industry and included two AI projects: ExoBrain (designed to compete with IBM's Watson computer) and a computer vision project called DeepView (Iglauer 2016; Zhang 2016). In 2017, another KRW 1,767,000 million (USD 1,562 million) was invested. Finally, according to the most recent announcement on this matter, Korea will invest KRW 2,200,000 million (USD 1,950 million) in AI over the next five years (2018–2022). It was also stated that this time it will secure human resources from six universities (Korea. MSIT 2018b; Kang 2018).

Achievements
Pangyo Techno Valley is an innovation hub focused on public-private research in partnership with worldwide start-ups. Furthermore, Korea is establishing a public-private research center in partnership with Samsung, LG Electronics, Hyundai Motor Company, telecom giant KT, SK Telecom, and Internet portal Naver (Iglauer 2016). Moreover, the Korean government has issued several documents showing its detailed vision and strategies regarding its AI R&D strategy and preparations for the intelligent information society.

Challenges
Korea does not want to be left behind in AI research. Thus, its government intends to include universities in its AI talent development strategy, an element missing from its prior AI investments, which primarily supported industry. Also, there is still a lack of conversation about ethical issues within the AI industry.

The United States

In October 2016, during the Obama administration, the US released the National Artificial Intelligence Research and Development Strategic Plan for publicly funded R&D in AI, prepared by the National Science and Technology Council (U.S. NSTC 2016a). The country also released the document Preparing for the Future of Artificial Intelligence (U.S.
NSTC 2016b) and the report Artificial Intelligence, Automation, and the Economy (U.S. Executive Office of the President 2016). Then, in May 2018, the Trump administration released a Summary of the 2018 White House Summit on Artificial Intelligence for American Industry (White House 2018). Afterward, in February 2019, the president launched the US's initiative on AI with an Executive Order on Maintaining American Leadership in Artificial Intelligence (White House 2019). This initiative includes the following objectives: (a) promote sustained investment in AI R&D, (b) enhance access to high-quality and fully traceable federal data, (c) reduce barriers to the use of AI technologies, (d) ensure that technical standards minimize vulnerability, (e) train the next generation of American AI researchers and users, and (f) protect the US's advantage in AI and related critical technologies. Finally, in February 2019, the U.S. Department of Defense (U.S. DoD 2019) published its AI Strategy.

Priorities
Initially, President Trump did not make AI one of the government's priorities, instead giving preference to traditional industries. However, since other countries are demonstrating serious commitments to AI development, the president has shown more interest in AI research, as mentioned in the FY2019 budget:

…accelerate the development and deployment of advanced IT to support American military superiority, security, economic prosperity, energy dominance, health, innovation and early-stage research, modernization of the IT research infrastructure, and development of a strong cyber-enabled workforce. (U.S. NITRD 2018, ii)
Budget
In 2015, the US invested approximately USD 1,100 million in R&D for AI-related technologies, and USD 1,200 million in 2016 (U.S. NSTC 2016b, 25). In 2017, the Networking and Information Technology Research and Development Program (U.S. NITRD 2016) mentioned that its total R&D amount included not only AI research but also other non-AI research projects. Thus, the FY2017 budget for R&D was USD 4,540 million. The next year, during the Trump administration, the budget for FY2018 was reduced to USD 4,460 million. Finally, the FY2019 budget was increased to USD 5,280 million (U.S. NITRD 2016, 2017; U.S.
NITRD 2018). It must be highlighted that, since 2017, the amount of the R&D budget allocated to AI-related research has not been specified. However, according to the FY2019 budget, the largest percentage goes to the Department of Defense (U.S. NITRD 2018, 5).

Achievements
Under the U.S. Defense Advanced Research Projects Agency (U.S. DARPA 2018), there are already 20 active programs advancing the state of the art in AI. There are also more than 60 programs exploring uses of AI, into which at least USD 2,000 million has been invested. Also, in July 2018, the Department of Defense established a Joint AI Center (JAIC) to work on selected "National Mission Initiatives."

Challenge
The main challenge for the US is to maintain its leadership role in AI. Table 9.2 presents a summary and comparison of the countries' AI policies, strategies, priorities, and budgets.
Framework Analysis

Having introduced the diverse policies of the different countries, these AI strategic policies can now be compared in order to address the research questions. The policies are compared based on four categories that correspond to public policy areas: technological, economic, social, and governmental-geopolitical. The comparison also assesses the main focuses of the different strategic priorities. Each category includes subcategories so that more detailed classifications can be given. The technological category has two subcategories: human resources, and research and development. The economic category has two subcategories: industry and manufacturing, and start-up incubators. The social category has two subcategories: AI social awareness and AI-enabled workforce. The governmental-geopolitical category has three subcategories: ethical-legal norms, military and security, and international partnerships. Table 9.3 shows the comparison of the national AI strategic policies described in the previous section. The categories and subcategories are defined as follows:

Technological. The scientific and technical methods, processes, and systems (including managerial techniques and expertise) related to AI.
Table 9.2  Summary of countries' AI policies, strategies, priorities, and budgets (March 2019 USD exchange rates)

Canada
  Policy: Pan-Canadian AI Strategy (March 2017)
  Strategic goals: (1) increase the number of AI researchers and graduates in Canada; (2) establish interconnected AI research centers in three cities; (3) develop global leadership on the economic, ethical, and legal implications of AI; (4) support a national AI research community
  Priorities: mainly research and talent development
  Budget: 2017–2022 total AI investment of CAD 125 M (USD 95 M): CAD 35 M (USD 23 M) for CIFAR; CAD 40 M (USD 30 M) for MILA in Montreal; CAD 40 M (USD 30 M) for the Vector Institute in Toronto; CAD 25 M (USD 19 M) for AMII in Edmonton
  Average per year: USD 19 M

China
  Policy: New Generation AI Development Plan (July 2017)
  Strategic goals: (1) develop an open and cooperative AI technology system; (2) build an efficient AI economy; (3) cultivate a safe AI society; (4) strengthen AI in the field of military-civilian integration; (5) build an efficient AI infrastructure system; (6) implement major AI science and technology projects. Phases: by 2020, progress in AI competitiveness; by 2025, make AI the main driving force of industry and the economy; by 2030, become an AI world-leading country
  Priorities: support the economic and industrial development of AI and R&D for building a talent system
  Budget: R&D investment: 2016, RMB 1,567,700 M (USD 234,000 M); 2017, RMB 1,760,600 M (USD 263,000 M); 2018, RMB 1,965,700 M (USD 293,600 M). China intends to increase the worth of its AI industry: by 2020, RMB 150,000 M (USD 22,000 M); by 2025, RMB 400,000 M (USD 60,000 M); by 2030, RMB 1,000,000 M (USD 150,000 M)
  Average per year: 2018, USD 294,000 M (*)

France
  Policy: AI for Humanity: French Strategy for AI (March 2018)
  Strategic goals: (1) reinforce the AI ecosystem to attract the best talent; (2) develop an open data policy in sectors such as healthcare; (3) create a regulatory and financial framework for supporting AI research projects and start-ups; (4) introduce AI regulations and ethics to ensure the best standards of acceptability for citizens
  Priorities: provide an AI environment through R&D and private investment, as well as AI regulations and ethics
  Budget: 2018–2022 total AI allocation of €1,500 M (USD 1,850 M): €665 M (USD 756 M) for research; €358 M (USD 407 M) for industrial projects in AI; €500 M (USD 575 M) attracted from private companies
  Average per year: USD 370 M

Japan
  Policy: AI Technology Strategy (March 2017)
  Strategic goals: (1) promote R&D based on industrial, academic, and government collaboration; (2) foster human resources; (3) maintain data owned by industry, academia, and the government; (4) provide start-up support; (5) promote an understanding of AI technology among providers and users. Phases: by 2020, use and apply data-driven AI; by 2025~2030, enable the public use of AI and data; after 2025~2030, build AI ecosystems
  Priorities: R&D toward AI industrialization, becoming the pillar of the "productivity revolution"
  Budget: AI R&D budget: FY2016, JPY 42,030 M (USD 378 M); FY2017, JPY 57,550 M (USD 514 M); FY2018, JPY 77,040 M (USD 703 M); FY2019, JPY 120,000 M (USD 1,072 M)
  Average per year: FY2019, USD 1,072 M

Republic of Korea
  Policy: AI R&D Strategy for realizing I-Korea 4.0 (May 2018)
  Strategic goals: (1) provide world-class AI technology; (2) nurture the best AI talents; (3) establish an open and innovative AI infrastructure (AI hub); (4) enhance the general understanding and other core capabilities necessary to lead society into the future
  Priorities: the initial priority was on industry, but the emphasis has shifted to developing AI talent at universities
  Budget: AI R&D budget: 2016, KRW 1,387,000 M (USD 1,221 M); 2017, KRW 1,767,000 M (USD 1,562 M); 2018, KRW 2,200,000 M (USD 1,950 M)
  Average per year: 2018, USD 1,950 M

The US
  Policy: Executive Order on Maintaining American Leadership in AI (American AI Initiative) (February 2019)
  Strategic goals: (1) promote investment in AI R&D; (2) enhance access to fully traceable federal data; (3) reduce barriers to the use of AI technologies; (4) ensure that technical standards minimize vulnerability; (5) train American AI researchers and users; (6) protect the US's advantage in AI and related critical technologies
  Priorities: priority on research with a special focus on defense and protecting the US's advantage in AI
  Budget: AI R&D only: 2015, USD 1,100 M; 2016, USD 1,200 M. NITRD Program R&D budget (AI and non-AI): FY2016, USD 4,490 M; FY2017, USD 4,540 M; FY2018, USD 4,460 M; FY2019, USD 5,280 M
  Average per year: FY2019, USD 5,280 M (*)

Note: (*) The amount corresponds to total R&D funding in that year; there is no disaggregated data on specific AI R&D funding
Table 9.3  Comparison of the national AI strategic policies and priorities according to general categories and subcategories. Columns: Canada, China, France, Japan, Korea (Rep.), the US. Rows: Technological (human resources; research and development), Economic (industry and manufacturing; startup incubators), Social (AI social awareness; AI-enabled workforce), Governmental and geopolitical (ethical-legal norms; military and security; international partnerships). Note: prioritized subcategories are marked with double points.
A. Human resources. Funding for training, attracting, and retaining domestic or international AI human resources, e.g., creating AI-specific postgraduate programs.
B. Research and development (R&D). Establishing new initiatives, programs, and research centers for AI research and development.

Economic. The process or system concerned with allocating resources for the creation of new AI businesses and AI industrial activities.

A. Industry and manufacturing. Encouraging the private sector to invest in AI initiatives to develop strategic industrial sectors.
B. Startup incubators. Funding start-up incubators, entrepreneurs, and small and medium-sized enterprises (SMEs) around AI clusters and AI hubs.

Social. The public sphere in which AI impacts and challenges society.
A. AI social awareness. Promoting an awareness and understanding of how AI is used in society, its benefits, and its limitations.
B. AI-enabled workforce. Developing the digital skills and lifelong learning of the labor force that employs AI in jobs related to science, engineering, and technology.

Governmental and geopolitical. The use of AI applications in government, military, and international areas, as well as the legal impact of AI.

A. Ethical-legal norms. Setting norms and regulations for the ethical use and design of AI algorithms, systems, and initiatives.
B. Military and security. Using AI development for defense and sovereignty in all branches of the armed forces.
C. International partnerships. Collaborating with other countries to improve and develop AI initiatives and programs.

In order to answer the research question "What are the priorities of AI strategies?", Table 9.3 takes the AI strategies highlighted by the different countries and classifies each into a particular category and subcategory.

Regarding the research question "What are the similarities and differences between various national AI strategies?", some insights can be deduced from the comparative table (see Table 9.2). First, in general, the most mentioned category is the technological category, especially the R&D subcategory, as all the countries are attempting to create AI research centers. This is followed by the human resources subcategory, which is intended to nurture AI talent to secure long-term sustainability. The second-most important category is the economic category, wherein the subcategory of industry and manufacturing is emphasized due to its critical role, followed by the subcategory of supporting startups. The third-most mentioned category is the social category. Its subcategory of AI social awareness was considered by all the policies, while the subcategory of AI-enabled workforce was highlighted by China, the US, and France. The fourth category in terms of importance is the governmental and geopolitical category. Its subcategory of ethical-legal norms shows its importance in all policies. This is followed by the subcategory of international partnerships, which is represented
by initiatives in Canada, France, and Japan. Finally, the military and security subcategory is mentioned in the Chinese AI strategy and is a strategic priority for the US.

Second, it is interesting to note that the US emphasizes the military and security subcategory (see Agarwala and Chaudhary [Chapter 11]; Arif [Chapter 10], this volume), which is prioritized in the budget as well as in the policy. Thus, according to the FY2019 budget, the Department of Defense is given a larger percentage of the budget than any other agency (U.S. NITRD 2018, 5; U.S. DoD 2019).

Third, industry and manufacturing plays a key role in all the countries. However, in the case of Korea, its policies were originally more strongly focused on the AI industry before the main focus shifted toward supporting universities and nurturing AI talent.

Fourth, some countries, such as Japan, France, and Canada, have a strong commitment to building international partnerships related to AI. For example, the Japanese government put forth an initiative proposing that the G7 countries develop international rules and principles related to AI R&D (Japan. MIC 2017). In addition, Japan has led the global discussion by inviting UNESCO and OECD members to discuss AI and its impact on society (Japan. Cabinet Office 2018b). Meanwhile, France has signed bilateral agreements for AI collaboration with Canada and Singapore (Gouvernement de la République Française 2018; Seow 2019).

Fifth, although AI social awareness has been mentioned by all the countries examined in this chapter, attention to the AI-enabled workforce is still limited, especially in relation to retraining the labor force. In the coming years, it will become clearer how these issues impact society.

To answer the third research question, "How much do governments spend on AI strategies?", Table 9.2 summarizes the countries' budgets. Some countries have allocated a certain amount to be spent exclusively on AI development over five years (e.g., Canada and France), while other countries have a fiscal-year budget (e.g., Japan and the US). Meanwhile, China has not provided explicit statistics regarding its national AI budget, though its policy highlights its intentions to promote AI planning. China and the US provide data about R&D as a whole, so their specific AI budgets are not known.

The allocated funds put toward AI R&D for each country are as follows. Canada invested an average of USD 19 million per year; China's R&D investment in 2018 was about USD 294,000 million; France invested
an average of USD 370 million per year; Japan's investment for FY2019 is USD 1,072 million; Korea's investment in 2018 was USD 1,950 million; and the US invested USD 5,280 million in R&D for FY2019. The leading countries are China and the US, followed by Korea and Japan, while France and Canada still need to enhance their financial support for AI development.
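For the two countries with fixed multi-year allocations, the per-year averages above follow from simple division of the program totals. The snippet below is a minimal illustrative check, not part of the original analysis; it assumes the five-year spans stated in the Canadian (2017–2022) and French (2018–2022) plans and the USD totals from Table 9.2.

```python
# Illustrative check of the per-year averages for the two countries with
# fixed five-year AI allocations (totals in millions of USD, from Table 9.2).
five_year_totals_musd = {
    "Canada": 95,    # CAD 125 M converted at March 2019 rates
    "France": 1850,  # EUR 1,500 M converted at March 2019 rates
}

for country, total in five_year_totals_musd.items():
    print(f"{country}: USD {total / 5:.0f} M per year")

# Expected output:
# Canada: USD 19 M per year
# France: USD 370 M per year
```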
Conclusions and Recommendations

An analysis of six national AI strategic policies, using a framework of technological, economic, social, and governmental-geopolitical categories, revealed that each AI strategy is unique to the context of its country. Each country has different priorities as well as different budgets. Policymakers must be aware that AI is already a part of our daily lives, and they face the challenge of regulating technological change in order to develop effective AI policies.

Three broad recommendations can be extracted from the present study. First, it is important to fund AI R&D initiatives while keeping in mind the importance of training human resources and cultivating AI talent, not only by supporting education but also by training government staff to make wise decisions on AI matters. The second is to invest in AI infrastructure and create a suitable environment for public-private investment. The third is to regulate AI development and deployment based on its ethical and legal implications and its impacts on society.
References

Allen, Gregory C. 2019. Understanding China's AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security. Washington, DC: Center for a New American Security.
Bock, Pauline. 2019. "Meet the Brain Macron Tasked with Turning France into an AI Leader." Wired Magazine, February 15. https://www.wired.co.uk/article/cedric-villani-france-artificial-intelligence.
Brookfield Institute. 2018. "AI Futures Policy Labs: A Series of Workshops for Emerging Policymakers." https://brookfieldinstitute.ca/project/aifutures-policy-labs-a-series-of-workshops-for-emerging-policymakers/.
China Institute for Science and Technology Policy at Tsinghua University. 2018. China AI Development Report 2018. Beijing: Tsinghua University.
CIFAR [Canadian Institute for Advanced Research]. 2017. "CIFAR Pan-Canadian Artificial Intelligence Strategy." https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy.
Foundation for Law & International Affairs. 2017. "China's New Generation of Artificial Intelligence Development Plan." Translated by Floria Sapio, Weiming Chen, and Adrian Lo. https://flia.org/notice-state-council-issuingnew-generation-artificial-intelligence-development-plan.
France. DITP [Direction Interministérielle de la Transformation Publique]. 2018. "Appel à Manifestation d'intérêt Intelligence Artificielle: 6 Lauréats à Découvrir!" [Call for Those Interested in Artificial Intelligence: 6 Winners to Discover!]. November 21. https://www.modernisation.gouv.fr/outils-etmethodes-pour-transformer/appel-a-manifestation-dinteret-intelligence-artificielle-annonce-des-laureats.
France. MESRI [Ministère de l'Enseignement Supérieur, de la Recherche et de l'Innovation]. 2018. Stratégie Nationale de Recherche en IA [National Strategy of Research on AI]. Paris: Ministry of Higher Education, Research and Innovation, and the Ministry of State for Digital Affairs.
Gouvernement de la République Française. 2018. "France and Canada Create New Expert International Panel on Artificial Intelligence." Gouvernement, December 7. https://www.gouvernement.fr/en/france-and-canada-create-new-expert-international-panel-on-artificial-intelligence.
Government of Canada. 2017. "Budget Plan 2017. Building a Strong Middle Class. Growing Canada's Advantage in Artificial Intelligence." https://www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf.
Iglauer, Philip. 2016. "South Korea Promises $3b for AI R&D After AlphaGo 'Shock'." ZDNet, March 22. https://www.zdnet.com/article/south-koreapromises-3b-for-ai-r-d-after-alphago-shock/.
Japan. Artificial Intelligence Technology Strategy Council. 2017. Artificial Intelligence Technology Strategy. Tokyo: Ministry of Internal Affairs and Communications and NEDO Technology Strategy Center.
Japan. Cabinet Office. 2017. "Report on Artificial Intelligence and Human Society." Advisory Board on Artificial Intelligence and Human Society. https://www8.cao.go.jp/cstp/tyousakai/ai/summary/aisociety_en.pdf.
———. 2018a. "人工知能技術戦略会議" [Artificial Intelligence Technology Strategy Council]. https://www8.cao.go.jp/cstp/tyousakai/jinkochino/6kai/siryo1.pdf.
———. 2018b. "人間中心の AI 社会原則 (案)" [Human-Centered AI Social Principles. Draft]. https://www.cao.go.jp/cstp/tyousakai/humanai/ai_gensoku.pdf.
Japan. MIC [Ministry of Internal Affairs and Communications]. 2017. "Draft AI R&D Guidelines for International Discussions." The Conference toward AI Network Society. https://www.soumu.go.jp/main_content/000507517.pdf.
———. 2018. "Draft AI Utilization Principles." The Conference toward AI Network Society. https://www.soumu.go.jp/main_content/000581310.pdf.
Kang, Ki-Hun. 2018. "한국 AI, 미국에 1.8 년 뒤지고 중국 추월 당해…정부, 2.2 조원 들여 따라잡는다" [Korean AI is 1.8 Years Behind the US and Overtaken by China… Government is Trying to Catch Up with 2.2 Trillion Won]. Korea Joongang Daily, May 15. https://news.joins.com/article/22625271.
Korea. MSIP [Ministry of Science, ICT and Future Planning]. 2016. Mid- to Long-Term Master Plan in Preparation for the Intelligent Information Society: Managing the Fourth Industrial Revolution. Seoul: Ministry of Science, ICT and Future Planning.
Korea. MSIT [Ministry of Science and ICT]. 2018a. "I-Korea 4.0 실현을 위한 인공지능(AI) R&D 전략" [Artificial Intelligence (AI) R&D Strategy for Realizing I-Korea 4.0]. http://www.4th-ir.go.kr/article/download/39.
———. 2018b. "세계적 수준의 인공지능 기술력 확보에 2.2조원 투자" [Investment of KRW 2.2 Trillion to Secure World-Class Artificial Intelligence Technology]. https://www.msit.go.kr/web/msipContents/contentsView.do?cateId=mssw311&artId=1382727.
McCarthy, John. 2007. "What Is Artificial Intelligence." Stanford University. http://www.formal.stanford.edu/jmc/whatisai.html.
National Bureau of Statistics of China. 2019. "Statistical Communiqué of the People's Republic of China on the 2018 National Economic and Social Development." February 28. http://www.stats.gov.cn/english/PressRelease/201902/t20190228_1651335.html.
New America. 2018a. "Translation: Chinese Government Outlines AI Ambitions through 2020." Translated by Paul Triolo, Elsa Kania and Graham Webster. January 26. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/.
———. 2018b. "Read What Top Chinese Officials are Hearing about AI Competition and Policy." Translated by Cameron Hickert and Jeffrey Ding. November 29. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/read-what-top-chinese-officials-are-hearing-about-ai-competitionand-policy/.
Permanent Delegation of Japan to UNESCO. 2019. "AIの倫理に関するハイレベル会合の開催" [High Level Meeting on AI Ethics]. https://www.unesco.emb-japan.go.jp/itpr_ja/AI2019.html.
P.R. China. Information Office of the State Council. 2017. "新一代人工智能发展规划" [New Generation Artificial Intelligence Development Plan (AIDP)]. http://www.gov.cn/zhengce/content/2017-07/20/
content_5211996.htm.
P.R. China. MIIT [Ministry of Industry and Information Technology]. 2017. "促进新一代人工智能产业发展三年行动计划(2018–2020年)" [Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry]. http://www.miit.gov.cn/n1146295/n1652858/n1652930/n3757016/c5960820/content.html.
PressReleasePoint. 2018a. "RBC Foundation Supports Advancing Ethical AI with $1 Million Commitment to CIFAR." October 9. http://www.pressreleasepoint.com/rbc-foundation-supports-advancing-ethical-ai-1-million-commitment-cifar.
———. 2018b. "CIFAR Names 29 Researchers as Canada CIFAR AI Chairs at AI Can Meeting; Attends G7 Conference on AI." December 16. http://www.pressreleasepoint.com/cifar-names-29-researchers-canada-cifar-ai-chairsaican-meeting-attends-g7-conference-ai.
Prime Minister of Japan and His Cabinet. 2017. "Press Conference by Prime Minister Shinzo Abe." September 25. https://japan.kantei.go.jp/97_abe/statement/201709/_00011.html.
Sankei News. 2019. "平成31年度AI予算、1.5倍1200億円 自民幹部「まだ1桁足りない」" [FY2019 AI Budget Increased 1.5 Times to 120,000 Million Yen; LDP Senior Official Says "It's Still One Digit Short"]. February 7. https://www.sankei.com/politics/news/190207/plt1902070001-n1.html.
Seow, Bei Yi. 2019. "Singapore, France to Boost Cooperation in Science and Tech Research." The Straits Times, March 16. https://www.straitstimes.com/business/economy/spore-france-to-boost-cooperation-in-science-and-techresearch.
Tan, Tieniu. 2018. "人工智能的创新发展与社会影响。" [The Innovative Development and Social Impact of Artificial Intelligence]. 7th Lecture for the 13th National People's Congress Standing Committee Speeches on Special Topics. http://www.npc.gov.cn/zgrdw/npc/xinwen/2018-10/29/content_2065419.htm.
UNESCO. 2019. "Principles for AI: Towards a Humanistic Approach? A Global Conference." https://en.unesco.org/artificial-intelligence/principlesai-towards-humanistic-approach/programme.
Université de Montréal. 2018. "Montréal Declaration for Responsible Development of Artificial Intelligence." Montréal Declaration Responsible AI. https://www.montrealdeclaration-responsibleai.com/.
U.S. DARPA [Defense Advanced Research Projects Agency]. 2018. "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies." September 7. https://www.darpa.mil/news-events/2018-09-07.
U.S. DoD [Department of Defense]. 2019. Summary of the 2018 Department of Defense Artificial Intelligence Strategy. Harnessing AI to Advance Our Security and Prosperity. Arlington, VA: United States of America. Department of Defense.
U.S. Executive Office of the President. 2016. Artificial Intelligence, Automation, and the Economy. Washington, DC: White House.
U.S. NITRD [Networking and Information Technology Research and Development]. 2016. Supplement to the President's Budget for Fiscal Year 2017. Arlington, VA: United States of America. National Science and Technology Council.
———. 2017. Supplement to the President's Budget for Fiscal Year 2018. Arlington, VA: United States of America. National Science and Technology Council.
———. 2018. Supplement to the President's FY 2019 Budget. Arlington, VA: United States of America. National Science and Technology Council.
U.S. NSF [National Science Foundation]. 2018. "National Science Board Statement on Global Research and Development (R&D) Investments NSB-20189." February 7. https://www.nsf.gov/nsb/news/news_summ.jsp?cntn_id=244465.
U.S. NSTC [National Science and Technology Council]. 2016a. The National Artificial Intelligence Research and Development Strategic Plan. Washington, DC: United States of America. Executive Office of the President.
———. 2016b. Preparing for the Future of Artificial Intelligence. Washington, DC: United States of America. Executive Office of the President. NSTC Committee on Technology.
Villani, Cédric, Marc Schoenauer, Yann Bonnet, Charly Berthet, Anne-Charlotte Cornut, François Levin, and Bertrand Rondepierre. 2018. For a Meaningful Artificial Intelligence: Towards a French and European Strategy. Paris: Conseil National du Numérique (French Digital Council).
Villani Mission on AI. 2018. "AI for Humanity: French Strategy for Artificial Intelligence." Paris: Conseil National du Numérique (French Digital Council). https://www.aiforhumanity.fr/en/.
White House. 2018. Summary of the 2018 White House Summit on Artificial Intelligence for American Industry. Washington, DC: Executive Office of the President of the United States, Office of Science and Technology Policy.
———. 2019. "Executive Order 13859 of February 11, 2019 Maintaining American Leadership in Artificial Intelligence." Federal Register 84 (31): 3967–3972. https://www.govinfo.gov/content/pkg/FR-2019-02-14/pdf/2019-02544.pdf.
World Bank. 2019. "Research and Development Expenditure (% of GDP)." Indicators. Science and Technology. https://data.worldbank.org/indicator.
Zhang, Byoung-Tak. 2016. "Humans and Machines in the Evolution of AI in Korea." AI Magazine 37 (2): 108–112.
CHAPTER 10
Militarization of Artificial Intelligence: Progress and Implications

Shaza Arif
Introduction

Artificial intelligence (AI) has been a captivating technology that has seized the attention of policymakers in recent years. The phenomenon of AI is not novel, yet the significant advances under way in this sector have transformed its potential uses. Research on AI began in the 1940s, but after 2010 it picked up pace thanks to faster computer processing, the availability of more data, and advances in machine learning. Today, states around the world are eyeing this technology in order to boost the capabilities of their militaries. The twenty-first century has witnessed towering advances in the field of AI. There is no single agreed definition of AI, as its broad scope makes consensus extremely arduous to achieve; the most elementary way to define AI, however, is the programming of machines such that their responsive and cognitive capacities equal or surpass those of human beings. AI is
characterized into two domains: weak AI and strong AI, the latter also known as general AI. Weak AI is task-specific; it can execute whatever it has been programmed to do and lacks the capacity to go beyond that.1 Strong AI, on the other hand, can perform a number of tasks spontaneously, employing a cognitive element that allows it to carry out tasks it has not been specifically assigned while taking into account the context of the situation.2 At present, only weak AI is available, with some forecasts suggesting that strong AI may arrive by 2050.

Weak AI has delivered results at a faster rate than policymakers broadly anticipated. In 2016, when Lee Sedol, a professional Go player of 9-dan rank, lost a game of Go to AlphaGo, a computer program developed by Google's DeepMind,3 the result came as a shock, since Go is an extremely deceptive game that demands exceptional intelligence to win. The event thus served as a breakthrough and an eye-opener regarding the technology's potential utility in a number of other arenas. Moreover, this development had not been expected until around 2024, which underscores how fast and unpredictable AI's progress is. Furthermore, within a year AlphaGo itself was defeated by another program, AlphaGo Zero, which used machine learning to discover strategies in Go that human beings had never devised, despite having played Go, one of the oldest games in existence, for an exceedingly long period. Similarly, more software may appear in the future with the capacity to outstrip what human beings expect of the technology.

AI is a dual-use technology that, like nuclear power, can serve both civilian and military purposes. The military applications of AI comprise command and control, autonomous weapons, surveillance, reconnaissance, cyber-attacks, integrated military networks, planning/training, and logistics.4 The prodigious efficiency of autonomous weapons and their strikingly short reaction times have captivated states seeking to integrate this emerging technology into the defense sector. President Putin accentuated the importance of AI by stating that "Artificial intelligence is the future, not only for Russia, but for all humankind."5 He further added, "It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."
Though AI has a number of potential uses in the military sector, the primary domain that experts and policymakers associate with the militarization of AI is the allotment of substantial autonomy to weapons, pulling the human out of the loop. The principal focus of the militarization of AI has been Lethal Autonomous Weapons (LAWS), more popularly known as killer robots, whose ultimate objective is to trace and engage targets without human operators involved. Three models of autonomy are linked with AI. First, weapons could be given controlled autonomy, with human beings exercising a fair amount of control. Second, weapons could be autonomous while human beings maintain oversight over them. Lastly, human beings could either voluntarily cede to machines the authority they previously held or be involuntarily deprived of that authority when machines learn to program themselves and render human authority redundant.

Initially, the U.S. enjoyed a technological edge over all other states and led the AI race, until recently, when China accelerated its efforts to break the status quo and take the lead. Moreover, observing this trend, other countries have stepped in to join the race, as the technology is very tempting to pursue. This chapter discusses the impetus behind this emerging technology's entry into the defense sector and the related developments in the U.S., China, Russia, and India. It also sheds light on the implications that the militarization of AI will bring.
The U.S. and the Militarization of AI

In the U.S., there has been extensive research in the field of AI, which gives it fair leverage over the others. The American military has employed AI in its weaponry since the Cold War. However, recent military policymaking in the Pentagon echoes the armed forces' desire to make AI a critical aspect of current military doctrine.6 Technological superiority has been the hallmark of major powers. During the Cold War, both the U.S. and the Soviet Union engaged in a competition to maintain a strategic edge over the other. Following the end of the Cold War, the U.S. emerged as the sole superpower and enjoyed its "unipolar
moment" in the world, which shaped a new world order. It had the most power, was the most influential, had the most advanced military, and was the state with the highest military spending. The pattern prevailed for a number of years until the U.S. sensed that other countries were bent on challenging its sole hegemony in military might. One of the immediate threats in this regard was China, whose economy was progressing at an expedited rate. Moreover, certain American policies, such as the adoption of liberal hegemonic aspirations to engineer democracies around the world, provoking China through a heavy military presence in the South China Sea, and taking NATO into Russia's backyard, also kindled a quest by these countries to challenge the U.S. by strengthening their capabilities. Secondly, the War on Terror, commenced after the attacks on the twin towers on 11 September 2001, has lingered on for such a protracted period that individuals born after the attack are now old enough to go and fight in Afghanistan. Moreover, there are a number of other areas where the U.S. military is directly or indirectly involved. Hence, considering the endless conflicts in which the U.S. military is engaged, policymakers want to bring substantial advancements to the military in order to cope with impending challenges. In 2014, the then Defense Secretary Chuck Hagel called for steps to augment the AI capabilities of the U.S. armed forces, which became apparent in the "Third Offset Strategy."7 An offset strategy is an attempt at technological advancement with the objective of flipping the balance of power against one's adversary. The third offset strategy comprises the amalgamation of technology with military operations in order to bolster conventional deterrence. It also includes electronic warfare, social media surveillance, and cyber-defense measures. The strategy was brought into action in light of the impending threat from Russia and China, each of whom was making rapid advances in AI. Even the chairman of the Senate Armed Services Committee (SASC), James Inhofe, affirmed that Russia and China are better positioned with respect to the advancement of AI.8 Mike Griffin, the Pentagon's undersecretary for research and engineering, asserted that China has conducted twenty times as many hypersonic tests as the U.S., and added that, in view of the Chinese advances, the U.S. should also work toward hypersonic capabilities.9 Eric Schmidt, chairman of the Defense Innovation Board and former Google executive chairman, believed the U.S. was five years ahead of China until a recent visit to China, after which he revised his assessment, stating, "We were lucky if we had 6 months."
In fact, Russia and China are allocating enormous resources to the weaponization of AI, and a number of U.S. policymakers continue to hold the view that the U.S. is not investing the required amount of money in it. Today, the third offset strategy is regretted by those who advocated it in the first place, having proved futile, which has forced policymakers to resort to different strategies to counter the threat to the dominance of U.S. military might. The Joint Artificial Intelligence Center (JAIC), established in June 2018, is charting out plans and a framework for employing AI in the U.S. military; it currently oversees more than 600 AI projects.10 In 2018, the Department of Defense (DOD) Artificial Intelligence Strategy was issued, laying out a framework for strengthening U.S. military capacities, both to improve operational capabilities and for the larger objective of shielding the U.S. from security vulnerabilities and threats. The Pentagon wants systems that can process more data than human beings can, enabling better monitoring of the battlefield and more rational decision-making for the U.S. armed forces. Project Maven, launched in April 2017, is a Department of Defense initiative that supports battlefield command and control. The program relies on computer-vision algorithms to help the civilian and military analysts who must deal with the abundant data collected every day in the course of counter-insurgency and counter-terrorism activities. This ultimately helps narrow the prodigious amount of information available to decision makers in Washington, making the interpretation of data more convenient and speedier than the conventional procedure. Furthermore, it streamlines the distribution of data to the many users across military networks. The first phase of the project cost around $70 million, a very small share of the U.S. military budget of roughly $600 billion. The project also aims at better drone-strike capabilities, sensing imagery in order to identify targets at far-off places, which could reduce the burden on military personnel. There has been some ambiguity over whether complete autonomy will be granted to the systems involved. However, the Pentagon's statement that "Technologies underpinning unmanned systems
would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force" signals that a substantial amount of autonomy could be employed in the initiative. Even more telling is the avid support the U.S. accords to the development of Lethal Autonomous Weapons Systems (LAWS). The Department of Defense has also been working on the Joint Enterprise Defense Infrastructure (JEDI), a cloud computing system to assist U.S. armed forces around the world. The project is expected to take ten years and cost around $10 billion. Its ultimate objective is to set the stage for algorithmic warfare, on the premise that AI is removing the conventional battlefield from warfare and that warfare's increasingly ambiguous nature will require such measures. Likewise, the Defense Advanced Research Projects Agency (DARPA) has developed an autonomous ship, the Sea Hunter, able to patrol the sea for a considerable period of time without requiring any crew for its operation. This autonomous vessel can also perform anti-submarine operations, increasing the vulnerability of adversaries' forces at sea. The U.S. has also championed the use of Lethal Autonomous Weapons Systems, more famously known as killer robots: autonomous weapons equipped with the capability to inflict substantial damage in the air, underwater, and on land, and able to track and engage their targets without relying on any human assistance. The U.S. has been very resistant toward any U.N. resolution calling for a ban on the development and use of these weapons. One challenge the U.S. faces in its ambition toward the militarization of AI is cooperation from the private sector, which is the primary source of AI; in the U.S., the private sector is not bound to follow government orders. In June 2018, Google declined to renew its contract for Project Maven as a consequence of immense protest by its employees.11 Approximately 3,000 Google employees opposed the weaponization of the technology, believing that the militarization of AI would cross ethical limits and tarnish the brand's image; their protests forced the senior leadership to back out of the project. Google's withdrawal was a major blow to the Pentagon and has been a source of frustration ever since. Moreover,
Google has been termed "unpatriotic" over its withdrawal, given its simultaneous engagement with the Chinese government over AI. "I have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don't want to work for the US military," General Joe Dunford, chairman of the Joint Chiefs of Staff, commented while speaking at a conference in November. The controversy over Project Maven further strained relations between AI companies and the Pentagon. The U.S. government has to overcome this trust deficit and engage the private sector. Nonetheless, some companies, such as Microsoft, Amazon, and Clarifai, continue to aid the U.S. government in the weaponization of AI.
China as an Aspirant to AI Leadership

China has emerged as a major aspirant to militarizing its armed forces with AI, with its military making expedited efforts in full swing. China wants to exploit the broad potential of AI to revolutionize its military, and it has appeared as a front runner in the field. China has made AI a national policy and aims to be the AI leader by the year 2030. President Xi Jinping has called for following "the road of military-civil fusion style innovation," and this fusion has taken the form of a national strategy. A number of reasons have compelled China to turn to this emerging technology. First, China envisions a multipolar world with itself as an extremely relevant actor in it. AI would play a major role in strengthening the Chinese military, given that military might is extremely crucial for major powers and that the sophistication of AI would allow China to overcome the numerical superiority of the U.S. armed forces. China is looking toward new and smarter strategies. Instead of piling up conventional or nuclear stockpiles, it is bent on acquiring technology that offsets the leverage of numerical strength: China is not interested in matching the stockpiles the U.S. armed forces amassed three decades ago during the Cold War; rather, it aims for new technology that will produce more efficient and decisive results in the future. China is aware that future wars will involve algorithmic warfare, and it is therefore in its best interest to excel in this field.
The more data available, the better the AI. Since China has more data, it is making significant advances in AI and producing more research in this field. It is estimated that China currently holds 20% of the world's data, a share expected to rise to 30% in the future. With more data, China will be well placed to incorporate this technology at a faster rate, for more data expedites the process of machine learning and makes it more competent.12 China is also investing heavily in hypersonic and supersonic technology. Hypersonic technology is emerging as a game-changer and will act as a force multiplier for military strength. It has been claimed that hypersonic missiles would render a number of targets extremely vulnerable while themselves being highly evasive, able to avoid interception by incoming defenses. Hence China is playing aptly, investing in technology that will give it a platform to emerge as a clear winner in the field. China also has plans for an underwater base in the South China Sea that would host autonomous submarines.13 Apart from strengthening China's position in the South China Sea, the base would enhance its second-strike capability. One reason behind this move is to deter American ships, whose continued presence in the South China Sea is a source of contention between the two countries. The Chinese government exercises strong control over the private sector, which is bound to meet all government demands; there is no resistance from private companies, some of which are eager to work with the government on militarizing AI. Beyond the private sector, there is a surprising level of cooperation with academia as well: Tsinghua University, for instance, has launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to serve as a platform for the dual use of AI. This combined approach will certainly ease the path toward integrating the technology. Although China has expressed a desire to conclude an agreement banning the use of fully autonomous weapons, it has no appetite for banning their development. China has been working on next-generation stealth drones and is bent on acquiring swarming technology. In the future, it is expected that China will surpass the
U.S. due to the high level of research taking place in AI and the intense cooperation that exists between the institutions concerned.
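The earlier claim that more data yields more competent machine learning can be made concrete with a learning curve: train the same model on progressively larger slices of a dataset and watch held-out accuracy climb. A minimal sketch, assuming scikit-learn and its bundled digits dataset are available; all parameter choices are illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fixed held-out test set; only the amount of training data varies.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 100, 200, 400, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])   # same model, more data each pass
    print(f"{n:5d} training samples -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Run on this small dataset, accuracy typically rises steadily with the size of the training slice, which is the sense in which a data-rich actor holds a structural advantage.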
Russia and AI

Russia is another state bent on acquiring this technology. Russia's interest in AI can be gauged from the statements of President Vladimir Putin. Similarly, Russia's Chief of the General Staff, General Valery Gerasimov, has predicted "a future battlefield populated with learning machines."14 Russia has taken the position that the international community should give second thought to the calls for banning lethal autonomous weapons, in view of their potential benefits. After President Putin underscored the importance of AI, the question of how AI would be integrated into Russian policy became a widespread debate in the public, social, and political spheres. The scientific community also became proactive about the crucial role AI will play in the future. Russia is currently working on a national strategy for artificial intelligence. In 2018, a conference named "Artificial Intelligence" was organized by the Ministry of Economic Development at the Military-Technical Forum "ARMY-18," after which a number of further conferences were organized around the concept.15 The Russian military divides combat robots into three generations: first-generation robots with software and remote control that can only function in an organized environment; second-generation robots that are adaptive, having a kind of sensory organ and the ability to function in a random environment, i.e., to adapt to environmental changes; and third-generation robots, smart robots equipped with an AI-based control system (so far available only as laboratory models). In 2000, the Russian Defense Ministry adopted an integrated target program, "Robotization of Weapons and Military Equipment – 2015."16 The program allowed R&D work to be carried out successfully and experimental mock-up models of ground-based robotic systems to be produced and tested, but development and engineering never started, which in effect led to the suspension of research and development in ground-based military robotics. Russia has always been bent on retaining the tag of a major power and has taken various steps in this regard. The acquisition of
new forms of technology always leaves an overwhelming impact on adversaries. Recently, relations between the U.S. and Russia have been strained by a number of factors, such as the Russian annexation of Crimea in 2014, alleged Russian meddling in American elections, and increased Russian involvement in Syria. These contentions have expedited Russia's quest for access to the latest technology. Russia has taken a number of steps signaling its interest in AI, the most pertinent being statements by the Russian President, who has repeatedly emphasized the importance of acquiring this new and evolving technology in order to close the existing gap between Russia and its adversaries. In an interview with Corriere della Sera, an Italian newspaper, Putin stated: "Compare the Russian spending on defense – about 48 billion dollars, and the US military budget, which is more than 700 billion dollars. What an arms race in reality can be here? We're not going to get involved in it. But we also have to ensure our security. That is why we are bound to develop the latest weapons and equipment in response to the US increase in military spending and its clearly destructive actions." According to SIPRI, the Russian defense budget decreased by 16% in 2017 compared to 2016, and by another 3.5% in 2018. Hence it is optimal for Russia to resort to AI, which offers an opportunity to bridge the gaps entrenched in its military capabilities. Russia does not yet have a declared AI policy, but it is making rapid advances in the field. The National Center for the Development of Technology and Basic Elements of Robotics was opened in 2015 by the Foundation for Advanced Research Projects, the Russian equivalent of DARPA. In March 2018, Russian Defense Minister Shoigu urged more cooperation between civilian and military networks to advance AI, in light of the technological threats to Russian security. In January 2019, reports claimed that Russia was developing an autonomous drone able to take off, execute its mission, and land back without any human interference, though weapons use would require human approval. Furthermore, akin to the U.S., Russia supports the development of lethal autonomous weapons (LAWS) and has tried to limit the number of days the issue of banning their development is discussed at the United Nations.
Russia has also been working on a new city, "Era," devoted solely to the creation of military artificial intelligence systems and similar technologies.17 In February 2019, President Vladimir Putin announced the end of a key stage of testing of the nuclear-powered unmanned underwater vehicle "Poseidon," which can deliver a conventional or a thermonuclear cobalt warhead of up to 100 megatons against an enemy's naval ports and coastal cities. Russia's largest gun manufacturer, Kalashnikov, announced in 2017 that it had built a fully autonomous combat module able to identify and engage targets using neural network technologies.18 Russia is not at the level at which the U.S. and China currently stand, yet it is gradually making its way toward being an important member of the AI club.
India and the Race for AI

India is also among the countries enticed by this emerging technology. India is seeking ways to assert itself as one of the major powers in the transitioning multipolar world and is in full swing with the modernization of its forces. India is making progress in AI and has issued a paper stating its AI strategy. The Defence Research and Development Organisation (DRDO) has commenced initiatives to militarize AI and is opening new avenues to work with the Indian armed forces in order to strengthen their capabilities. As far as the private sector is concerned, a number of companies are willing to work with the government on the militarization of AI. In 2019, N. Chandrasekaran, the chairman of Tata Sons, agreed to work with the government on this task and signed an agreement with the Indian armed forces. India, therefore, will not face the hindrance of securing private-sector cooperation. Moreover, the IT industry in India is already eager to work with the Indian armed forces on such tasks. One stated reason for India opting for this technology is that China, its longtime foe and strategic regional rival, is making expedited efforts toward the same aim, so India should also enter this domain. India sees China as an obstacle to its regional hegemony. Therefore, India is taking steps to enable it
to counter China in South Asia and avoid lagging behind in this race. Adding strength to this argument, General Bipin Rawat, the former Chief of the Indian Army, stated, "since our adversaries are revolutionizing the scope of their defence capacities, it is better that we catch up with them before it is too late." Other policymakers in India have gone so far as to say that India will lose this race if it does not take the required steps quickly. Secondly, although India cites China as the primary driver of its progress in AI, the underlying fact is that it is also augmenting its capabilities against Pakistan, its arch-rival since independence. In 1947, after British rule in the Subcontinent ended, the new state of Pakistan emerged on the world map, a development India has never fully accepted, and the two countries have fought multiple wars. Most recently, in February 2019, India and Pakistan met in an aerial engagement in which India lost a MiG-21 and, reportedly, an Su-30. This was followed by towering tensions between the archrivals that continued for days; the international community had to intervene, fearing a nuclear war between the two countries, both of which hold stockpiles of nuclear weapons. Hence, to gain leverage over Pakistan, India is also pursuing this technology on the premise that it would help it dominate Pakistan. India's Land Warfare Doctrine 2018 states that India will make substantial advances with AI in its military modernization to overcome future challenges. Furthermore, the Indian leadership is confident that its military will be upgraded with this technology in the near term, as continuous efforts are being made in this regard. Currently, the Centre for Artificial Intelligence and Robotics (CAIR), set up by the DRDO, is the department tasked with finding new avenues for innovation and integrating AI into defense. Unmanned "Muntra" tanks, launched by the Chennai labs in 2017, are now part of the Indian armed forces; different versions have been developed, such as Muntra S for surveillance, Muntra N for areas of high nuclear risk, and Muntra M for mine detection. The Multi Agent Robotic Framework (MARF), which would act like a team of soldiers and assist the Indian Army in combat operations, is in development. Rustom II, an unmanned aerial vehicle with the capability to carry out surveillance at a distance of 250 km, was declared successful in February 2018. The DRDO is also in the process of obtaining more autonomous aerial vehicles. DAKSH robots, which are
autonomous in nature and capable of defusing bombs in dangerous locations, as well as negotiating complicated terrain, have also been developed, and around 200 of them now serve in the Indian armed forces. These efforts are gathering pace under the Modi government, which is determined to make sweeping advances in the Indian armed forces, in part to appease domestic Hindu hard-liners; the rationale is to elevate the political stature of the regime by engaging the domestic public with such measures. In the future, India could also apply AI-based surveillance to its citizens in light of rising protests, and particularly in Kashmir, where a separatist movement is brewing in response to what is seen as oppressive government behavior. Though India has a long way to go before it can catch up with the front runners in this field, it has stepped into the race and will continue to augment its strength.
Implications

There are competing claims among experts about how AI will affect the nature of warfare. The first school of thought asserts that AI will have an evolutionary effect, improving reaction time and accuracy while keeping human oversight. The other school posits that AI will completely transform the dynamics of warfare, imparting more autonomy to machines and ultimately marginalizing human control. Artificial intelligence is leading the world to a stage where algorithmic warfare will change the dynamics of the battlefield. Technological experts such as Elon Musk have repeatedly drawn policymakers' attention to how dangerous AI is to employ on the battlefield. According to Musk, there is a threshold up to which AI can be controlled and a human kept in the loop; once the threshold is crossed, humans lose any substantial control over autonomous weapons as the weapons learn to program themselves. Hence, though we may not truly want it, human beings may bring machines to a point where the machines are able to control human beings. AI has also been a blessing, aiding in the prevention of the exploitation of young minds for notorious activities. The image-matching technique
has played its part in removing terrorism-related and violence-provoking content from social media when it re-emerges from other accounts or platforms; a minimal sketch of such hash-based image matching appears at the end of this section. In the same manner, Facebook is keen to employ machine-learning algorithms to keep such content from appearing in users' news feeds. Similarly, surveillance techniques employed on terrorist networks have aided in identifying individuals involved in lethal activities. Hence, there are a number of platforms through which multiple benefits can be extracted from this technology; even so, the militarization of AI is a very risky and lethal step which can bring deadly consequences in its wake. The militarization of artificial intelligence is certainly going to bring paramount changes. First, this wave of militarization will trigger an arms race, with each state investing heavily in the technology to avoid lagging behind. The weaponization of AI will also raise challenges of accountability. If an autonomous weapon is intentionally or accidentally programmed in such a way that it creates a strategic mishap, killing civilians or pushing escalation, there will be questions over who is to be held accountable.19 In such scenarios, would the person who programmed it be held accountable for the damage inflicted, or would there be justifications grounded in the machine's own reasoning? If, for instance, a machine kills a civilian on the premise that the individual posed a potential threat, such circumstances can create major accountability gaps and can make the weapons' use more likely. These challenges need to be addressed before autonomous weapons are deployed by states. Adding to the problem is the fact that algorithms are prone to deception and malware, which can manipulate systems into executing attacks that would harm the whole of mankind. Likewise, AI will generate ethical challenges. International Humanitarian Law (IHL) requires that civilians and combatants be treated differently, and that even unknown or undefined individuals be accorded the status of civilians. Autonomous weapons, however, would struggle to abide by such terms; hence they pose a serious threat to the future of humanity.20 Warfare is cruising toward ever more autonomy and becoming less reliant on human commands. AI is progressing at a rate far beyond the expectations of AI experts. It was forecast that a machine would be able to defeat a human at the game of Go
in the year 2027; contrary to expectation, this happened in 2016.21 This progress signals that AI's unpredictable nature makes it difficult to comprehend the uncertainty it brings along. Various countries continue to raise their voices against banning the development of lethal autonomous weapons (LAWS), even though these weapons would have a destabilizing impact on the global security environment. The fact of the matter, however, is that AI is here to stay, and abandoning it is no longer on the option list. The major powers have initiated this race, and it will continue, akin to the arms race of the Cold War. Nor will the AI race remain limited to a handful of states; it will proliferate and provoke regional arms races as well. The Chinese and Russian inclination to militarize AI has tagged the technology as a marker of major-power status. India, which has long harbored hegemonic ambitions in its region, has consequently also opted to integrate AI into its military. This will force Pakistan to take countermeasures, as India and Pakistan share a very hostile relationship. Indian steps will ultimately push Pakistan to pursue the same kind of technology, and a new arms race may take root between neighbors who are already arch-rivals and who suspended diplomatic engagement after their aerial encounter of February 2019, in which an Indian jet was shot down. In case of escalation, these autonomous weapons could be a source of extreme devastation, and it would be highly strenuous to "simply turn them off," as humans would by then have lost control over the situation. Such circumstances are possible even with weak AI and are bound to worsen as more autonomy is introduced into weapons, since an AI system will overcome every possible barrier in order to complete the task it was assigned. The AI will be extremely competent at completing the programmed task, yet there is only a slim chance that the method employed will be in line with the interests of mankind. The danger lies in the fact that no technology in the past has overtaken human intelligence, and an AI glaringly smarter than humans makes it unpredictable where AI might lead us in a crisis. There are competing claims that AI, despite being smarter than human beings, will not be in a position to control them, and that humans will somehow prevent it from taking a lethal form. The fact of the matter, however, is that any entity more intelligent than another is able to control it: human beings are able to control animals because they are more intelligent, and by the same logic human intelligence will not hinder AI
from controlling us: if we let go of our special position as the most intelligent beings on the planet, we inevitably surrender the control we have hitherto enjoyed.22 Autonomy is increasing, and every new military technology is more advanced and more autonomous than the previous one. There are significant chances that stealth drones with entirely new combat functions will be built by China, Russia, and the U.S., creating challenges in the future. Hackers are already searching for ways to deceive AI and turn it against its own source. Moreover, no military technology is immune to exploitation: there can be scenarios in which machines are disrupted, or in which AI-enabled systems spread false information that triggers a crisis and unrolls into direct confrontation between states, leaving humanity the sole sufferer of the technology. Moreover, the reaction time of autonomous weapons is so minute that human beings would be unable to take de-escalatory measures to regain control of the situation. Weaponizing AI would open the way to further instability and chaos in global security. Investment by the major powers in the militarization of AI will put pressure on other states to follow similar patterns; many of them will not be able to master the technology and will remain dependent on advanced countries to meet their requirements for integrating it. In South Asia, which is characterized by three nuclear powers, the race will be lethal, as each side will be inclined to augment its capabilities. The introduction of this technology will damage the strategic stability of South Asia, which is already fragile and prone to conflict. The international community is prioritizing the strategic edge that comes with this technology over its ethical dimensions; this neglect can lead to crises that may unravel into devastation.
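The image-matching technique referred to earlier in this section is often implemented with perceptual hashing: an image is reduced to a short fingerprint, and re-uploads of known extremist content are caught by comparing fingerprints rather than raw pixels. Below is a minimal sketch of the "average hash" variant, assuming the Pillow library is installed; the file names are hypothetical stand-ins.

```python
from PIL import Image

def average_hash(path, size=8):
    """Perceptual 'average hash': shrink the image to 8x8 grayscale and
    record which pixels sit above the mean brightness (a 64-bit fingerprint)."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    # Number of differing fingerprint bits; small distance = near-duplicate.
    return bin(h1 ^ h2).count("1")

# Hypothetical files: a known banned image and a fresh upload.
known = average_hash("banned_content.jpg")
upload = average_hash("new_upload.jpg")

# A small Hamming distance survives re-encoding, mild resizing, and
# recompression, which is why re-uploads are caught despite edits.
if hamming(known, upload) <= 10:
    print("near-duplicate of banned content; flag for review")
```

The design point is that the fingerprint, not the file, is matched, so trivial alterations made when content "re-emerges from other accounts or platforms" do not defeat detection.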
Recommendations

• The militarization of AI should ensure that a human remains in the loop. Voices at various platforms currently suggest that humans will always be kept in the loop and that there is little chance of absolute autonomy being delegated to machines. However, the highly attractive benefits of AI may strengthen the desire of the powers to experiment with this technology.
• There is a dire need to draw up legislation on this technology, as it comes with great uncertainty. In the absence of laws and rules to govern it, there is ample space for escalation, which can produce unwanted results.
• The international community must factor in the risks associated with this technology and should devise a framework for placing limitations on it before it is too late.
• Countries such as the U.S. and China, which hold leverage over AI, should take concrete measures, as they can play an effective part in putting curbs on its potential misuse.
• Experts should also sit together to agree on a common definition of AI, as this would dispel much of the confusion already posed.
• There should be increased collaboration between the technology sector and the defense sector in order to curtail the prospects of AI precipitating calamitous results.
• Since AI is a highly disruptive technology that creates a strong sense of insecurity in states lagging behind in the race, and thereby grounds for escalation and pre-emption, there is a dire need for transparency in the activities and developments of the weaponization of AI, such as prior notification of military exercises.
Conclusion

It is certain from recent developments that future wars will be not only uncertain but also highly lethal, and future militaries will be equipped with ever greater capabilities. At the moment, human beings are voluntarily delegating authority to machines; these patterns may change in the future. For now the technology appears to be aiding human beings, yet in hindsight it may bring humanity to a verge where it has to compete with the technology in order to determine its own fate.
Notes

1. Jeff Kerns, "What's the Difference Between Weak and Strong AI?", Machine Design, February 2017, https://www.machinedesign.com/markets/robotics/article/21835139/whats-the-difference-between-weakand-strong-ai.
2. George Rajna, "Weak AI, Strong AI and Superintelligence", viXra, (2018): 04, https://vixra.org/pdf/1706.0468v1.pdf.
3. Cade Metz, "In Two Moves, AlphaGo and Lee Sedol Redefined the Future", Wired, March 3, 2016, https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/.
4. Kristofer Carlson, "The Military Application of Artificial Intelligence", ResearchGate, (2019), https://www.researchgate.net/publication/335310524_THE_MILITARY_APPLICATION_OF_ARTIFICIAL_INTELLIGENCE.
5. Jessica Harris, "Weapons of the Weak: Russia and AI Driven Asymmetric Warfare", Brookings, (2018), https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/.
6. Brandon Knapp, "Here's Where the Pentagon Wants to Invest in Artificial Intelligence in 2019", C4ISRNET, February 16, 2018, https://www.c4isrnet.com/intel-geoint/2018/02/16/heres-where-the-pentagon-wants-to-invest-in-artificial-intelligence-in-2019/.
7. Jesse Ellman, "Assessing the Third Offset Strategy", Center for Strategic and International Studies, (2017): 2, accessed February 11, 2020, https://csis-prod.s3.amazonaws.com/s3fs-public/publication/170302_Ellman_ThirdOffsetStrategySummary_Web.pdf?EXO1GwjFU22_Bkd5A.nx.fJXTKRDKbVR.
8. Collin Clarke, "Artificial Intelligence: Are We Losing The Race?", Breaking Defense, February 12, 2019, https://breakingdefense.com/2019/02/artificial-intelligence-are-we-losing-the-race/.
9. Sydney J. Freedburg, "US Must Hustle On Hypersonics, EW, AI: VCJCS Selva & Work", Breaking Defense, June 21, 2018, https://breakingdefense.com/2018/06/us-must-hustle-on-hypersonics-ew-ai-vcjcs-selva-work/.
10. Kelley M. Sayler, "Artificial Intelligence and National Security", Congressional Research Service, 2019, https://fas.org/sgp/crs/natsec/R45178.pdf.
11. Nick Statt, "Google Reportedly Leaving Project Maven Military AI Program After 2019", The Verge, June 1, 2018, https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire.
12. Sarah Zhang, "China's Artificial-Intelligence Boom", The Atlantic, February 16, 2017, https://www.theatlantic.com/technology/archive/2017/02/china-artificial-intelligence/516615/.
13. Hill Chase, "China's Mysterious Underwater Base Features A.I. And Robots", InsideHook, March 11, 2019, https://www.insidehook.com/daily_brief/news-opinion/chinas-mysterious-underwater-base-features-robots.
14. Ecatarina Garcia, "The Artificial Intelligence Race: U.S., China and Russia", Modern Diplomacy, April 19, 2018, https://www.academia.edu/36451550/THE_ARTIFICIAL_INTELLIGENCE_RACE_US_China_Russia_by_Ecatarina_Garcia_Modern_Diplomacy_April_19_2018.
15. Vadim B. Kozyulin, "Militarization of AI from a Russian Perspective", ResearchGate, July 2019, https://www.researchgate.net/publication/335422076_Militarization_of_AI_from_a_Russian_Perspective.
16. Vadim B. Kozyulin, "Russia's Automated and Autonomous Weapons and Their Consideration from a Policy Standpoint", ResearchGate, (2016), https://www.researchgate.net/publication/309732151_Russia's_automated_and_autonomous_weapons_and_their_consideration_from_a_policy_standpoint.
17. Kristen Gronlund, "State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons", Future of Life Institute, May 9, 2019, https://futureoflife.org/2019/05/09/state-of-ai/?cn-reloaded=1.
18. Ibid.
19. Jayshree Pandya, "The Weaponization of Artificial Intelligence", Forbes, January 14, 2019, https://www.forbes.com/sites/cognitiveworld/2019/01/14/the-weaponization-of-artificial-intelligence/?sh=1a1c1e8b3686.
20. Owen Daniels and Brian Williams, "Day Zero Ethics for Military AI", War on the Rocks, January 28, 2020.
21. Chris Telley, "Info Ops Officer Offers Artificial Intelligence Roadmap", Breaking Defense, July 11, 2017, https://breakingdefense.com/2017/07/info-ops-officer-offers-artificial-intelligence-roadmap/.
22. Max Tegmark, "Benefits & Risks of Artificial Intelligence", Future of Life Institute, June 2016, https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
Bibliography

Carlson, Kristofer. "The Military Application of Artificial Intelligence". ResearchGate. (2019). https://www.researchgate.net/publication/335310524_THE_MILITARY_APPLICATION_OF_ARTIFICIAL_INTELLIGENCE.
Chase, Hill. "China's Mysterious Underwater Base Features A.I. And Robots". InsideHook. March 11, 2019. https://www.insidehook.com/daily_brief/news-opinion/chinas-mysterious-underwater-base-features-robots.
Clarke, Collin. "Artificial Intelligence: Are We Losing The Race?". Breaking Defense. February 12, 2019. https://breakingdefense.com/2019/02/artificial-intelligence-are-we-losing-the-race/.
Daniels, Owen, and Brian Williams. "Day Zero Ethics for Military AI". War on the Rocks. January 28, 2020.
Ellman, Jesse. "Assessing the Third Offset Strategy". Center for Strategic and International Studies. (2017). https://csis-prod.s3.amazonaws.com/s3fs-public/publication/170302_Ellman_ThirdOffsetStrategySummary_Web.pdf?EXO1GwjFU22_Bkd5A.nx.fJXTKRDKbVR.
Freedburg, Sydney J. "US Must Hustle On Hypersonics, EW, and AI: VCJCS Selva & Work". Breaking Defense. June 21, 2018. https://breakingdefense.com/2018/06/us-must-hustle-on-hypersonics-ew-ai-vcjcs-selva-work/.
Garcia, Ecatarina. "The Artificial Intelligence Race: U.S., China and Russia". Modern Diplomacy. April 19, 2018. https://www.academia.edu/36451550/THE_ARTIFICIAL_INTELLIGENCE_RACE_US_China_Russia_by_Ecatarina_Garcia_Modern_Diplomacy_April_19_2018.
Gronlund, Kristen. "State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons". Future of Life Institute. May 9, 2019. https://futureoflife.org/2019/05/09/state-of-ai/?cn-reloaded=1.
Harris, Jessica. "Weapons of the Weak: Russia and AI Driven Asymmetric Warfare". Brookings. (2018). https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/.
Kerns, Jeff. "What's the Difference Between Weak and Strong AI?". Machine Design. February 2017. https://www.machinedesign.com/markets/robotics/article/21835139/whats-the-difference-between-weakand-strong-ai.
Knapp, Brandon. "Here's Where the Pentagon Wants to Invest in Artificial Intelligence in 2019". C4ISRNET. February 16, 2018. https://www.c4isrnet.com/intel-geoint/2018/02/16/heres-where-the-pentagon-wants-to-invest-in-artificial-intelligence-in-2019/.
Kozyulin, Vadim B. "Militarization of AI from a Russian Perspective". ResearchGate. July 2019. https://www.researchgate.net/publication/335422076_Militarization_of_AI_from_a_Russian_Perspective.
Kozyulin, Vadim B. "Russia's Automated and Autonomous Weapons and Their Consideration from a Policy Standpoint". ResearchGate. (2016). https://www.researchgate.net/publication/309732151_Russia's_automated_and_autonomous_weapons_and_their_consideration_from_a_policy_standpoint.
Metz, Cade. "In Two Moves, AlphaGo and Lee Sedol Redefined the Future". Wired. March 3, 2016. https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/.
Pandya, Jayshree. "The Weaponization of Artificial Intelligence". Forbes. January 14, 2019. https://www.forbes.com/sites/cognitiveworld/2019/01/14/the-weaponization-of-artificial-intelligence/?sh=1a1c1e8b3686.
Rajna, George. "Weak AI, Strong AI and Superintelligence". viXra. (2018). https://vixra.org/pdf/1706.0468v1.pdf.
Sayler, Kelley M. "Artificial Intelligence and National Security". Congressional Research Service. (2019). https://fas.org/sgp/crs/natsec/R45178.pdf.
Statt, Nick. "Google Reportedly Leaving Project Maven Military AI Program After 2019". The Verge. June 1, 2018. https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire.
Tegmark, Max. "Benefits & Risks of Artificial Intelligence". Future of Life Institute. June 2016. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
Telley, Chris. "Info Ops Officer Offers Artificial Intelligence Roadmap". Breaking Defense. July 11, 2017. https://breakingdefense.com/2017/07/info-ops-officer-offers-artificial-intelligence-roadmap/.
Zhang, Sarah. "China's Artificial-Intelligence Boom". The Atlantic. February 16, 2017. https://www.theatlantic.com/technology/archive/2017/02/china-artificial-intelligence/516615/.
CHAPTER 11
Artificial Intelligence and International Security

Nitin Agarwala and Rana Divyank Chaudhary
National Maritime Foundation, New Delhi, India
Introduction

Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines (IMs) that can work and react like human beings. This essentially requires the IMs to be endowed with knowledge, reasoning, problem-solving, perception, learning, planning, and the ability to manipulate and move objects, so as to develop features such as speech recognition, learning, planning, and problem-solving. Such learning eventually allows the machine to learn from experience, adjust to new inputs, and perform human-like tasks. Today's AI relies heavily on deep learning and natural language processing to accomplish specific tasks, processing large amounts of data and recognizing patterns in that data in order to carry out its tasks (a minimal sketch of this workflow follows at the end of this introduction). Though the term AI was coined in 1956, it gained importance many years later, once increased data volumes, advanced algorithms, and improved computing power and storage became possible. While the initial research on AI in the 1950s was limited to problem-solving, the efforts of the Defense Advanced Research Projects
Agency (DARPA) to mimic basic human reasoning helped develop intelligent personal assistants in 2003. This work paved the way for automation and formal reasoning, including decision-support systems and smart search systems that can complement and augment human abilities. By contrast, the image of AI created by movies and science fiction, that of human-like destructive robots, has left a negative and lasting impression on the human mind. In actuality, the current technology is far from destructive and is aimed more toward benefitting industry. One of the many areas where AI has found important usage is security, in a broad sense, both for industry and for government. These areas include defense or military security; human security (intelligence, homeland security, and economic and financial security); job security; health security; and cyber-security (information security and the IoT). These innumerable application areas, and humanity's inability to exercise machine-like complete control, force those in the security community to think about the possible vulnerabilities and security gaps that arise from this evolving technology. The major fears revolve around AI exceeding human intelligence and human control, AI possibly replacing humans in every area of society, and governments monitoring, understanding, and controlling citizens. The key challenge is to understand the influence AI has on the various facets of security. To provide clarity, this chapter aims to shed light on the current trends and applications of AI, in both industry and government, and to understand how AI may aggravate the security dilemma for the world and what can be done about it to ensure international peace and order.
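As a concrete, if deliberately tiny, illustration of the pattern-recognition workflow described above (the sketch promised in the opening paragraph), the following hedged example uses scikit-learn to learn word patterns from a handful of made-up labeled sentences. A real system would train on vastly larger corpora; every string and label here is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, made-up labeled corpus; real systems learn from millions
# of documents rather than four sentences.
texts = [
    "routine port call scheduled for resupply",
    "fleet conducts joint exercise in open waters",
    "unidentified vessel shadowing the convoy at night",
    "suspicious small craft approaching at high speed",
]
labels = ["benign", "benign", "threat", "threat"]

# TF-IDF turns text into numeric features; the classifier then learns
# which word patterns separate the two classes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["small craft closing fast on the harbor"]))
```

The point is not the toy labels but the pipeline: raw language becomes numeric features, and a statistical model extracts the regularities, which is what "recognizing patterns in the data" means in practice.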
Importance and Usage of Artificial Intelligence

The human brain, though very powerful, is limited in usage by the training it has been given. No two human brains think alike or evaluate and perceive a problem the same way, and each has its own limitations, which tend to increase with age and fatigue. By contrast, AI is intended to replicate the human brain with the same learning, and with the ability to have multiple machines (brains) thinking the same way, unaffected by age or fatigue, while doing things faster and with more accuracy and control. Some of the reasons that make AI important are its ability to:
a. Automate repetitive learning.
b. Provide additional intelligence to existing systems, such as voice recognition systems in phones.
c. Adapt through progressive learning, using techniques such as back-propagation.
d. Analyze more, and learn more, through deep learning models.
e. Achieve greater accuracy through deep neural networks.
f. Achieve better results from the available data.

This provides three broad application roles in which AI can support humans: analytical, predictive, and operational. Accordingly, the areas of security application where AI has been used, both by industry and by government, include:

Military Security

Military security is one of the instruments available in international politics that permits a nation-state to engage in conflict prevention, crisis management, and peace-building activities, so as to promote and enhance regional security by joint engagement in arms control, border management, combating terrorism, policing, and conflict and military reform (Tiersky n.d.). Like many facets of human life, military security is one of the areas in which AI will have an outsized impact. Though the possible uses of AI in the military are many, some of the ways in which AI is being used, and will affect the working of conventional military security, include:

a. Tele-operated ground robots that are controlled by humans from a distance.
b. Autonomous weapons such as Unmanned Aerial Vehicles (UAV), Unmanned Underwater Vehicles (UUV), and Autonomous Aerial Vehicles (AAV) that would change the ways in which surveillance and payloads may be delivered.
c. Shoals made up of autonomous underwater robots, sensitive to tiny distortions in the earth's magnetic field, that can complicate efforts to conceal submarines.
d. Swarms of unmanned submarines that can change the present ethos of naval warfare.
e. Use of AI in logistics, intelligence and surveillance, and even weapons design, transforming how the business of these activities is undertaken.
f. Strategic-level AI operating as an 'oracle' for decision-makers, able to test accepted wisdom, discard spurious associations, reject pet theories, and identify key vulnerabilities in enemies.
g. Several types of autonomous helicopter controlled by a soldier using a smartphone, under development in the US, in Europe, and in China.
h. Autonomous Ground Vehicles (AGV) and Autonomous Underwater Vehicles (AUV), under development worldwide.
i. Target recognition systems using machine learning techniques to automatically locate and identify targets with the help of Synthetic-Aperture Radar (SAR) images.
j. Combat simulation and training to acquaint soldiers with the various combat systems deployed during military operations.
k. Threat monitoring and situational awareness to acquire and process information in support of a range of military activities.

While the use of AI in UAVs has increased in recent years, letting a UAV or any other autonomous system make the decision on weapons release is still several years away. Most of the above-mentioned technologies are under development and struggling to make the leap from development to operational implementation.

Human Security

Various facets of human security are being addressed by AI. The list is so long that discussing each one here would be impossible; however, some that are considered essential and are changing society are:

a. Use of human biometric identification data for social security, humanitarian aid, and physical verification, to name a few, which is changing the way humans are managed and addressed in society. Though such usage reduces fraud, it can also be used for political oppression.
b. Facial recognition technology.
c. Human security solutions using AI, developed with data analysis, machine learning, and game-theory algorithms to prevent crimes by providing descriptive, diagnostic, predictive, and prescriptive analytics, as seen in the use of CompStat,1 Armorway,2 and DARMS.3 Such applications address both insider threats and physical campus security threats for an organization (see, for example, Forrest 2016).
d. Human security in the face of global warming, growing connectivity through social media, and changes in labor and production due to advancing technologies, which have the potential to create systemic challenges such as war and social, economic, or political disruption. Predicting such dangers so as to respond effectively has become possible using AI.
e. Though the use of AI has reduced credit card fraud, such use often leaves unanswered questions about rules, regulations, and moral judgment.

Job Security

With AI developing, it is assumed that machines will replace labor for many jobs in both developing and developed countries. A McKinsey Global Institute report (MGI 2017) suggests that by 2030, intelligent agents and robots could eliminate as much as 30 percent of the world's human labor, amounting to nearly 400 to 800 million jobs and requiring nearly 375 million people to switch jobs. Similarly, a Brookings Institution report (West 2018) suggests that such automation could lead some Western democracies to resort to authoritarian policies to stave off civil chaos, much as they did during the Great Depression. According to Brookings, the US would then look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft.

Health Security

Personalized and on-the-move health care is possible through AI. At an individual level, personalized point-of-care (POC) diagnostics platforms, which enable mobile healthcare delivery and personalized medicine by providing information for bio-analytical science, including digital microscopy, cytometry, immunoassay tests, colorimetric detection, and healthcare monitoring, would be possible using a mobile phone.
At a societal level, with increased social media connectivity and AI, preventive measures against events such as foodborne illness and epidemics can be achieved by leveraging big data. This is allowing the identification of patterns to diagnose diseases such as cancer and to provide preventive health care.

Cybersecurity

Today, the computers, networks, and cyberspace of government, industries, organizations, and academia (NGIOA) are integrally connected. Since everything is eventually connected to cyberspace, everything, including Geospace and space, is now controlled through cyberspace. The areas in which AI plays an important role in ensuring cybersecurity include:

a. Handling huge volumes of security data (see the sketch after this list).
b. Speeding up detection of genuine problems, rapidly cross-referencing different alerts and sources of security data.
c. Assisting humans to make proper judgments and decisions when swamped with threats and incidents.
d. Tracing cyber-criminals hiding within systems who can only be tracked using AI, eventually allowing cyber-systems to be secured.
e. Keeping pace with hackers, who themselves use the latest AI tools to launch attacks, so that the cycle of AI development continues.
f. Helping refine a hypothesis through iteration.
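Items (a) and (b) of the list above typically rest on unsupervised anomaly detection: a model learns what "normal" telemetry looks like and flags sessions that deviate, letting analysts triage huge volumes of data quickly. A minimal hedged sketch using scikit-learn's IsolationForest on synthetic stand-in data; every feature and number here is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for network telemetry: rows are sessions, columns
# are (bytes transferred, login attempts). A real system would extract
# many more features from logs at far larger scale.
normal = rng.normal(loc=[500, 2], scale=[100, 1], size=(1000, 2))
attacks = np.array([[5000, 40], [4500, 35], [6000, 50]])  # exfiltration-like
sessions = np.vstack([normal, attacks])

# Isolation forests score how easy each point is to isolate; rare,
# extreme sessions are isolated quickly and flagged as outliers (-1).
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(sessions)

print("flagged sessions:", np.where(flags == -1)[0])
```

The design choice matters for the chapter's later coexistence argument: the model only surfaces candidates; deciding what counts as "good or bad" data and which flags merit action remains a human judgment.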
The Dilemma

Though we have seen that the use of AI is to the benefit of humanity and, in a larger context, to its advantage, a dilemma persists over whether the methodology of AI currently in use is right, and whether AI is good for humanity at all. This dilemma breeds uncertainty in the minds of developers and policymakers, such that policymaking is at times put on the back burner for want of clarity, eventually creating a security risk (Agarwala 2021). Some such dilemmas are:
The Military Security Dilemma

New technologies introduce uncertainty about the strength of the opponent. Each technological advancement brings with it uncertainty about the way the technology may be used and about its power, as seen in the Second World War, when Germany set the stage for the use of new technologies such as radar, mechanized artillery, and aircraft to its advantage. Such a dilemma forces countries into a race for technological supremacy, and AI is one such technological development whose exact use in war remains a dilemma: no one knows how it will be used or how successful it may be on the battlefield. At the tactical level, AI is currently being built into a wide variety of weapon systems and core infrastructure rather than constituting a single weapon system itself. This has allowed tanks, artillery, aircraft, and submarines to detect targets on their own and respond accordingly. Such is the capability of an AI system that it can outperform an experienced military pilot in simulated air-to-air combat. Yet it is unclear how these advancements will change the very nature of conflict. Though speculation is rampant, uncertainties remain about the change and advantage AI will provide in a battle scenario as it is integrated into existing weaponry and command-and-control centers. As the core components of AI, namely algorithms, data, and computing power, improve exponentially, it is difficult to forecast the future of AI. What remains is to ask questions and speculate as to how rival powers might use AI in innovative and unexpected ways, and to try to create alternatives for such use, resulting in a never-ending power race. On the development front, the areas that show promise are transfer or one-shot learning4 (Bhagyashree 2018) on the algorithmic side, and neuromorphic processors5 (Simonite 2015; Snow 2018) and quantum computing6 on the hardware side. Such is the interest in, and competition for, these new technologies that Google, IBM, Intel, and Microsoft have expanded their working teams. China and the EU have both launched new programs with funding in the billions of dollars, while the US has created a new committee to coordinate government work on quantum information science, with funding upward of US $1.3 billion. The race is on, and the winner is likely to gain large economic and national-security advantages (Simonite 2018).
The Human Security Dilemma

With social behavior analytics, predictive analytics, and competitive analytics forming the basis for determining potential insider threats, the risk of 'privacy invasion' exists. Though such predictive solutions help secure an organization, they rely on human behavioral data, which results in forced marketing and selective pricing and permits the collection of data for not-yet-invented applications and not-yet-discovered algorithms. This creates a dilemma over whether such data should be captured, stored, and used by companies, and whether such capture and usage can be considered ethical. The issue of data mining and privacy is particularly complex, essentially because machine-learning and data-mining technologies are oblivious to the consequences of exploiting or trespassing on personal privacy. The dilemma in appreciating the effects of global warming, social media, and changes in labor and production emerges from indecision over which of the collected data should be shared, since such data can be misused for political gain and the oppression of ordinary people.

Job Loss Dilemma

Looking back in history, fears and concerns regarding the loss of jobs to AI and automation are understandable but ultimately unwarranted, as such technological changes may eliminate specific jobs but create more in the process. They would eventually allow people to pursue careers that give a greater sense of meaning and well-being, though these may require higher specialization and critical thinking. Further, pursuits such as leadership, art, music, machine maintenance and improvement, and data analysis would continue to exist, with high demand for skilled professionals who can navigate companies through these transformations. Machines are by and large better than humans at physical tasks in terms of speed, agility, precision, and the ability to lift greater loads. If these machines attain intelligence, they are certain to replace repetitive human jobs, and that is inevitable. The best one can do is brace for impact.

The Health Care Dilemma

Human mapping and disease prediction using AI may not be desirable when the pattern it finds is controversial, racist, sexist, or extremist.
Such patterns may well exist because of the available data and the existing inequalities or systemic biases in a society. By classifying people, groups, or behaviors into categories, AI could merely be making visible the tyranny of the majority. Such mapping and inference may lead to greater racial discrimination, hence the dilemma over whether such inference should be drawn at all. A recent case in point is an image-classification algorithm on Google that classified images of African-American individuals as gorillas, for which Google later apologized (BBC 2015).

Cybersecurity Dilemma

Though the importance of AI to cybersecurity cannot be challenged, humans fear that, owing to its speed, accuracy, and ability to handle multiple analyses simultaneously with ease, AI may eventually take over and replace them, seamlessly and automatically, without their knowledge. In the current state of AI, however, AI and humans can coexist in cybersecurity. It is humans alone who can formulate the hypotheses that AI then refines. Similarly, it is humans who set the objectives for security and who classify data as good or bad before handing it to AI, as sketched below. Additionally, humans play an important role in judging the analyses and recommendations that machines provide, and ultimately make the call as to which course of action is best. In summary, AI can provide power and speed, but humans provide skills, insights, and judgments that AI cannot replicate, at least for now.
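A minimal sketch of this division of labor follows, assuming scikit-learn's LogisticRegression as the learner; the features, labels, and events are illustrative assumptions. Humans supply the hypothesis (which signals matter) and the good/bad labels; the machine supplies the speed of scoring new events against that human framing.

```python
# Minimal human-in-the-loop sketch: a classifier trained on human-labeled
# security events ranks fresh events for analyst review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Human-labeled history: features = [failed logins, off-hours flag]; 1 = malicious
X = np.array([[0, 0], [1, 0], [2, 1], [30, 1], [45, 0], [60, 1]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# The machine scores new events; the analyst still judges the recommendation
new_events = np.array([[3, 0], [50, 1]])
for event, p in zip(new_events, model.predict_proba(new_events)[:, 1]):
    print(f"event {event}: estimated probability malicious = {p:.2f}")
```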
The Security Dilemma

'Security dilemma' is a term used in international relations to refer to a situation in which, under anarchy, actions taken by a state to heighten its security through increased military strength lead to similar responses by other states, resulting in increased tension that can create conflict even when no side really desires it. As seen, AI is used for both beneficial and harmful purposes, and this dual nature brings enormous security risks not only to individuals and entities across nations but also to the future of humanity. As a result, concern about an AI-based security dilemma is growing. The security dilemmas associated with AI, in the broader sense of security discussed in the preceding paragraphs, are as follows:
Military Security

When an adversary's available AI capacities and strengths are not known, it is natural that a security dilemma is created. This is compounded by countries' demonstrations of their technological achievements, which create greater uncertainty and mistrust (Meserole 2018).

Human Security

With the mapping of individuals through social media, demographics, biometrics, and browsing interests, to name a few, the eventual outcomes of elections, humanitarian aid, and the like have been found to have been manipulated. Such manipulation may create unrest within a country and between nation-states, resulting in a security dilemma. Similarly, events like global warming and technology-driven changes in production have a direct bearing on the economic well-being of a nation. A prolonged downward trend in a nation due to these changes would eventually create a security dilemma.

Job Loss Security

As discussed earlier, AI is likely to create job losses in the millions. For developed nations, the impact may amount to job changes, as adequate skill sets are available in these countries. For developing nations, however, it would create joblessness leading to anarchy, with war, violence, and theft as the only available employment, and hence a security dilemma for those nations. A case in point is that of the Somali pirates, who took to piracy after losing other avenues of employment, forcing nations to unite and take preventive action.

Health Care Security

As human mapping and disease prediction by AI have the capability to make visible the tyranny of the majority, they create a greater divide between majority and minority groups and can result in a security dilemma, as the majority would prefer to suppress the minority while the minority would prefer to break free.
Cybersecurity

As the NGIOA of various countries become hooked into the cyber-world for their daily activities, cybersecurity becomes critical. A cyber-attack has the potential to cripple a nation's economy. Such is the dependence that a disruption of the cyber-world in 2014, caused by damage to the Asia-Pacific Cable Network-2 (APCN2), cost the economy millions (Victor 2014). Such attacks on the cyber-network may be physical or cyber-based, and may be undertaken by individuals or be state-sponsored. While attacks by individuals can be handled more easily, state-sponsored attacks causing economic disruption to a nation may create a security dilemma. Cybersecurity essentially concerns a threat to the economy of a nation and hence can be considered a critical element in the creation of a security dilemma.
Way Ahead

To address the security dilemma created by AI across these facets of security, there is a need to begin at the very root by overhauling the entire education system and re-skilling people. One needs to realize that the present education system is geared to the Industrial Revolution, an age now two generations old. A need thus exists to make the education system Industry 4.0-compliant, which would allow a greater understanding of AI and hence greater faith among societies and people, thereby reducing the creation of a security dilemma.

Another area of concern is the policy framework. As the areas of usage for AI grow, the possible weapons of mass destruction and the areas of attack increase. The need of the hour is to develop an AI framework. This requires knowing what data is used, the guiding assumptions, and the practices employed by developers; a minimal sketch of such a record closes this section. However, care needs to be taken when asking for transparency, accountability, equity, and universality, as such efforts will affect future development (Agarwala 2021).

There is also a need to create accountability and responsibility when democratizing the availability and usage of digital data. While the democratization of big data brings universal accessibility and empowers individuals and entities, it brings to the fore more critical security risks, as discussed in the preceding paragraphs. Anyone, with or without formal training, can
accidentally or even purposefully cause chaos, catastrophe, and existential risks to a community, ethnicity, race, religion, nation, or humanity. It thus becomes imperative to create accountability and responsibility when making digital data available. The current generation is one of growth and development of Industry 4.0 aided by AI, and it cannot be stopped. Auditing, mapping, governing, and preventing may be considered steps to avoid a security dilemma due to AI. However, so long as AI remains a 'dual-use' technology, such control, even where possible, would inhibit all-out growth and development while encouraging remote and covert development, which would make the governance of AI even more difficult and may fuel a greater security dilemma that the world would be better off without.
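As a minimal illustration of the transparency record the framework above calls for, the sketch below captures what data a system was trained on, its guiding assumptions, and who is accountable. The fields follow the spirit of published 'model card' proposals; the field names and example values are illustrative assumptions, not a standard.

```python
# Minimal sketch of an auditable provenance record for an AI system.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AISystemRecord:
    system_name: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    guiding_assumptions: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    accountable_party: str = ""


record = AISystemRecord(
    system_name="threat-triage-v1",                 # hypothetical system
    intended_use="ranking security alerts for human review",
    training_data_sources=["internal firewall logs, 2019-2020"],
    guiding_assumptions=["past alert labels generalize to new traffic"],
    known_limitations=["untested on encrypted traffic"],
    accountable_party="security engineering team",
)

print(json.dumps(asdict(record), indent=2))  # publishable for audit and review
```

Such a record, published alongside a system, is the kind of artifact that makes auditing, mapping, and governing practicable.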
Conclusion

With the evolution of AI, the definition and meaning of security are being fundamentally challenged: security is no longer only about violence toward nations in Geospace, from within or across their geographical boundaries, but much more than that, and hence needs to be re-evaluated and updated. As AI emerges with its unknowns, fear, uncertainty, competition, and an arms race are leading us toward a new battlefield that has no boundaries or borders, that may or may not involve humans, and that will be impossible to understand and perhaps to control. This chapter has discussed how the challenges and complexities created by the threats to and security of AI have crossed the barriers of space, ideology, and politics, demanding a constructive, collaborative effort of all stakeholders across nations (Jayashree 2019). Though some ways ahead have been recommended, they are not exhaustive and require continuous updating and brainstorming, given the ever-growing unknowns AI throws at us from time to time.
Notes

1. CompStat (Computer Statistics) is a combination of management philosophy and organizational management tools for the police. As an early form of predictive AI, it is used by many police departments, both in the United States and abroad. It offers dynamic predictive policing for crime reduction, quality-of-life improvement, and personnel and resource management. Spikes in crime can be identified using comparative statistics and addressed through targeted enforcement.
2. Now called Avanta Intelligence. It focuses on analyzing data to provide situational awareness and make strategic recommendations. For example, RSS (Really Simple Syndication) feeds are filtered through a machine-learning process to present the organization with material relevant to its objectives.
3. The Dynamic Aviation Risk Management Solution (DARMS) aims to check passengers as they walk through, eliminating the need for inefficient security lines.
4. Transfer learning allows us to deal with new scenarios by leveraging already existing labeled data from a related task or domain, avoiding the need to relearn the same activity under slightly different requirements.
5. Neuromorphic processors are modeled on biological brains: designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed.
6. Quantum computing provides exponential speedups for problems central to machine learning, such as clustering, pattern matching, and principal component analysis, which are critical to big data analysis, blockchain, and IoT. For AI, it can be used across five phases, namely Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), Artificial Consciousness, Artificial Super-Intelligence (ASI), and Compassionate Artificial Super-Intelligence (CAS).
References

Agarwala, N. 2021. "Role of Policy Framework for Disruptive Technologies in the Maritime Domain". Australian Journal of Maritime & Ocean Affairs. Accessed May 19, 2021. https://doi.org/10.1080/18366503.2021.1904602.
BBC. 2015. "Google Apologises for Photos App's Racist Blunder". BBC News. July 1, 2015. Accessed July 03, 2020. http://www.bbc.com/news/technology-33347866.
Bhagyashree, R. 2018. "5 Types of Deep Transfer Learning". Packt. November 25, 2018. Accessed July 03, 2020. https://hub.packtpub.com/5-types-of-deep-transfer-learning/.
Forrest, Conner. 2016. "Can AI Predict Potential Security Breaches? Armorway Is Betting on It". TechRepublic. June 7, 2016. Accessed July 03, 2020. https://www.techrepublic.com/article/armorway-grabs-2-5million-to-expand-ai-security-platform/.
Jayashree, Pandya. 2019. "The Dual-Use Dilemma of Artificial Intelligence". Forbes. January 7, 2019. Accessed July 03, 2020. https://www.forbes.com/sites/cognitiveworld/2019/01/07/the-dual-use-dilemma-of-artificial-intelligence/#53e63bec6cf0.
Meserole, Chris. 2018. "Artificial Intelligence and the Security Dilemma". Brookings. November 6, 2018. Accessed July 03, 2020. https://www.brookings.edu/blog/order-from-chaos/2018/11/06/artificial-intelligence-and-the-security-dilemma/.
MGI. 2017. "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation". McKinsey Global Institute. Accessed July 03, 2020. https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages.
Simonite, Tom. 2015. "Teaching Machines to Understand Us". MIT Technology Review, Vol. 118, No. 5. Accessed July 03, 2020. https://web.mit.edu/6.033/www/papers/deep_learning_language.pdf.
Simonite, Tom. 2018. "The Wired Guide to Quantum Computing". Wired. August 24, 2018. Accessed July 03, 2020. https://www.wired.com/story/wired-guide-to-quantum-computing/.
Snow, Jackie. 2018. "An Artificial Synapse Could Make Brain-on-a-Chip Hardware a Reality". MIT Technology Review. January 22, 2018. Accessed July 03, 2020. https://www.technologyreview.com/2018/01/22/241373/an-artificial-synapse-could-make-brain-on-a-chip-hardware-a-reality/.
Tiersky, Alex. n.d. "Military Aspects of Security". Commission on Security and Cooperation in Europe (CSCE). Accessed July 03, 2020. https://www.csce.gov/issue/military-aspects-security.
Victor Jr., Barreiro. 2014. "Damaged Undersea Cables Affect Internet in the PH, Asia-Pacific". Rappler.com. March 31, 2014. Accessed July 03, 2020. https://www.rappler.com/technology/news/54323-damaged-apcn2-internet-connectivity.
West, Darrell M. 2018. "Will Robots and AI Take Your Job? The Economic and Political Consequences of Automation". Brookings. April 18, 2018. Accessed July 03, 2020. https://www.brookings.edu/blog/techtank/2018/04/18/will-robots-and-ai-take-your-job-the-economic-and-political-consequences-of-automation/.
Index
A
AI Arms race, 12, 13, 40, 148
AI priorities, 12, 195–197, 200, 203, 208, 210
AI Strategic Policy, 195, 197, 207, 210, 213
Alan Turing, v
Arms race, 12, 13, 40, 148, 228, 232, 233, 252
Artificial General Intelligence (AGI), 159, 253
Artificial Intelligence (AI), 4, 7, 8, 19, 39–41, 52, 56, 63–71, 73–78, 85, 92, 99, 102, 113, 130, 147, 148, 158, 163, 170, 177, 183, 195, 202–205, 219, 220, 223, 227, 229, 231, 232, 241
Artificial Neural Network, 40
Augmented, 45
Automated, 55, 66, 74, 78, 118, 125, 157, 160, 185
Automation, 55, 64, 67, 75, 99, 101, 120, 206, 242, 245, 248
B
Big data, 86, 246, 251, 253
Bots, 179, 181, 185
C
Centaur relationships, 39, 41, 42, 44–46, 52, 55
China, 4, 10, 12, 49, 51, 54, 64, 65, 70–72, 74, 76, 78, 86, 90–93, 97, 101, 102, 104, 196, 198, 199, 201, 204, 211–213, 221, 222, 225, 226, 229, 230, 234, 235, 244, 247
Cloud, 47, 74, 86, 90, 91, 93–95, 97–99, 101–105, 122, 224
Cloud computing, 11, 118, 224
Conflict prevention, 148, 152, 169, 170, 243
Cultural inertia, 187
Culture, 4, 21, 23, 24, 47, 51, 63, 78, 115, 128, 178
D
Data collection, 55, 75, 78, 113, 118, 123, 126, 132–134, 166
Data surveillance, 121
Deep learning, 40, 68, 86, 186, 241, 243
Democracy, 48, 65, 96
Democratized Artificial Intelligence, 179
Disinformation, 149, 163–166
Drones, 41, 181, 183, 186, 187, 226, 234
Dystopia, 50, 63
E
Education/Education Policy, 10, 21, 26, 44, 64–69, 71–79, 101, 132, 200, 213, 251
Employment, 18, 21, 55, 99, 100, 245, 250
F
Facial recognition, 49, 63, 68, 76, 78, 92, 94, 95, 118–120, 124, 127, 244
Feminism, 20, 21, 24, 26, 30, 31
Feminization, 19, 28, 30–32
Foreign policy, 6, 90, 91
G
General-Purpose Technology (GPT), 3, 4, 7, 9, 13
Global South, 4, 85, 87, 88, 90
Government budget, 197, 198, 201, 203, 205, 206, 212
H
Human creativity, 52, 53
Human intelligence, 178, 183, 196, 233, 242
Human-like robots, 241, 242
I
India, 4, 10, 12, 28, 64, 65, 73, 74, 76, 77, 85, 154, 221, 229–231, 233
Information security, 203, 242
Intelligence augmentation (IA), 10, 39, 43, 44
International security, 13, 154, 179, 187
Internet bots, 179, 181, 185
Internet of Things (IoT), 39, 45, 97, 118
L
Latin America, 10, 11, 86, 87, 90–92, 94–104
Lethal autonomous weapons (LAWS), 63, 65, 221, 224, 227, 228, 233
M
Machine learning, 19, 27, 40, 41, 64, 67, 69, 94, 117, 122, 123, 127, 159, 185, 219, 220, 226, 244, 245, 253
Memes, 148, 149, 152–155, 160, 162–165
Militarization of AI, 12, 92, 221, 224, 229, 232, 234
Military, 7, 8, 12, 19, 39–42, 46, 47, 64–66, 85, 92, 102, 103, 206, 207, 211, 212, 220–223, 225, 227–230, 233, 234, 243, 244, 247
Multinational corporations (MNCs), 85–91, 93, 98, 99, 101–104
N
Narrow Artificial Intelligence, 166, 253
National Artificial Intelligence Strategic Policy, 195, 207, 210, 213
Natural language processing, 65, 69, 74, 76, 77, 241
Network, 11, 25, 40, 41, 43, 45–48, 52, 56, 72, 86, 95, 97, 98, 101, 102, 118, 121, 122, 134, 157, 158, 182, 183, 203, 220, 223, 228, 229, 232, 243, 246, 251
Networked machines, 46, 48
P
Pakistan, 230, 233
Peace, 11, 147, 149, 167, 170, 242
Political violence, 11, 148–150, 152, 164, 179
Practical judgment, 10, 40, 52–56
Prediction, 8, 41, 42, 49, 50, 72, 156, 248, 250
Privacy, 64, 66, 68, 71–73, 78, 98, 119, 127, 128, 133, 248
Public education, 10, 64, 65, 67, 71, 73–79
Public safety, 114, 118, 120, 122, 124, 125, 132, 134
R
Risk Management, 253
Risk(s), 10, 12, 25, 39, 64, 67, 76, 77, 102, 114, 116, 122–132, 134, 149–151, 163, 230, 235, 246, 248, 249, 251, 253
Robots, 4, 19, 26–30, 40, 41, 221, 224, 227, 230, 243, 245
Russia, 4, 7, 12, 85, 89, 220–223, 227–229, 234
S
Security, 6, 9, 11, 44, 49, 67–69, 73, 76, 92, 98, 114, 115, 118–120, 123, 125, 127, 129, 132, 151, 154, 178, 179, 182, 187, 200, 206, 223, 228, 233, 234, 242–253
Security dilemma, 242, 247–252
Self-driving cars, 179, 181, 184
Singularity, 178
Social production, 4, 5, 7, 9, 13
Society, 5–7, 9, 11, 12, 22–25, 30, 64, 71, 74, 79, 86–89, 93, 104, 114–116, 121, 125, 127–129, 131, 134, 163, 166, 178, 187, 195, 196, 198, 204, 205, 211–213, 242, 244, 249
Strategic policy/(ies), 195, 197, 207, 213
Strong AI, 220
Surveillance, 11, 12, 49, 50, 67, 71, 75, 76, 78, 92, 94, 113–135, 167, 220, 222, 230–232, 243, 244
T
Terrorism, 119, 121, 123, 179, 181, 183, 184, 223, 232, 243
Threat analysis, 179, 186, 187
3D printing, 179, 181, 186
Transcultural Femininization, 21
Turing Test, v
U
Uneven development, 11, 87–89, 99, 103, 105
United States, 5, 20, 64–67, 71, 75, 78, 185, 196, 205, 212, 213, 252
W
War, 6, 7, 12, 13, 21, 41, 63, 87, 90, 150
Weak AI, 220, 233