Technology and International Relations: The New Frontier in Global Power
Edited by
Giampiero Giacomello, Associate Professor of Political Science, Department of Political and Social Sciences, University of Bologna, Italy
Francesco Niccolò Moro, Associate Professor of Political Science, Department of Political and Social Sciences, University of Bologna, Italy
Marco Valigi, Senior Research Fellow in Defence Studies, Department of Political and Social Sciences, University of Bologna, Italy
Cheltenham, UK • Northampton, MA, USA
© Giampiero Giacomello, Francesco Niccolò Moro and Marco Valigi 2021
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise without the prior permission of the publisher.

Published by
Edward Elgar Publishing Limited
The Lypiatts
15 Lansdown Road
Cheltenham
Glos GL50 2JA
UK

Edward Elgar Publishing, Inc.
William Pratt House
9 Dewey Court
Northampton
Massachusetts 01060
USA

A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2021932265

This book is available electronically in the Political Science and Public Policy subject collection
http://dx.doi.org/10.4337/9781788976077
ISBN 978 1 78897 606 0 (cased)
ISBN 978 1 78897 607 7 (eBook)
Contents

List of contributors  vii
Introduction: Technology and International Relations – The New Frontier in Global Power  viii
Giampiero Giacomello, Francesco Niccolò Moro and Marco Valigi

PART I  TECHNOLOGY AND INTERNATIONAL RELATIONS: POLITICAL, ECONOMIC AND ETHICAL ASPECTS

1  Theorizing technology and international relations: prevailing perspectives and new horizons  3
   Johan Eriksson and Lindy M. Newlove-Eriksson
2  Mapping technological innovation  23
   Francesco Niccolò Moro and Marco Valigi
3  Autonomy in weapons systems and its meaningful human control: a differentiated and prudential approach  45
   Daniele Amoroso and Guglielmo Tamburrini

PART II  ROBOTICS AND ARTIFICIAL INTELLIGENCE: FRONTIERS AND CHALLENGES

4  Context matters: the transformative nature of drones on the battlefield  68
   Sarah Kreps and Sarah Maxey
5  Artificial intelligence: a paradigm shift in international law and politics? Autonomous weapon systems as a case study  89
   Luigi Martino and Federica Merenda

PART III  SPACE AND CYBERSPACE: INTERSECTION OF TWO SECURITY DOMAINS

6  The use of space and satellites: problems and challenges  109
   Luciano Anselmo
7  Cyber attacks and defenses: current capabilities and future trends  132
   Michele Colajanni and Mirco Marchetti
8  Critical infrastructure protection  152
   Andrea Locatelli
9  A perfect storm: privatization, public–private partnership and the security of critical infrastructure  173
   Giampiero Giacomello

Index  193
Contributors

Daniele Amoroso, University of Cagliari, Italy
Luciano Anselmo, ISTI Institute, Italian National Research Council, Italy
Michele Colajanni, University of Modena and Reggio Emilia, Italy
Johan Eriksson, Södertörn University, Sweden
Giampiero Giacomello, Alma Mater Studiorum – University of Bologna, Italy
Sarah Kreps, Cornell University, USA
Andrea Locatelli, Catholic University of Milan, Italy
Mirco Marchetti, University of Modena and Reggio Emilia, Italy
Luigi Martino, University of Florence, Italy
Sarah Maxey, Loyola University Chicago, USA
Federica Merenda, Scuola Superiore Sant’Anna, Italy
Francesco N. Moro, Alma Mater Studiorum – University of Bologna, Italy
Lindy M. Newlove-Eriksson, Swedish Defense University, Sweden
Guglielmo Tamburrini, University of Naples Federico II, Italy
Marco Valigi, Alma Mater Studiorum – University of Bologna, Italy
Introduction: Technology and International Relations – The New Frontier in Global Power

Giampiero Giacomello, Francesco Niccolò Moro and Marco Valigi

TECHNOLOGY, HUMAN NATURE AND INTERNATIONAL POLITICS

Thucydides, who is considered the ‘father’ of modern international relations (IR) theory, was able to leave ‘a possession for all time’ because he centered his magnum opus, The Peloponnesian War, on the only ‘immutable’ element of human history – namely, human nature. Centuries later, this led another giant thinker, General Carl von Clausewitz, to conclude likewise in his magnum opus, Vom Kriege, that while all conflicts are different, the nature of war is immutable; it never changes, because war is a human endeavor. If human nature remains the same, how can the nature of war change? It is just the tools that differ. Technology is one such tool. Or, rather, it is the formulas, ingenuity, and organization of resources that enable the production and improvement of such tools. Technology, therefore, is a multiplier.

Technology plays a central role in human life. The economy, medicine, and warfare are all deeply shaped by technological change. If human nature is the constant, technology is the variable that makes historical evolution what it is. Unsurprisingly, technological transformation influences relationships among individuals and organizations, not only in domestic politics but also in the international system.

Much as technology has evident, positive effects on the well-being of societies, humans have always displayed mixed, complex reactions to technology itself (Brosnan, 2002). Such ‘fears’ are particularly acute regarding computers because they are perceived to have some ‘thinking’ capabilities. Despite the reassurances of computer scientists that machines do what they are programmed to do, part of public opinion has always had some misgivings (Dinello, 2005). Artificial intelligence (AI) has now brought that suspicion to a new level of
alarm, because this time machines fueled by AI seem capable of autonomous thinking and decision making. If these capabilities were extended to the military and generated lethal autonomous weapons, a sense of doom would become, if not justified, at least comprehensible. It is telling that no less a figure than Henry Kissinger, a scholar who spent many years investigating limited wars and nuclear weapons, reached out to the public with an angst-ridden article in The Atlantic to warn the world of such dangers (Kissinger, 2018). This ambivalent attitude is likely to intensify because, as the contributions to this book show, technology is now pervasive and certain only to become more so.

But what about states? For them, too, technology is a source of empowerment and, at the same time, of concern. The impact that technology has on key aspects of political and social life is the subject of private and public discussion daily. Consider how global media outlets present competition over telecommunication infrastructure as a key feature of today’s international power politics. On the one hand, advances in information communication technologies (ICTs) are widely believed to have given a boost to US power in past decades. The US military has long considered technology the core of its primacy. National security strategies have repeatedly treated technology as a means of giving the United States an edge over competitors, and as a source of concern about the implications of increasingly accessible disruptive technologies for rival nations and non-state actors. American tech companies are still dominant on the business scene. Most recently, the so-called FAGA companies (Facebook, Amazon, Google, and Apple) seem to have supplanted the major oil companies as the embodiment of global success in the US economy.

Yet, on the other hand, some in the US have come to see the American technological edge eroding in the last few years. The U.S.-China Economic and Security Review Commission – a Congress-mandated body – has recently restated that China’s practices over technology transfers, use of ‘illicit means’ to achieve technological leadership, and investment in ‘next-generation defense technologies’ are among the major reasons for concern for US global power. The business sector is affected too: the fate of Huawei – a company that aims to provide 5G infrastructure around the world – is one of the most immediate images of growing China–US rivalry, at least judging by newspaper headlines.

The centrality of technology as a tool for harnessing wealth and power in the twenty-first century is recognized beyond the US and China as well. European Union member states – and the European Commission – have been promoting awareness of the centrality of science and technology strategies in public policy discourse, planning and implementation, and the allocation of research funds. European initiatives in the civilian domain, such as those promoting the so-called key enabling technologies (micro- and nanoelectronics,
nanotechnology, industrial biotechnology, advanced materials, photonics, and advanced manufacturing technologies), have been central to the EU’s industrial policy for a decade.

Finally, non-state actors and illegal/criminal organizations perceive technology as an instrument with which to ensure their persistence and the achievement of their objectives. Technology can function as a vector of ideas, a means of circulating and dividing tasks within an organization, and a power multiplier (i.e., a source of tactical advantage in implementing those actors’ strategies). Non-state actors – particularly groups pursuing goals largely incompatible with those of political institutions or, as with criminal organizations, pursuing illicit ends – look to technological change as a potential source of advantage and resilience vis-à-vis more powerful players.
AN (INEVITABLY) SHORT LITERATURE REVIEW

If technology represents such a huge factor in the evolution of the economy, medicine and warfare, it is not surprising that the literature about it is equally large. Science and technology constitute a theoretical and empirical intersection among different disciplines, such as sociology, history, and anthropology (Sismondo, 2004). However, IR, international political economy (IPE) and security studies have often failed to incorporate the importance of technological innovation and its spread (Horowitz, 2010). The nexus between the evolution of the international environment and technological trends has been investigated only occasionally (Hanson, 2008; Herrera, 2006; McNeill, 1982; Ogburn, 1949; Sanders, 1983; Skolnikoff, 1993; Van Creveld, 1989), without specific connection to the major research questions of IR theory.

While in the 1950s and 1960s the impressive growth of technology raised uncertainties about human control over machines and the evolution of technology (Winner, 1977), in the 1980s and 1990s the spread of large ‘socio-technical systems’ (Fritsch, 2011, p. 28) across industrialized societies generated questions – though not systematically – about their political implications (Smith and Marx, 1994; Winner, 1986). Far from the idea, held by political economists such as Karl Marx and Adam Smith, of technology as an effective and autonomous vector of historical change, in that period technology was also identified as a source of oppression and a generator of new challenges (Vig, 1988). Transnational threats, international terrorism, and global economic turmoil marked the international scenario of the 2000s (Fearon, 1994; Powell, 1999; Smith and Stam, 2004), stimulating speculation about the role of information technology as a possible optimizer of US military strategy. While the US led what Andrew Krepinevich (1994) called the ‘cavalry to computer’ transition, the robotics revolution was expected to reshape future
warfare extensively – thus benefits for America and its allies seemed to be coupled with rising risks (Singer, 2009).

Most IR scholars have considered technology an exogenous variable that affects only minor features of international affairs – perhaps with exceptions such as nuclear weapons or the Internet – and not a central matter of inquiry and methodological debate. At the same time, major concerns about international security and military affairs cannot be explained without reference to the people and technologies involved (Brooks, 2007, p. 228). If technology is not a mere practical application of the material sciences, then technological change must be counted among the key explanatory variables of international politics. However, a multidisciplinary approach that also takes account of the impact of decisions (at both the individual and organizational level) and norms on the evolution of technology is far from developed.

This book aims to fill (at least) some of the theoretical and empirical gaps in IR, security studies, and defense transformation mentioned above, and to investigate the implications of technological change in international affairs, emphasizing the relation between technological transformation and socio-political variables. In recent decades, technological change has to some extent reshaped the global distribution of power and interests by empowering emerging countries and non-state actors, but that process has been neither steady nor homogeneously distributed among world regions and countries. Although the occurrence of breakthroughs makes the patterns of technological evolution difficult to foresee in exact or clear terms, we remain convinced (as we argue in this book) that a thorough analysis of the macro-dynamics governing technological change is the only reasonable way for international actors and public opinion to direct that change towards bettering the conditions of planet Earth rather than contributing to its demise.
STRUCTURE OF THE BOOK

Given the huge number of economic, industrial, health, and social areas affected by advanced technologies, a comprehensive approach to the subject would require a 1000-page volume, and even that might not be enough. Hence, we chose to focus on sectors that either appear to be moving quickly or whose further developments are likely to affect international affairs more than others: AI, robotics, space and satellites, computers, and networks. The last two in particular are cross-cutting, as without today’s computing power all technological research would still be years behind. The book is divided into three parts.
Part I: Technology and International Relations: Political, Economic and Ethical Aspects

This first part of the book opens by setting some of the theoretical boundaries of the debate on technology and international power. In Chapter 1, ‘Theorizing technology and international relations: prevailing perspectives and new horizons’, Eriksson and Newlove-Eriksson provide an overview of the literature on technology and IR theory. First, the chapter considers how technology is treated in general IR theory, including what role technology plays in the wider paradigmatic debates of IR. Second, the chapter scrutinizes attempts to develop specific theories on technology and international relations. Third, the chapter discusses the advantages and disadvantages of different approaches, including whether some areas are amply theorized while others remain under-researched. In particular, the chapter addresses how and to what extent various approaches have been able to analyze the relationship between technological and societal change, including both the rapid development of new technologies (concerning, for example, cyber, nanotechnologies, space, robotics, and AI) and the growing interconnection of technologies and critical infrastructures. Finally, the chapter suggests new horizons for empirically grounded theory on the relationship between technology and international society.

Chapter 2, ‘Mapping technological innovation’ by Moro and Valigi, focuses on how the global distribution of technological innovation is changing, with past investments and innovations influencing ongoing trends in R&D and patents. The attention devoted to patents and international standard-setting seems to increase as countries securitize their intellectual property – as shown by the case of China, whose patent applications have skyrocketed since 2000, overtaking the US in 2011. Regarding patents actually granted by World Intellectual Property Organization (WIPO) offices, Asia is far ahead of other continents, while Africa and Latin America are at the lower margins. North America has grown irregularly but has doubled its number of granted patents since 1990, while the number for Europe has decreased considerably, notwithstanding the vitality of some European firms. High-tech exports in mature economies do not show significant differences among regions. Looking at R&D, by contrast, remarkable differences among countries and regions exist: Japan and the US maintain the leading positions, China is quickly closing the gap, and the EU countries follow. Starting from these preliminary observations, the chapter then provides a map of technological innovation, also giving a clear representation of regional trends and their peculiarities by sector.

Amoroso and Tamburrini follow with Chapter 3, ‘Autonomy in weapons systems and its meaningful human control: a differentiated and prudential
approach’. The 2017 report by the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), which met in Geneva under the auspices of the Convention on Certain Conventional Weapons, emphasized the need to further consider ‘[t]he human element in the use of lethal force’. This need had already been advocated – in far stronger terms – by the non-governmental organization (NGO) Article 36, which has long made the case that meaningful human control (MHC) should be ensured over any weapons system (including, but not limited to, lethal ones). However, while the notion of MHC is commonly viewed as a viable starting point for international legal regulation of autonomous weapons, its precise content is still shrouded in controversy, giving rise to many conflicting interpretations of what truly ‘meaningful’ human control should amount to. Amoroso and Tamburrini provide a tentative answer or, at least, point a way forward on this challenging problem. Their working hypothesis is that the problem of establishing MHC does not lend itself to a one-size-fits-all solution achievable by endorsing one and only one of the various definitions propounded by states, NGOs, and scholars. Accordingly, they contend that, in the quest for ‘meaningfulness’ of human control over weapons systems, the international community should adopt a differentiated approach grounded in ethical and legal principles: to be ‘meaningful’, human control over weapons systems should ensure compliance with the international rules regulating the use of lethal or sub-lethal force in armed conflict and peacetime, and should avoid responsibility gaps in the case of harmful events affecting civilians or other protected persons. Such a principled approach leads to the identification of relevant properties (including the timeframe of weapons system action, operational goals and context, human veto powers, human cognitive limitations and the increasing pace of war), and of a suitable range of values for those properties, in terms of which to assess the presence or absence of MHC over weapons systems. This methodology is applied to exemplary cases of weapons systems – including fire-and-forget munitions and swarming unmanned aerial vehicles (UAVs) – to evaluate whether each satisfies the MHC requirement.

Part II: Robotics and Artificial Intelligence: Frontiers and Challenges

The second part of the book begins with Chapter 4, ‘Context matters: the transformative nature of drones on the battlefield’ by Kreps and Maxey. Since drones are prevalent on contemporary battlefields, the authors ask to what extent and under what conditions such technology can be transformative. They tackle the question by evaluating two competing perspectives – one considers drones transformative and destabilizing, the other views them as just another platform. They also present an alternative, third way to evaluate
drone proliferation, which shows that the transformative nature of drones in the near future depends on the battlefield context. Drones used for counterterrorism and by non-state actors are likely to be more influential than drones in interstate or intrastate conflict contexts, while drones used for humanitarian intervention and peacekeeping purposes have untapped transformative potential. Ultimately, future generations of drones that improve on stealth, speed, size, or the ability to swarm are likely to have wider-ranging impacts across battlefield contexts.

Chapter 5 is ‘Artificial intelligence: a paradigm shift in international law and politics? Autonomous weapon systems as a case study’ by Martino and Merenda. The authors start by noting that, particularly in recent decades, AI has been revolutionizing modern societies in a wide range of fields: AI devices are currently employed to assist judges, physicians, and soldiers with complex intellectual tasks. As technology advances further, not only the assistance but also the substitution of human beings by AI systems is being considered. Here, Martino and Merenda investigate the issue through a specific case study that is particularly relevant from an IR perspective: autonomous weapon systems (AWS) – namely, AI military robots that, allegedly, would not require human intervention in order to perform a military action. After an overall examination of the scholarly debate on AWS and a brief insight into the main technological questions involved, the authors consider the practice of AWS development and possible deployment by states, and the current international negotiations over their regulation or even outright prohibition.

Part III: Space and Cyberspace: Intersection of Two Security Domains

The book closes with four more contributions. The first is Chapter 6, ‘The use of space and satellites: problems and challenges’ by Anselmo, who reflects that international relations and technological change have been an integral part of the development of space activities for more than 60 years now. In fact, space is inherently international, because any activity carried out in orbit around the Earth has implications for many, if not all, countries. Moreover, space has been the stage for fierce technological competition, with military, civilian, and commercial fallout. A growing number of important applications and critical functions depend on more than 1000 spacecraft, and this number might increase by an order of magnitude in the coming decade, with many public and private players, mostly newcomers. The challenges of the past could therefore pale in comparison with the new problems to be faced in guaranteeing the safe and responsible use of this shared resource, especially given the emergence of new players in space, first and foremost China.

Colajanni and Marchetti are the authors of Chapter 7, ‘Cyber attacks and defenses: current capabilities and future trends’. Their contribution analyzes
the main capabilities and actors involved in the cyber landscape, which represents a unique scenario for politics, the military, intelligence agencies, and companies. All of them are forced to play an asymmetric game in which all the cards disadvantage the defenders. Anonymity, physical distance from targets, near-impossible attribution, known software vulnerabilities, inexpensive weapons based on human competences, and often free tools are some of the features that attackers can leverage. The defense, on the other hand, needs expensive frameworks and large numbers of competent, always-active people who must guard against increasing attacks. There is therefore a clear trend of investing in offensive tools and people more than in defensive technologies; moreover, as several recent cases demonstrate, the most harmful adversaries may lie beyond the major Western countries. Colajanni and Marchetti conclude their work with a look at two emerging risk factors: the automation of cyber attacks on a large scale, and the massive advent of the Internet of Things (IoT) and cyber-physical systems, which represent a new battleground where people’s safety will be the primary concern of cyber security. As the defenders continue to lose their battles and cyber threats involve the whole of society, it is time for a disruptive change in cyber security, requiring multicultural holistic approaches, consistent application of well-known practices, and strong commitment from political and industrial management even when remedial actions may not be popular.

In Chapter 8, ‘Critical infrastructure protection’, Locatelli considers the issue of critical infrastructure protection (CIP). He begins by noting that, while it is widely accepted that cyber threats qualify as one of the major contemporary security issues, their real scope has been appreciated only recently. In particular, as seen most vividly in the 2017 spread of the WannaCry ransomware, cyber offenders pose a serious societal threat. For this reason, CIP is increasingly mentioned in national security documents all over the world. Locatelli presents a broad overview of the current literature on the topic. He begins with a concise description of the infrastructures currently deemed critical, with a view to showing the complexities of CIP; he then reviews the main policies devised in the US and the European Union (EU) to secure critical infrastructures (CIs); finally, he draws conclusions and discusses future avenues of research.

The last contribution, Chapter 9, ‘A perfect storm: privatization, public–private partnership and the security of critical infrastructure’ by Giacomello, is an exploratory examination of three ‘historical’ events that, in conjunction but unintentionally, have increased the potential weaknesses of critical information infrastructures (CIIs) – the computer-managed assets, such as financial services, energy, telecommunications, and transportation, on which modern societies depend. The first event was the ‘business Internetization’ of data gathering and remote management of industrial control systems, which allowed businesses worldwide to reduce personnel costs and
management time. The second event was the ‘privatization wave’ of the 1980s, when utilities were privatized in the United States, Europe and elsewhere, in the conviction that the private sector could deliver the same services more efficiently. Finally, the emergence of transnational public–private partnerships (PPPs) in the ownership and governance of utilities further aggravated the CII vulnerabilities brought about by the ‘privatization wave’. The chapter is thus an investigation of how such historical events can help explain the origins of today’s CII vulnerability.
ACKNOWLEDGMENTS

The book is the offspring of a workshop held at the Fondazione Bruno Kessler (Trento) in December 2016 and organized under the auspices of the NATO Allied Command Transformation (NATO ACT) and the Alma Mater Studiorum – University of Bologna. Some of the key concepts discussed there – technological change, military transformation, and the adaptation of complex social organizations to innovation – became the backbone of this book. For this reason, we would like to thank the Director of the Research Center on International Politics and Conflict Resolution (CeRPIC) of the Fondazione Bruno Kessler, Professor Filippo Andreatta, the scholars who contributed to the debate, and Professor Sonia Lucarelli, who coordinates scientific and dissemination activities between the University of Bologna and NATO ACT.
REFERENCES

Brooks, Stephen G. (2007), Producing Security: Multinational Corporations, Globalization, and the Changing Calculus of Conflict, Princeton, NJ: Princeton University Press.
Brosnan, Mark J. (2002), Technophobia: The Psychological Impact of Information Technology, Abingdon: Routledge.
Dinello, Daniel (2005), Technophobia! Science Fiction Visions of Posthuman Technology, Austin, TX: University of Texas Press.
Fearon, James D. (1994), ‘Domestic political audiences and the escalation of international disputes’, American Political Science Review, 88(3), 577−92.
Fritsch, Stefan (2011), ‘Technology and global affairs’, International Studies Perspectives, 12(1), 27–45.
Hanson, Elizabeth (2008), The Information Revolution and World Politics, Lanham, MD: Rowman & Littlefield.
Herrera, Geoffrey L. (2006), Technology and International Transformation: The Railroad, the Atom Bomb, and the Politics of Technological Change, Albany, NY: State University of New York Press.
Horowitz, Michael C. (2010), The Diffusion of Military Power: Causes and Consequences for International Politics, Princeton, NJ: Princeton University Press.
Kissinger, Henry (2018), ‘How the enlightenment ends’, The Atlantic, June, accessed 25 January 2020 at https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
Krepinevich, Andrew F. (1994), ‘Cavalry to computer: the pattern of military revolutions’, The National Interest, 1 September, accessed 25 January 2020 at https://nationalinterest.org/article/cavalry-to-computer-the-pattern-of-military-revolutions-848.
McNeill, William H. (1982), The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, Chicago, IL: University of Chicago Press.
Ogburn, William F. (ed.) (1949), Technology and International Relations, Chicago, IL: University of Chicago Press.
Powell, Robert (1999), In the Shadow of Power: States and Strategies in International Politics, Princeton, NJ: Princeton University Press.
Sanders, Ralph (1983), International Dynamics of Technology, Westport, CT: Greenwood Press.
Singer, Peter W. (2009), Wired for War: The Robotics Revolution and Conflict in the 21st Century, New York: Penguin Books.
Sismondo, Sergio (2004), An Introduction to Science and Technology Studies, Malden, MA and Oxford: Blackwell Publishing.
Skolnikoff, Eugene B. (1993), The Elusive Transformation: Science, Technology, and the Evolution of International Politics, Princeton, NJ: Princeton University Press.
Smith, Alastair and Allan C. Stam (2004), ‘Bargaining and the nature of war’, Journal of Conflict Resolution, 48(6), 783–813.
Smith, Merritt R. and Leo Marx (eds) (1994), Does Technology Drive History? The Dilemma of Technological Determinism, Cambridge, MA: MIT Press.
van Creveld, Martin (1989), Technology and War: From 2000 B.C. to the Present, New York: Free Press.
Vig, Norman J. (1988), ‘Technology, philosophy, and the state: an overview’, in Michael E. Kraft and Norman J. Vig (eds), Technology and Politics, Durham, NC: Duke University Press.
Winner, Langdon (1977), Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Cambridge, MA: MIT Press.
Winner, Langdon (1986), The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago, IL: University of Chicago Press.
PART I
Technology and international relations: political, economic and ethical aspects
1. Theorizing technology and international relations: prevailing perspectives and new horizons

Johan Eriksson and Lindy M. Newlove-Eriksson

1. SELECTIVE ATTENTION TO TECHNOLOGY IN INTERNATIONAL RELATIONS
This chapter presents an introduction to and brief overview of the study of technology and international relations, including a discussion of research gaps and new horizons. In particular, this contribution addresses whether and how prevailing theoretical approaches have been able to analyze the relationship between technological and international political change. This includes how the personal, social, societal, and, to an extent, also biological worlds are becoming increasingly interconnected through new technologies – what has been referred to as the ‘fourth industrial revolution’ (Newlove-Eriksson and Eriksson, 2021; Schwab, 2017).

How, then, is technology addressed within the field of international relations (IR)? Given the considerable attention IR literature pays to globalization and global structural change – core themes of contemporary IR – it might be expected that the role of technology in world politics would be a major focus. What would global politics and globalization be if the rapid development and diffusion of global information and communications technologies (ICTs) were not taken into account? It would seem, nonetheless, that technology has received rather mixed and selective attention within IR.

On the one hand, notions of ‘information society’ and ‘network society’ (Castells, 2000) have certainly been picked up in IR, as in many other social sciences (Keohane and Nye, 1998; Mayer, Carpes and Knoblich, 2014; McCarthy, 2018; Nye, 2004). This body of scholarship comprises several more specific topics on which there is now a considerable amount of IR research – for example, Internet governance (Carr, 2015; Eriksson and Giacomello, 2009; Mueller, 2010; Price, 2018); cyber-security (Deibert, 2017;
Dunn Cavelty, 2008; Eriksson and Giacomello, 2007; Valeriano and Maness, 2018); digital diplomacy (Bjola and Holmes, 2015); international surveillance (Bauman et al., 2014; Lyon, 2007); and the role of social media in world politics (Hamilton and Shepherd, 2016). Some of the IR literature has also developed work in the area of weapons of mass destruction (WMD), particularly nuclear weapons (Herrera, 2006; Herz, 1950; Masco, 2018; Sagan and Waltz, 2002; Waltz, 1981).

On the other hand, several other technological developments that arguably impact the shape and conduct of world politics have until recently gone largely unnoticed in IR, including: artificial intelligence (AI), autonomous weapon systems (AWS), robotics, nanotechnology, 5G, the Internet of Things (IoT), space technology, bioengineering, neurotechnology, microelectronics, and combinations thereof. Valerie Hudson’s 1991 book on AI and international politics arguably remains the most comprehensive work on this topic within IR, despite the rapid development of AI. Technology with military applications has mainly been dealt with within the subfield of strategic studies, including analyses of the consecutive revolutions in military technology (Bousquet, 2017). There is growing interest within strategic studies concerning the more recent development of AWS and drones (Bode and Huelss, 2018; Bousquet, 2017; Fleischmann, 2015; Schwartz, 2016; Williams, 2015). Space policy has become a small but multidisciplinary field of its own (see the branch journals Space Policy and Astropolitics), but only a few contributions have been explicitly anchored within IR, either through publishing in IR journals or book series, or through the application of IR theory (Eriksson and Privalov, 2020; Newlove-Eriksson and Eriksson, 2013; Peoples, 2018; Sheehan, 2007). The general impression is that many of the new technological developments are studied in subfields, with little communication with the wider literatures and theories of IR.

Recently, however, several new contributions have been published that explicitly address technology and international relations (Drezner, 2019; Hoijtink and Leese, 2019; Kaltofen, Carr and Acuto, 2019; Mayer et al., 2014; McCarthy, 2018; Singh, Carr and Marlin-Bennett, 2019). These publications urge a more explicit focus on technology within IR, and they also make several original observations on specific technologies, as well as on the relationship between technology, society and politics more generally. These contributions are certainly useful in both expanding and deepening theory and research on technology and world politics. It is by no means certain, however, that they will have a wider impact on general IR theory. Will the debates on general IR and theories of, for example, globalization, international security and global governance pick up these important findings and contributions? We assert that they should, but given the increasingly heterogeneous and
fragmented nature of IR, there is a clear risk that technology and IR will simply develop into another subfield.

While there are growing subfields and niche bodies of literature that address technology and world politics, they have thus far had negligible impact on major IR theories and major IR textbooks. Widely used textbooks such as the popular The Globalization of World Politics, edited by Baylis, Smith and Owens (2019), have in their more recent editions incorporated ‘new’ theories, specifically feminism and post-colonialism, in addition to the established realist, liberal, Marxist and constructivist paradigms. Yet there is still no trace of a distinct IR theory of technology in major textbooks. More surprisingly, with the traditional exception of WMD, technology is typically not even addressed in the thematic or empirical sections of major textbooks. Textbooks explicitly dealing with topics such as globalization, terrorism, new wars and global governance – in which technology arguably plays a significant role – pay scant attention to technology and to existing theories on technology and politics. In particular, the still-dominant theories of IR – realism, liberalism, Marxism and largely also constructivism – tend to treat technology as external to politics, not as something integral to how contemporary politics and world affairs are carried out (Eriksson and Giacomello, 2007; Fritsch, 2014; Leese and Hoijtink, 2019; Mayer et al., 2014). Moreover, despite subfield growth and the diversification of research on technology and international relations, articles on ‘science’ and ‘technology’ amount to no more than around one percent of articles in major IR journals (Mayer et al., 2014, p. 14).

Noteworthy attempts have recently been made to incorporate technology into IR theorizing, including the emerging paradigm of ‘techno-politics’, which draws largely on the separate yet multidisciplinary field of science and technology studies (STS) (Drezner, 2019; Hoijtink and Leese, 2019; Kaltofen et al., 2019; Mayer et al., 2014; McCarthy, 2018; Newlove-Eriksson and Eriksson, 2021; Singh et al., 2019). Moreover, with some success, technology has become an explicit topic (among many others) in major social science associations, such as the International Studies Association (ISA). The ISA, like most social science organizations, has become more diversified, with an ever-increasing number of sections. In 2013, technology finally and explicitly made its way into the ISA family through the creation of the STAIR section, an acronym for Science, Technology and Arts in International Relations (cf. Singh et al., 2019). A similar project was established within the European International Studies Association (EISA) (cf. Hoijtink and Leese, 2019).

Despite these new contributions – and the arguably tremendous impact of the globalization of the Internet, the increasing societal dependency on so-called critical information infrastructures (CII), and the development of AI, robotics, nanotechnology and AWS (Bode and Huelss, 2018; Schwab, 2017) – there is much room for new theory and research that do not merely treat technology as
a subfield of IR, but also put it at the very core of IR theory (cf. Hoijtink and Leese, 2019; Kaltofen et al., 2019; Mayer et al., 2014; McCarthy, 2018; Singh et al., 2019). Given this room – and indeed necessity – for progress in the field, we turn now to a discussion and review of technology theorizing within the IR community.
2. PREVAILING PERSPECTIVES
Whereas technology is not yet a focal point in major IR theories and textbooks, there is now certainly a large but also quite diverse and fragmented literature on technology within various subfields of IR. Hence, it is impossible for a short introductory chapter to provide a comprehensive and nuanced overview of existing approaches. Notwithstanding, we have found that, despite the variety in detail and theoretical sophistication, it is relevant and useful to categorize contributions in terms of dominant IR paradigms (here limited to broadly conceived realism, liberalism and constructivism), with the addition of the heterogeneous paradigm of ‘techno-politics’, largely inspired by the field of STS (Fritsch, 2014; Leese and Hoijtink, 2019; Mayer et al., 2014; McCarthy, 2018).

However, there are certainly other ways to categorize theorizing on technology. Social science studies of technology, in IR, sociology and elsewhere, have been categorized along a scale of determinism and non-determinism – that is, whether technological development is seen as a ‘social product’ (Collins, 1983, p. 193) and whether it is seen to shape politics in a certain direction (Mayer et al., 2014; McCarthy, 2018, p. 14; Singh et al., 2019). Towards the deterministic end of this continuum, we find strongly optimistic as well as strongly pessimistic perspectives. Notably, the optimistic–pessimistic continuum cuts across the realist–liberal divide, and partly also across constructivism and other non-rationalist paradigms. As argued below, however, determinism is strongest among the rationalist realist and liberal paradigms, and optimistic perspectives are most commonly associated with liberalism, especially in theories of modernization and technologically fueled human and societal progress. Non-deterministic perspectives are located primarily within the constructivist and wider non-rationalist camp. Somewhere in between we situate the multifaceted paradigm of techno-politics. Examples of this identification and localization follow.

Realism

Realism – whether in its classical or neo version (also referred to as structural realism), defensive or offensive – focuses on states as actors, and security and material power are its main analytical categories. Within realism, technology
is traditionally regarded as a ‘force multiplier’, or more generally as belonging to the category of states’ ‘material capabilities’ (Eriksson and Giacomello, 2007; Fritsch, 2014; Leese and Hoijtink, 2019). While realists acknowledge the significance of technology for achieving superiority in terms of military power, surveillance and communication, they also tend to observe the series of ‘military revolutions’ – from archery to machine guns, nuclear weapons, satellites and cyber-warfare. The notion of a ‘revolution in military affairs’ (RMA) captures the development of and dependency on digital information and communications technology, and developments in robotics, materials and AI, for command and control in modern warfare (Arquilla and Ronfeldt, 2001; Bode and Huelss, 2018; Bousquet, 2017; Hoijtink and Leese, 2019).

Despite recognizing the significance of technology as a means of power and warfare, realists nevertheless tend to hold the view that technology does not change the nature of international relations (Drezner, 2019; Eriksson and Giacomello, 2007; Fritsch, 2014; Leese and Hoijtink, 2019). Even in a globally connected, highly technologized society, basic realist conceptualizations persist: the conception of a state-centric and ‘anarchic’ international system; the strategic notions of hegemony, balance of power, bandwagoning, buck-passing and so forth; and the supremacy of material power capabilities. From a realist perspective, technology may change, and technology may also impact international relations, but the nature of politics remains essentially the same. Hence, realists focusing on technology have observed how, for example, the development of nuclear weapons was crucial for the post-war ‘balance of terror’ between the two superpowers, and more generally for the relative stability of the bipolar system of the Cold War era (Herz, 1950; Sagan and Waltz, 2002; Waltz, 1981). Indeed, if there is one particular technology that realists (and IR more generally) have addressed, it is nuclear weaponry. While realists seldom attribute negative or positive values to technology, they do tend to claim that the distribution of technological capacity can have pivotal effects on the balance of power, and thus on whether there is stable peace or heightened risk of war.

Given that realism has a fundamentally pessimistic view of international relations, and that it tends to treat technology as unable to change the nature of politics, it may seem as if realists are also preconditioned to view technological development in pessimistic terms. This is not necessarily the case, however. Kenneth Waltz, the ‘father’ of neorealism, made the oft-cited claim that, with respect to the global spread of nuclear weapons, ‘more may be better’ (Waltz, 1981; Sagan and Waltz, 2002). This assumption was inferred from the realist concept of deterrence – contending that states that equip themselves with nuclear weapons will use their new capabilities to deter threats and preserve peace.
While realists acknowledge that technological capacity can be pivotal for power, war and peace, they maintain their rationalist, state-centric and materialist perspective. Factors emphasized by contending theoretical paradigms – such as the emergence of non-state actors in world affairs; the significance of norms, domestic politics, international institutions, identity and culture; or the notion that technology, although a product of human invention, may ultimately change both the nature of politics and what it means to be human – have not entered mainstream realist conceptions of politics and technology.

Liberalism

As an IR paradigm, liberalism incorporates a multitude of theories on, for example, complex interdependence, globalization, international institutions and linkages between domestic and foreign policy (for an overview, see Dunne, 2016). Liberalism emphasizes the plurality of actors, including the growth in numbers and influence of non-state actors in world politics. Liberalism also opens up the ‘black box’ of the state in world affairs, showing how the bureaucracy and organization of state apparatuses, as well as domestic actors and opinion, can shape international politics. Moreover, liberalism has inspired theories of international institutions, integration and global governance, showing how and under what conditions international agreements, norms and principles can make states comply and even voluntarily surrender autonomy. In addition, liberalism has advanced a much wider agenda than the military security agenda of realism, suggesting both a widened security concept and that non-security issues can have equal or greater priority on political agendas. Liberalism shares a rationalist epistemology with realism, and most liberal theories also take ‘anarchy’ (the absence of central government) in international affairs into account, although liberalism suggests ways in which the anarchic self-help system can be overcome.

With regard to technology, liberal theorists generally, although not exclusively, adopt an optimistic perspective. This is particularly evident in the many instances and applications of modernization theory. Traditional optimistic liberal ideas on the possibility – if not inevitability – of societal, economic, political and possibly also human progress are found in writings on technology and international relations. The first wave of globalization studies, which was generally guided by liberal ideas, was very optimistic, if not utopian in character. This body of liberal literature includes the much-cited ‘end of history’ debate (Fukuyama, 1989), and related topics such as ‘the end of sovereignty’ (Camilleri and Falk, 1992; Rosenau, 1990) and the alleged (yet much critiqued) emergence of a ‘borderless world’ (Ohmae, 1991).

In particular, the emergence and diffusion of the Internet has often been interpreted as a major liberalizing force, empowering social movements, and
in itself considered a form of democratization, providing platforms for communication and agenda-setting far beyond those of political and economic elites. For example, in a 1998 Foreign Affairs article, Keohane and Nye updated their highly influential theory of ‘complex interdependence’ for the ‘information age’, essentially claiming that global information and communications technology reinforced their original assumptions on power and interdependence (Keohane and Nye, 1998). Moreover, Nye addressed the diffusion of Internet access across the globe, which he claimed implied a form of democratization (Nye, 2004). The global diffusion of social media has become a topic in a second wave of Internet studies, which liberal theorists frequently interpret as a means of challenging or even toppling autocratic governments. Several studies along these lines have been conducted on the use of social media during the 2011 Arab Spring – for example, Khondker (2011) and Comunello and Anzera (2012). Admittedly, the literature on social media and IR more generally, let alone that on the case of the Arab Spring, has become rather heterogeneous.

Unsurprisingly, the liberal optimism of a cyber-fueled borderless and democratic future has faced critique not only from realists and critical theorists, but also from within liberalism. For example, in his book The Net Delusion, Evgeny Morozov, himself a disillusioned democratic activist, challenges the allegedly naive belief that ‘Internet freedom’ is a democratizing force that can topple autocracy (Morozov, 2011). According to his analysis, the global interconnectedness of people, states and societies through cyberspace has allowed much greater and more intrusive systems of state surveillance and political control than ever before. The emergence of Wikileaks and the 2013 Snowden revelations further inspired this liberal pessimism. Critical studies of counterterrorism and surveillance sprouted into new and growing multidisciplinary fields of study (see the specialized journal Critical Studies on Terrorism). Some of these contributions observe similarities in surveillance between liberal democracies and autocratic states, such as Sweden and China (Eriksson and Lagerkvist, 2016).

The new liberal pessimism could also be observed in foreign policy change, particularly exemplified by the US under Obama. The reigning approach of democratization by intervention was abandoned, along with notions of the US as global guardian of democracy and ‘freedom’ – what some have referred to as the American ‘world police’ (not a novel way of conceptualizing American hegemony and interpretations thereof, cf. Nye, 1995). Today, 30 years after the fall of the Berlin Wall, post-millennial dystopianism seems to have taken the place of the liberal utopianism of the early 1990s, fueled by the spread of insular and reactionary ideas: authoritarian nationalism, economic isolationism and the rise of post-truth society. Research and debate on how democracies die (Levitsky and Ziblatt, 2019) might not have replaced the optimistic themes of
democratization and democratic peace, but they have at least become equally significant.

Scholarship combining ‘liberal’ observations of complex interdependence, network society and the dangers of technological dependency existed before the more recent pessimistic turn, however. For example, Scott Sagan rejected Kenneth Waltz’s claim that the spread of nuclear weapons is beneficial for world peace. Sagan’s argument rested on rationalist organization theory, emphasizing the risk of theft, accident or intentional decisions to use nuclear weapons (Sagan and Waltz, 2002). Moreover, Ulrich Beck’s notion of (global) risk society and Charles Perrow’s concept of ‘normal accidents’ are of relevance here (Beck, 1992, 2012; Perrow, 1999). What Beck, Perrow, ‘high-reliability organization’ (HRO) scholars (cf. La Porte and Consolini, 2007) and the critical theorists that followed (cf. Weick, 2011) have shown is that complex and tightly coupled socio-infrastructural systems – such as airports, electricity infrastructure and railway interchanges – imply not only interdependency, but also high levels of risk. Societies that depend on highly complex and integrated technologies and infrastructures are also highly vulnerable to serious accidents and disasters. However, these concepts and bodies of literature have hardly been picked up in the wider IR literature (a noteworthy exception is Jarvis and Griffiths, 2007). Within the realm of IR, the liberal contribution to understanding risk and dangers has instead come through a widened security concept. With the exception of cyber-threats, however, the literature on widened security rarely deals with technology or technologically induced risk.

Notwithstanding, liberalism has not undergone an overwhelmingly pessimistic shift. In addition to the more deterministic approaches – ranging from largely optimistic modernization theory to primarily pessimistic accounts of global risk society – there is a multitude of approaches emphasizing conditionality and contextuality. Some liberal approaches also emphasize the simultaneous globalization and fragmentation of world politics – what Rosenau (2000) called ‘fragmegration’ – noting that this involves opportunities as well as risks. Such balanced or mixed approaches seem to be common in analyses of, for example, Internet governance and climate politics – two of the most salient topics in which technology is problematized (Eriksson and Giacomello, 2009; Mueller, 2010; Price, 2018). Whether determinist or non-determinist, however, liberal approaches share a rationalist ontology, emphasizing the plurality of state and non-state actors, linkages between domestic and international politics, the significance of international institutions, and globally networked complex interdependence. In general, liberalism assumes the possibility of systemic change, for better or worse, concerning both the substance and structure of global politics.
Constructivism Constructivism emerged as a new and rapidly influential IR paradigm in the early 1990s, initially through critique of then-dominant rationalist realist and Liberal paradigms (for an overview, see Barnett, 2018). Leading constructivists claim that in order to understand the interests, preferences and actions of political actors, we must first understand how interests and preferences are shaped. Constructivists have placed an emphasis on how interests cannot be taken for granted, suggesting that interests and preferences are shaped by the identity of actors, whether identity is defined in terms of nationhood, class, religion, or something else. While constructivists do not necessarily share conceptual frameworks, they defy determinism and essentialism. Moreover, constructivists emphasize the ‘logic of appropriateness’ over the ‘logic of consequences’ (cf. Sending, 2002), explaining why actors – including states and non-state actors in international relations – behave according to established social norms, such as respecting the territorial integrity of a neighboring state or, when violating the norm, either deny it or seek to legitimize it with reference to certain core values, such as ‘national security’. With respect to technology, a number of studies have been undertaken on cyber-communities, the mobilizing power of, for example, Jihadism (e.g., Ranstorp, 2007), and on the social construction of cyber-threats such as the notion of an ‘electronic Pearl Harbor’ (Dunn Cavelty, 2008; Eriksson and Giacomello, 2007). Many of these studies can be said to share a constructivist epistemology, emphasizing the pivotal role of identity, culture and social norms. Regarding the one technology explicitly addressed by both realists and liberals – nuclear weapons – the constructivist paradigm contributes by suggesting a new explanation as to why, for example, the nuclear arsenal of some states is seen as unproblematic, while those of others, however small, is considered an existential threat. In the words of Wendt: ‘500 British nuclear weapons are less threatening to the United States than 5 North Korean nuclear weapons, because the British are friends with the United States and the North Koreans are not, and amity and enmity are functions of shared understandings’ (Wendt, 1995, p. 73). Thus, according to constructivism, nuclear posturing and procurement are shaped less by material capacity and ‘relative gains’ as realists would have it, and more so by shared understandings and identities. Constructivist contributions to studies of technology and IR are now both numerous and diverse, yet are sometimes lumped together as social construction of technology studies (SCOT; cf. Bijker, 2017; Pinch and Bijker, 1984). If there is one common ground though, it is that technology is not equipped with any pre-given or ‘essential’ meaning. Technology is what actors make of it, and in this sense it is ‘politically neutral’, or rather possible to politicize in many different ways (Carr, 2016; Manjikian, 2018). Technology – for example, the
vast and complex Internet – does not have any built-in propensity towards any particular political outcome, according to constructivist thinking. This draws on Alexander Wendt’s critique of the realist (and partly liberal) assumption of anarchy in world politics. Wendt (1992), in his influential article, claimed that ‘anarchy is what states make of it’: despite the absence of central government in world affairs, states may foster many different types of international relations, some in which common norms are established and accepted, and others in which violation or aggression is considered possible, if not excused. What matters, at the end of the day, are processes of socialization and identity formation, which take time, and which can seldom be intentionally shaped by any single actor. Thus, with regard to technology, constructivists do not theorize on how technology shapes politics, but rather on how identities, norms and interests regarding technology are formed. Many constructivists make it a starting point to criticize technological determinism in other approaches, notably that of liberal modernization theory, but also that of realist theory – for example, on deterrence. In her constructivist study of US cyber policy, Carr makes this point very clear: ‘technology does not determine outcomes’ (Carr, 2016, p. 78; cf. Manjikian, 2018). Constructivists claim that perceptions of technology are shaped more by identities, ideas and processes of socialization than by technological development in and of itself. For example, a handful of studies have been conducted on the securitization of technology and infrastructure – demonstrating how actors identify and selectively frame technology and infrastructure as either threat or opportunity (Dunn Cavelty, 2005, 2008; Eriksson, 2001). In so doing, constructivists are not asking whether threats are ‘real’, but rather how, why and with what consequences something is labeled and treated as if it were a threat. This research highlights the political potency of ‘national security’ rhetoric, which legitimizes extraordinary measures, including the use of force. While constructivists tend to adopt a ‘neutral’ stance on technological development, emphasizing how outcomes are shaped by social and political factors rather than by technology itself – that is, viewing technology as embedded in socio-political layers (cf. Leese and Hoijtink, 2019) – there are elements of dystopianism among theorists further along the post-rationalist continuum. Post-structuralists, feminists and some critical theorists – while generally adopting a non-determinist methodology – have published several openly pessimistic studies of technological development. They have done so particularly by adopting normative theory, rejecting, for example, automated weapons and surveillance systems on ethical and moral grounds (e.g., Rainey and Goujon, 2011). Most of these contributions, however, have not been made in IR, but in sociology, cultural studies, STS and other disciplines. Constructivists and other post-rationalists have also made several more specific observations on technology and international relations. Of particular
significance is the reflection that many novel technologies can reshape perceptions of distance. For example, it has been argued that war in the digital age distances actors from the bloody reality of war, as in the case of pilots of remotely controlled drones or of computer hacking, which may, for example, immobilize air traffic control or nuclear power plants (Der Derian, 2009; Eriksson and Giacomello, 2007, p. 20). In addition to distancing actors from the effects of their actions – which is basically a continuation of the effect achieved by indirect fire (artillery) or aerial bombing – digitalization creates a sense of virtual reality. Remotely controlling a drone armed with missiles is similar to playing a computer game – the ‘pilot’ sits in a room in a location often far away from the killing site, looking at a computer screen, using only a keyboard and a joystick (Coeckelbergh, 2013; Fleischman, 2015; Schwarz, 2016; Williams, 2015).

Techno-politics

The fourth overarching paradigm to be discussed has been termed techno-politics (Hoijtink and Leese, 2019; Kaltofen et al., 2019; Mayer et al., 2014; McCarthy, 2018; Singh et al., 2019). This is a relatively new, broad, multifaceted, rapidly growing and therefore also loosely formed ‘paradigm’. It draws heavily on the multidisciplinary field of STS. IR theory plays a limited role herein, just as STS more generally has largely developed through disciplines other than IR and political science. Yet, a few pioneers have recently tried to develop techno-politics with an explicit attempt to contribute to IR – that is, to make techno-politics a subdiscipline of IR (Carr, 2016; Fritsch, 2014; Hoijtink and Leese, 2019; Kaltofen et al., 2019; Mayer et al., 2014; McCarthy, 2018; Singh et al., 2019). Unlike the general theories of IR, STS is to a much greater extent based on empirical, conditional and contextualized theory, emphasizing variety and idiosyncrasy (Fritsch, 2014; Kaltofen et al., 2019). This also means that techno-politics is much more loosely presented than any of the major IR paradigms. Nevertheless, there seem to be some shared assumptions among scholars within the techno-politics paradigm, at least among those who seek to make explicit contributions to IR. First, technology is considered neither good, bad nor neutral (Fritsch, 2014, p. 115; Kranzberg, 1986; Newlove-Eriksson and Eriksson, 2021). This statement is meant to clarify that, in contrast to the major IR paradigms (including constructivism), technology is considered to be ‘deeply political’, and that technology is intertwined with or embedded within society and politics, rather than being an exogenous factor. As Fritsch puts it: technology is an ‘ambivalent endogenous core component of the global system’ (Fritsch, 2014, p. 115). The paradigm of techno-politics seeks to ‘cover the deserted area between technological determinism and human agency’ (Mayer et al., 2014).
Thus, it shares a non-deterministic approach with constructivism, but unlike constructivism it does not see human agency as the ultimate explanation for political outcomes. Instead, techno-politics implies that technology and politics (as well as social systems more generally) shape and reshape each other. Second, there are multifaceted ways in which technology and politics are intertwined and shape and reshape each other. Scholars of techno-politics often address large-scale socio-technical systems, also called ‘assemblages’ (Mayer et al., 2014; cf. Singh et al., 2019). Assemblages imply that technology and politics continuously shape each other in complex and unpredictable ways. The Internet is an obvious and probably the most closely studied ‘assemblage’ – a global infrastructure that relies on hardware in the form of cables on land, under water and overhead, routers, computers and other devices; software in the form of technical protocols (TCP/IP), the domain name system (DNS), browsers and other applications; and a multifaceted governance structure ranging from Internet service providers to national governments and the stakeholders of global Internet governance, including the corporation at its core, the Internet Corporation for Assigned Names and Numbers (ICANN). How the hardware, software, governance and human utilization of the Internet shape each other is the focus of much scholarship on the Internet (Price, 2018). Increasingly, research is being done on how the Internet is not simply a separate socio-technical system, but is also integrated with everything from our ‘smart’ phones to ‘smart’ homes, ‘smart’ healthcare and ‘smart’ cities – including the so-called Internet of Things powered by fifth-generation (5G) networks (cf. Abomhara and Køien, 2015; Weber, 2010). Other socio-technical systems are also addressed, such as railway interchanges and airports – mega-sites that increasingly combine speedy transportation of people and goods with other commercial functions, such as shopping, leisure and entertainment. Notably, in such junctions, public–private partnerships are built into both the physical infrastructure and its organization and governance (Newlove-Eriksson, 2020). Third, studies on techno-politics have made more specific contributions, some of which have already been picked up by the wider IR literature. Particularly noteworthy is the notion of time–space compression – the observation that the development of global information and communications technology has allowed real-time communication on a global scale, regardless of where people are located. The notion of time–space compression, which has been elaborated in STS (Fritsch, 2014), became a core element of the literature on globalization – a core theme in IR (Scholte, 2005). More recently, the field of surveillance studies has elaborated new forms of power and control – regarding, for example, self-imposed censorship, governmental and commercial utilization of Internet search histories, and how online algorithms
shape and adapt to people’s interests and communities (Bauman et al., 2014; Lyon, 2007). There are also some studies suggesting that new techno-social systems change the very meaning of agency, theorizing non-human agency both in the form of wider, actor-like system capacity and in the form of self-learning AI (Mayer et al., 2014; Srnicek, 2018). The ‘cyborg’ and AI theme also raises questions of ethics: whether laws and constitutional rights apply to non-humans, and whether ‘cyborgs’ may seek independence from and eventually turn against human society – themes that have been more thoroughly explored in science fiction than in science (cf. Olander and Greenberg, 1978).
3 GAPS AND NEW HORIZONS
To begin with, IR general theory and textbooks need to pay explicit attention to technology. Advanced theory and research on technology and societal development have progressed in other disciplines, not just in STS, but this theme is still largely absent from major IR textbooks, and IR research conducted on technology is largely treated as a subfield. With IR theory and textbooks remaining largely silent on the rapid development of, and increasing societal dependency on, complex and highly integrated technologies and infrastructure, the discipline and teaching of IR run the risk of becoming increasingly irrelevant. Theories on war and peace, globalization and global governance – to name but a few core IR topics – will continue to lack depth, explanatory power and societal saliency if they do not explicitly address the role of technology. Techno-political perspectives, which are currently rapidly developing, deserve their own chapters in IR textbook sections on theory, and specific technologies and infrastructural ‘assemblages’ deserve a place in sections on cases and issues, in addition to the traditional chapters on WMD and the occasional piece on Internet governance as a case of global governance. If technology is not explicitly problematized in the major textbooks we use to train and educate future generations of IR scholars, progress will likely still happen, but it will be much slower than it needs to be, and IR theory will be perceived as lagging behind not just real-world developments, but also other disciplines. We also acknowledge that for many scholars interested in technology and world politics, it does not really matter what disciplinary identity their research is associated with, as long as it gets acknowledged by peers (from whatever discipline) who are interested in similar themes. This may also reflect the general fragmentation and diversification not just of IR, but of most of the social sciences. Whether IR is in the midst of an identity crisis with an uncertain future has been debated (Dunne, Hansen and Wight, 2013). This may seem to be the case with respect to how IR approaches technology. Yet it would
be a mistake if ‘technology and global studies’ developed into a new subfield, with its own conferences and publishing outlets. The relationship between technology, society, governance and human agency is far too important to be relegated to specialized journals – it needs to be a core theme in studies of world politics, regardless of what discipline individual scholars come from. Having made this exhortation, we would also like to point out some specific gaps and new horizons. First, there is considerable room for theory and research on the interconnectedness of multiple techno-societal systems. Social media, big data, cloud services and the IoT are not developing in isolation from each other, but are increasingly integrated and influential in the daily activities of individuals, organizations and governments – domestically as well as internationally. Governmental and commercial services such as electricity, transportation, financial services, news media, education, health and medicine are connected through global ICT. This new and deeper form of complex interdependence – techno-political transformation, fourth industrial revolution, or whatever this new reality is called – goes beyond the emergence of ‘information society’. This new structural shift is about the fusion of the physical, biological, digital and social worlds – through interconnected technologies (Schwab, 2017; Newlove-Eriksson and Eriksson, 2021). The smartphone is the most literal expression of what this means – a hand-held multipurpose digital device that in itself implies a fusion of the social, the societal, the global, the personal and even the biological (e.g., through applications that can assess your health). When almost every societal sector, infrastructure and walk of life is integrated and interconnected, crucial and still largely unanswered questions arise. What is the nature of this development, what are its causes, what are its consequences, and how can they be dealt with? Second, while there is now a relatively large body of literature on specific technologies, particularly WMD, AWS and ICTs, there is a noteworthy absence of IR-oriented research on, for example, AI, robotics, nanotechnology and genetic editing (but see Hoijtink and Leese, 2019; Mayer et al., 2014; McCarthy, 2018). As noted in the introduction, there are some specialized journals and subfields conducting social science research on these technologies, but much of it is rather weak on theory, often lacking any attempt to provide systematic generalizations or to contribute to wider theories of global change. Finally, the relationship of technology, politics and popular culture deserves further attention. While a handful of noteworthy contributions have raised this within IR, particularly with regard to IR and science fiction and fantasy, most of the still rather few studies are focused on how fiction illustrates real-world politics and political discourse (e.g., Kiersey and Neumann, 2013). There are some noteworthy contributions, however, that deal more deeply with how politics and popular culture not only reflect but also shape each other, and how they
are increasingly entangled, sometimes in a literal sense (Der Derian, 2001; Crilley, 2020). There is considerable room for more theory and research on these themes, however. For example, what is the connection between the simultaneous release in October 2015 of the motion picture The Martian and NASA’s then new project ‘Journey to Mars’? What are the implications of China’s 2019 release of the science fiction movie The Wandering Earth, the simultaneous successful Chinese landing of a probe on the far side of the Moon, the opening of a Mars settlement exhibition, and the development of a new Chinese space station orbiting Earth? There appears to be a new horizon for theory and research that goes beyond how popular culture reflects or inspires what goes on in the real world, focusing also on how technology, politics and popular culture are increasingly integrated, sometimes in explicit techno-political-cultural projects.
4 CONCLUSION

This chapter confirms past observations on how general IR and major textbooks have largely failed to take technology into account, either as a core theme of theory or as an issue-area. The major IR theories of realism, liberalism and constructivism have in various degrees and forms addressed technology, but their focus has largely been on WMD and ICT. This chapter also corroborates the observation that these theories largely treat technology as an exogenous factor – that is, technology has not been assigned a core role in explaining politics and power. By contrast, a more loosely organized paradigm of techno-politics, inspired more by STS than by IR, has increasingly addressed core topics of IR, including war and peace, governance and global power shifts. This nascent but rather fragmented literature emphasizes that technology is not essentially good or bad, but also that it is not neutral: it has deep political implications, albeit in a non-deterministic way. Techno-political studies have yet to make a significant mark in IR, but they are making progress with regard to conceptualization of, for example, the fusion of new technologies with the social, the political and even the biological. There is also room for new theory and research both on such structural techno-political shifts and on the politics of specific technologies, including AI, automated weapons and bioengineering.
REFERENCES

Abomhara, Mohamed and Geir M. Køien (2015), ‘Cyber security and the Internet of Things: vulnerabilities, threats, intruders and attacks’, Journal of Cyber Security and Mobility, 4(1), 65–88.
Arquilla, John and David Ronfeldt (2001), Networks and Netwars: The Future of Terror, Crime and Militancy, Santa Monica, CA: RAND Corporation.
Barnett, Michael L. (2018), ‘Constructivism’, in Alexandra Gheciu and William C. Wohlforth (eds), Oxford Handbook of International Security, Oxford: Oxford University Press.
Bauman, Zygmunt, Didier Bigo, Paulo Esteves and Elspeth Guild (2014), ‘After Snowden: rethinking the impact of surveillance’, International Political Sociology, 8(2), 121–44.
Baylis, John, Steve Smith and Patricia Owens (eds) (2019), The Globalization of World Politics, 8th edition, Oxford: Oxford University Press.
Beck, Ulrich (1992), Risk Society: Towards a New Modernity, London and New York: SAGE Publications.
Beck, Ulrich (2012), ‘Global risk society’, in The Wiley-Blackwell Encyclopedia of Globalization, Wiley Online Library, accessed 15 November 2020 at https://onlinelibrary.wiley.com/doi/abs/10.1002/9780470670590.wbeog242.
Bijker, Wiebe E. (2017), ‘Constructing worlds: reflections on science, technology and democracy (and a plea for bold modesty)’, Engaging Science, Technology and Society, 3, 315–31.
Bjola, Corneliu and Marcus Holmes (eds) (2015), Digital Diplomacy: Theory and Practice, Abingdon: Routledge.
Bode, Ingvild and Henrik Huelss (2018), ‘Autonomous weapons systems and changing norms in international relations’, Review of International Studies, 44(3), 393–413.
Bousquet, Antoine (2017), ‘A revolution in military affairs? Changing technologies and changing practices of warfare’, in D.R. McCarthy (ed.), Technology and World Politics (pp. 165–81), Abingdon: Routledge.
Camilleri, Joseph A. and Jim Falk (1992), The End of Sovereignty: The Politics of a Shrinking and Fragmenting World, Aldershot, UK and Brookfield, VT, USA: Edward Elgar Publishing.
Carr, Madeline (2015), ‘Power plays in global Internet governance’, Millennium: Journal of International Studies, 43(2), 640–49.
Carr, Madeline (2016), US Power and the Internet in International Relations: The Irony of the Information Age, Basingstoke: Palgrave Macmillan.
Castells, Manuel (2000), The Rise of the Network Society, Hoboken, NJ: Wiley-Blackwell.
Coeckelbergh, Mark (2013), ‘Drones, information technology, and distance: mapping the moral epistemology of remote fighting’, Ethics and Information Technology, 15(2), 87–98.
Collins, Randall (1983), ‘Development, diversity, and conflict in the sociology of science’, The Sociological Quarterly, 24(2), 185–200.
Comunello, Francesca and Giuseppe Anzera (2012), ‘Will the revolution be tweeted? A conceptual framework for understanding the social media and the Arab Spring’, Islam and Christian–Muslim Relations, 23(4), 453–70.
Crilley, Rhys (2020), ‘Where are we at? New directions for research on popular culture and world politics’, International Studies Review, accessed 15 November 2020 at https://academic.oup.com/isr/advance-article/doi/10.1093/isr/viaa027/5843456.
Deibert, Ronald (2017), ‘Cyber-security’, in Myriam Dunn Cavelty and Thierry Balzacq (eds), Routledge Handbook of Security Studies (pp. 186–96), 2nd edition, Abingdon: Routledge.
Der Derian, James (2001), Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network, Boulder, CO: Westview Press.
Der Derian, James (2009), Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network, 2nd edition, London and New York: Routledge.
Drezner, Daniel W. (2019), ‘Technological change and international relations’, International Relations, 33(2), 1–18.
Dunn Cavelty, Myriam (2005), ‘The socio-political dimensions of critical information infrastructure protection (CIIP)’, International Journal of Critical Infrastructures, 1(2/3), 258–68.
Dunn Cavelty, Myriam (2008), Cyber-security and Threat Politics: US Efforts to Secure the Information Age, Abingdon: Routledge.
Dunne, Tim (2016), ‘Liberalism’, in John Baylis, Steve Smith and Patricia Owens (eds), The Globalization of World Politics (pp. 116–28), 7th edition, Oxford: Oxford University Press.
Dunne, Tim, Lene Hansen and Colin Wight (2013), ‘The end of international relations theory?’, European Journal of International Relations, 19(3), 405–25.
Eriksson, Johan (2001), ‘Cyberplagues, IT and security: threat politics in the information age’, Journal of Contingencies and Crisis Management, 9(4), 200–210.
Eriksson, Johan and Giampiero Giacomello (2007), ‘Introduction: closing the gap between international relations theory and studies of digital-age security’, in Johan Eriksson and Giampiero Giacomello (eds), International Relations and Security in the Digital Age, Abingdon: Routledge, pp. 1–28.
Eriksson, Johan and Giampiero Giacomello (2009), ‘Who controls the Internet? Beyond the obstinacy or obsolescence of the state’, International Studies Review, 11(1), 205–30.
Eriksson, Johan and Johan Lagerkvist (2016), ‘Cyber-security in Sweden and China: going on the attack?’, in Karsten Friis and Jens Ringsmose (eds), Conflict in Cyber Space: Theoretical, Strategic and Legal Perspectives, Abingdon: Routledge, pp. 83–94.
Eriksson, Johan and Roman Privalov (2020), ‘Russian space policy and identity: visionary or reactionary?’, Journal of International Relations and Development, accessed 20 September 2020 at https://link.springer.com/article/10.1057/s41268-020-00195-8.
Fleischman, William M. (2015), ‘Just say “no!” to lethal autonomous robotic weapons’, Journal of Information, Communication and Ethics in Society, 13(3/4), 299–313.
Fritsch, Stefan (2014), ‘Conceptualizing the ambivalent role of technology in international relations: between systemic change and continuity’, in Maximilian Mayer, Mariana Carpes and Ruth Knoblich (eds), The Global Politics of Science and Technology – Vol. 1: Concepts from International Relations and Other Disciplines, Heidelberg: Springer, pp. 115–38.
Fukuyama, Francis (1989), ‘The end of history?’, The National Interest, No. 16 (Summer), 3–18.
Hamilton, Caitlin and Laura J. Shepherd (eds) (2016), Understanding Popular Culture and World Politics in the Digital Age, Abingdon: Routledge.
Herrera, Geoffrey (2006), Technology and International Transformation: The Railroad, the Atom Bomb, and the Politics of Technical Change, Albany, NY: State University of New York Press.
Herz, John H. (1950), ‘Idealist internationalism and the security dilemma’, World Politics, 2(2), 157–80.
Hoijtink, Marijn and Matthias Leese (eds) (2019), Technology and Agency in International Relations, Abingdon: Routledge.
Hudson, Valerie (1991), Artificial Intelligence and International Politics, Boulder, CO: Westview Press.
Jarvis, Daryl S.L. and Martin Griffiths (2007), ‘Learning to fly: the evolution of political risk analysis’, Global Society, 21(1), 5–21.
Kaltofen, Carolin, Madeline Carr and Michele Acuto (eds) (2019), Technologies of International Relations: Continuity and Change, Cham, Switzerland: Palgrave Macmillan.
Keohane, Robert O. and Joseph S. Nye (1998), ‘Power and interdependence in the information age’, Foreign Affairs, 77(5), 81–95.
Khondker, Habibul Haque (2011), ‘Role of the new media in the Arab Spring’, Globalizations, 8(5), 675–79.
Kiersey, Nicholas J. and Iver B. Neumann (eds) (2013), Battlestar Galactica and International Relations, Abingdon: Routledge.
Kranzberg, Melvin (1986), ‘Technology and history: “Kranzberg’s laws”’, Technology and Culture, 27(3), 544–60.
La Porte, Todd and Paula Consolini (1998), ‘Theoretical and operational challenges of “high-reliability organizations”: air traffic control and aircraft carriers’, International Journal of Public Administration, 21(6–8), 847–52.
Leese, Matthias and Marijn Hoijtink (2019), ‘How (not) to talk about technology: international relations and the question of agency’, in Marijn Hoijtink and Matthias Leese (eds), Technology and Agency in International Relations, Abingdon: Routledge, pp. 1–23.
Levitsky, Steven and Daniel Ziblatt (2019), How Democracies Die, New York: Crown Publishing.
Lyon, David (2007), Surveillance Studies: An Overview, Cambridge, UK: Polity Press.
Manjikian, Mary (2018), ‘Social construction of technology: how objects acquire meaning in society’, in Daniel R. McCarthy (ed.), Technology and World Politics: An Introduction, Abingdon: Routledge, pp. 25–41.
Masco, Joseph P. (2018), ‘Nuclear technoaesthetics: sensory politics from Trinity to the virtual bomb in Los Alamos’, in Daniel R. McCarthy (ed.), Technology and World Politics: An Introduction (pp. 103–25), Abingdon: Routledge.
Mayer, Maximilian, Mariana Carpes and Ruth Knoblich (2014), ‘A toolbox for studying the global politics of science and technology’, in Maximilian Mayer, Mariana Carpes and Ruth Knoblich (eds), The Global Politics of Science and Technology – Vol. 2: Perspectives, Cases and Methods, Heidelberg: Springer, pp. 1–17.
McCarthy, Daniel R. (ed.) (2018), Technology and World Politics: An Introduction, Abingdon: Routledge.
Morozov, Evgeny (2011), The Net Delusion: How Not to Liberate the World, London: Allen Lane.
Mueller, Milton L. (2010), Networks and States: The Global Politics of Internet Governance, Cambridge, MA: MIT Press.
Newlove-Eriksson, Lindy M. (2020), ‘Accountability and patchwork governance in urban rail interchanges: junctions of London Crossrail and Stockholm City Line compared’, Public Works Management & Policy, 25(2), 105–31.
Newlove-Eriksson, Lindy M. and Johan Eriksson (2013), ‘Governance beyond the global: who controls the extraterrestrial?’, Globalizations, 10(2), 277–92.
Newlove-Eriksson, Lindy M. and Johan Eriksson (2021), ‘The EU and the technological megashift: threats, vulnerabilities and fragmented responsibilities’, in Antonina Bakardjieva Engelbrekt, Anna Michalski and Lars Oxelheim (eds), The EU and the Technological Shift, Basingstoke: Palgrave Macmillan.
Nye, Joseph S. (1995), ‘The case for deep engagement’, Foreign Affairs, 74(4), 90–102.
Nye, Joseph S. (2004), Power in the Global Information Age: From Realism to Globalization, Abingdon: Routledge.
Ohmae, Kenichi (1991), The Borderless World: Power and Strategy in the Interlinked Economy, New York: HarperCollins.
Olander, Joseph D. and Martin H. Greenberg (1978), International Relations Theory through Science Fiction, New York: New Viewpoints.
Peoples, Columba (2018), ‘Extra-terrestrial technopolitics: the politics of technology in space’, in Daniel R. McCarthy (ed.), Technology and World Politics: An Introduction (pp. 182–203), Abingdon: Routledge.
Perrow, Charles (1999), Normal Accidents: Living with High Risk Technologies, Princeton, NJ: Princeton University Press.
Pinch, Trevor J. and Wiebe E. Bijker (1984), ‘The social construction of facts and artifacts: or how the sociology of science and the sociology of technology might benefit each other’, Social Studies of Science, 14(3), 399–441.
Price, Monroe (2018), ‘The global politics of Internet governance: a case study in closure and technological design’, in Daniel R. McCarthy (ed.), Technology and World Politics: An Introduction, Abingdon: Routledge, pp. 126–45.
Rainey, Stephen and Philippe Goujon (2011), ‘Toward a normative ethics for technology development’, Journal of Information, Communication and Ethics in Society, 9(3), 157–79.
Ranstorp, Magnus (2007), ‘The virtual sanctuary of al-Qaeda and terrorism in the age of globalization’, in Johan Eriksson and Giampiero Giacomello (eds), International Relations and Security in the Digital Age, Abingdon: Routledge, pp. 31–56.
Rosenau, James N. (1990), Turbulence in World Politics: A Theory of Change and Continuity, Princeton, NJ: Princeton University Press.
Rosenau, James N. (2000), ‘The governance of fragmegration: neither a world republic nor a global interstate system’, Studia Diplomatica, 53(5), 15–39.
Sagan, Scott D. and Kenneth N. Waltz (2002), The Spread of Nuclear Weapons: A Debate Renewed, 2nd edition, New York: W.W. Norton & Company.
Scholte, Jan A. (2005), Globalization: A Critical Introduction, 2nd edition, London: Red Globe Press.
Schwab, Klaus (2017), The Fourth Industrial Revolution, New York: Currency Books.
Schwarz, Elke (2016), ‘Prescription drones: on the techno-biopolitical regimes of contemporary “ethical killing”’, Security Dialogue, 47(1), 59–75.
Sending, Ole Jacob (2002), ‘Constitution, choice and change: problems with the “logic of appropriateness” and its use in constructivist theory’, European Journal of International Relations, 8(4), 443–70.
Sheehan, Michael (2007), The International Politics of Space, Abingdon: Routledge.
Singh, J.P., Madeline Carr and Renée Marlin-Bennett (eds) (2019), Science, Technology and Art in International Relations, Abingdon: Routledge.
Srnicek, Nick (2018), ‘New materialism and posthumanism: bodies, brains and complex causality’, in Daniel R. McCarthy (ed.), Technology and World Politics, Abingdon: Routledge, pp. 84–99.
Valeriano, Brandon and Ryan C. Maness (2018), ‘International relations theory and cyber-security: threats, conflicts and ethics in an emerging domain’, in Chris Brown and Robyn Eckersley (eds), The Oxford Handbook of Political Theory (Ch. 20), Oxford: Oxford University Press.
Waltz, Kenneth N. (1981), ‘The spread of nuclear weapons: more may be better’, The Adelphi Papers, 21(171).
Weber, Rolf H. (2010), ‘Internet of Things – new security and privacy challenges’, Computer Law & Security Review, 26(1), 23–30.
Weick, Karl (2011), ‘Organizing for transient reliability: the production of dynamic non-events’, Journal of Contingencies and Crisis Management, 19(1), 21–7.
Wendt, Alexander (1992), ‘Anarchy is what states make of it: the social construction of power politics’, International Organization, 46(2), 391–425.
Wendt, Alexander (1995), ‘Constructing international politics’, International Security, 20(1), 71–81.
Williams, John (2015), ‘Distant intimacy: space, drones, and just war’, Ethics and International Affairs, 29(1), 93–110.
2. Mapping technological innovation

Francesco Niccolò Moro and Marco Valigi

1 INTRODUCTION

The objective of this chapter is to map the geographic distribution of technological change in the current global context and preliminarily assess its impact on the global distribution of power. The chapter also aims to provide a preliminary view of the ‘social geography’ of technological innovation – that is, the actors (beginning with the distinction between governments and private actors) that are responsible for, and are ‘in charge’ of, new technological developments. Technological innovation is typically difficult to grasp. In this chapter we rely largely upon data on patents and R&D. Use of these data is not fault-free. Scholarship on innovation has long noticed that such data are retrospective and that they fail to measure important transformations occurring in rapidly developing sectors (Basberg, 1987). Yet they are also relatively transparent, homogeneous over time, and have often been found to provide a good, if imperfect, indication of where technological development will take place in the future (Kogan et al., 2017). The picture that emerges is characterized by two key features. First, the global distribution of technological innovation has been changing deeply over the past years. Although data on patents and investments in R&D (the two standard indicators of technological innovation used in this study) are inherently imperfect, Asian countries have consistently increased their margin over other continents in patent applications. China’s patent applications have skyrocketed since 2000, overtaking those of the USA in 2011. European countries file far fewer applications. Nonetheless, differences within Europe exist, as illustrated by Germany, whose filings are at least three times those of France and the UK. Second, the process of technological transformation has implied a significant change in agency among producers of technology. Recently in the United States, the growth of private R&D investment in industry has largely outpaced the expansion of federal investment in comparable sectors, and this trend seems to be a common trait in Western capitalist democracies. In Asia, by contrast, the picture is more nuanced. Despite the primacy of
China in patents and the role played by government in R&D, Japan is still the world’s third largest country in terms of patents, and its R&D is mostly funded by private actors. The chapter is structured as follows. Section 2 provides an overview of the (increasing) role of technology as a source of states’ power and influence in the international arena. Section 3 focuses on Asia as the locus of major changes in the world distribution of power, examining the role of R&D in technology and the implications of innovation policies as measured by patents. Section 4 looks at the key sectors in technological innovation. Section 5 investigates the influence of private actors on technological change. Finally, the conclusion assesses the impact of recent major changes in the geographical distribution of innovation and what this implies for the international system.
2 TECHNOLOGY, STATES AND BUSINESS
Over the last ten years, the relationship between state power and technological development has grown increasingly critical. On the domestic front, the politics–technology relationship is in turn altering that between the state and the people, above all regarding the delicate equilibrium between individual security and freedom. In the international arena, technological competition – and the not exclusively state actors at its forefront – is undergoing a rapid change in nature, profoundly altering the course of international politics, though not always in unidirectional and linear ways (see Chapter 1 for a thorough discussion). In an international setting dogged by uncertainty and mounting competition, the possession of cutting-edge technology seems able to produce an even greater strategic advantage than in the past. Meanwhile, knowledge, technologies and manufactured products that were once exclusively state owned are nowadays proving accessible to a whole range of individuals and groups – led by multinational and hi-tech colossi like Amazon, Google and Huawei – whose influence is the subject of all kinds of speculation. The spread of technology is opening up hitherto unimaginable opportunities for non-state actors, and this is thought by some to indicate that states are henceforth destined to cede their primacy, even in terms of their ability to promote social progress via original know-how and innovation of their own making (Clarke and Lee, 2018). While the state’s relationship to society is patently changing, and the role of public institutions in the production of innovative knowledge is in turn taking on a different guise, arguing that such a change is an index of state power being eroded is probably incorrect, or at least an undue simplification of a far more complex and subtle phenomenon. Although the idea that the protagonists of world economic growth are also bearers of technological change
holds some fascination and a basis of truth – inasmuch as economic power and profit seeking do fuel research towards new products and solutions – a glance at the history of the last few centuries reveals a less linear picture. In the first place, the redistribution of roles and changing relations between public and private actors when it comes to technological innovation does not necessarily imply that states are losing their prerogatives or power. It is one thing to promote even radical forms of technological innovation, and quite another to take it over or, conversely, to grant private entities a share in it. In short, we should bear in mind that, though we have witnessed a huge diffusion of certain technologies – for instance, the civil sector taking over from the military in robotics and self-driving vehicles – in most cases this has happened because state power has decided that such a transition is functional and advantageous to its own plans for social organization. Second, even if we are facing an upturn in business dynamism, why should that correspond with a proportional contraction in the state’s role? To see the innovation phenomenon as a zero-sum game between state and non-state actors is a distorted viewpoint. Although at this stage state action might seem less effective or less evident than in the past, that does not mean there is a direct relation to the behaviour of, or the role being played by, private agencies. In focusing on private dynamism as though it were the prime feature of innovation itself, one is in danger of obscuring how policies designed to make some behaviours profitable boost the advancement of technology and its social dissemination. In the energy field, for example, regulation and taxation have been powerful factors in transforming the market and helping major innovations in wind and solar power technology to find a foothold in society. The issue becomes still clearer if we look at pure research. Although that rarely produces directly marketable innovations, the guiding hand of the state in allocating the taxpayer’s money to some lines of research rather than others has in many cases brought into being technology that would later change the world. Nowadays, for example, there is talk of space tourism and of private enterprise spearheading it – as in the case of SpaceX – but if the American government more than half a century ago had not sought to conquer space and invested the taxpayers’ money in training engineers, technicians and astronauts, today’s developments would have been unthinkable (Foray, Mowery and Nelson, 2012). As mentioned in the previous section, the emerging role of private enterprise does not stand in opposition to, but should be seen as complementary to, that of the public side. In a complex world where even space exploration has by now gone way beyond the pioneering phase, relations between public and private agencies are becoming more intricate. Precisely by virtue of technological dissemination, the private sector is providing a different, and perhaps unprecedentedly large, contribution. In this new functional differentiation of
roles, private enterprise is working alongside the state on ambitious projects that call for large-scale collaborative use of capital and know-how, with diversification of the funding sources and sharing of the risks. Such a view of the state–private tandem should incidentally prompt us to be on our guard against some of the more popular assumptions about technological innovation – rarely held, be it said, by those with a truly scientific background. Many are suggesting that the state–private relationship is turning upside down, that the supposedly top-down process of innovating is somehow becoming a bottom-up phenomenon at a more or less similar rate everywhere. The diffusion of technology like the Internet and mobile phones, and the resultant perception of an increasingly interconnected world, has given the impression that such a phenomenon is generalized. On closer inspection, however, the global distribution of technological innovation is neither quantitatively nor qualitatively uniform. Even in areas and countries that are held to be technologically advanced, we shall see in the course of this chapter that significant differences exist. That amid the global spread of technology there should remain a marked discrepancy between regions and countries supposedly equal in development would seem to boil down to a differing role of the state, which thus continues to be pivotal. For instance, in Mazzucato’s view, for any objective understanding of the real role of the state concerning R&D investment we should begin by distinguishing: (1) the type of investment; (2) how wide-ranging it is in scope and the kind of knowledge involved in the research; and (3) the degree of two-way contamination between public and private financing (Mazzucato, 2017, pp. 1–4). Against those who focus on the ability of private enterprise to turn innovation into profit, Mazzucato argues that the state still preserves a fundamental role. It acts as the risk-taker, underwriting any losses from financing pure research and long-term programmes, while the private sector – for which a sustainable investment is nearly always tied to a tighter view of profit and a shorter return schedule – still seems unable or unwilling to shoulder such risks and costs. Although less visibly, since they are not involved in manufacturing the products and services encapsulating technological innovation, states still hold a primary role in shaping and spurring on the transformative processes behind the spread of technologies. For, while public investment sets the general conditions for innovation to take place, it is often the corrective policies of regulation and taxation that enable the private sector to decide with reasonable confidence whether it is profitable to invest in technological innovation and what margin can be expected from a technological type of business. Clearly, this is a key topic on which positions are divided (Dyba, 2016; Goldfarb and Henrekson, 2003; Mazzucato, 2017). Perhaps, though, it may pay to investigate the case of Tesla Motors, one of the brands connected to Elon Musk. The case of this firm, synonymous with both technological
innovation and environmental sustainability, may throw some light on the extent of surviving state power when it comes to technological innovation. Tesla is undoubtedly an innovative company (Forbes rated it top for innovation in 2015); it uses the latest technology and grounds its success on a synergy of both traditional car firms like Daimler, Toyota, Freightliner and Lotus, and hi-tech companies like Panasonic. But to ascribe the brand’s success (2015 sales of the Tesla Model S topped 100 000) exclusively or mainly to dynamic private enterprise divorced from any role by the American federal government would be a mistake. At the delicate start-up stage in 2009, when it launched its first vehicle, Tesla benefited from US$500 million in funding from the federal Department of Energy (DOE), at a time when Daimler was throwing in US$50 million in aid to the Musk cause. The difference is clear-cut. On top of that contribution there were further state incentives to the tune of US$280 million (Mises Institute, 2018). In the case of SpaceX, the federal government’s contribution amounted to US$5 billion. Without belittling the private response to the twenty-first-century technological challenge, such figures confirm that the primacy of the state is still barely dented. Compared with the days of the Apollo space programme, the private sector has played an undeniably more active and visible managerial role. However, when it comes to defining strategic objectives and controlling the allocation of the investments needed for the bolder technological projects to take off, states are maintaining their gatekeeper role. We have seen that the claim that they are losing their primacy is without foundation from the quantitative angle. Rather, the question to be asked is: what kind of relationship is maturing between political power and the financial world where technological innovation is concerned, and will it lead to widespread progress, or will the notable contribution from society’s pocket tend to line the pockets of a few elites? For while the Western world has its champions, Asia is in no way behind. The case of Huawei, world leader in 5G networks, is emblematic, considering the public support its founder has procured at crucial moments of its existence (Yap, 2019). In short, wherever technological innovation assumes strategic importance, the relationship between public and private grows more complicated, while the question of whether state primacy will endure or the business world will effect a revolution seems, overall, less relevant than the social effects and significance that the state–private balance takes on from one region to the next, not to mention the impact it produces on international politics.
3 THE GEOGRAPHY OF PATENTS AND R&D: THE RISE OF ASIA
Innovation and its distribution in certain countries – or concentration in certain areas, depending on the angle of observation – is one of the chief factors from
which states, or groups thereof, derive power via transformative processes, as in the European Union. In the last 15 years, the pattern of innovation distribution around the globe has radically changed, calling for more detailed examination from one region to another. As mentioned above, although data on patents and R&D investments (two standard indicators for technological innovation) are retrospective, they do provide preliminary information as to where technological development will take place in the future – though extrapolation is naturally a complex process. Patents, as key tools to protect intellectual property (IP), have figured increasingly in the debate over countries’ approaches towards innovation and their capabilities. In the United States, the subject has been addressed by a series of analyses designed to understand the role of other countries (in particular, China) that have also adopted a plurality of strategies (some of them allegedly illegal, such as the subtraction of trade secrets and IP theft) to advance their standing in the global innovation race. In 2013, the ad hoc Commission on the Theft of American Intellectual Property highlighted the importance of tools to protect IP and thus nurture the American economic eco-system (Moretti, 2012; Oswald and Pagnattaro, 2015), especially highly innovative start-ups that might be more vulnerable to action by foreign countries and industries. The so-called FAGA group (Facebook, Apple, Google, Amazon) have also shown their interest in policies to guarantee IP when operating in foreign countries. Action designed to protect IP has thus become a cornerstone of American foreign economic policy. The Trans-Pacific Partnership (TPP), for instance, contained several IP-related provisions that were meant to offer guarantees to large American companies (Akhtar, Wong and Fergusson, 2020; Okediji, 2004). Despite the unfortunate destiny of that complex scheme after the conclusion of Obama’s second term, the attention devoted to patents and international standard setting will increase as countries proceed to ‘securitize IP’, defining it as a key factor in their long-term economic prosperity and expanding the range of tools to protect and promote it (Breznitz and Murphree, 2013). Such analyses – and concerns – at least partly mirror the changing geography of innovation, as illustrated by the indicators selected. Asian countries have increased their margin over other continents in patent applications. North American and Caribbean countries have increasingly applied for patents too, while Europe remains stable at lower levels. Strategic projects funded by the European Commission such as Horizon 2020, as well as the EU strategies on key enabling technologies (KETs), echo the European Union’s recognition that further cooperation is needed in R&D to maintain and possibly increase its research and innovation prospects. Latin America and Africa have extremely low levels of R&D investment, and, as with innovation in general, this lack is a long-term trend. Figure 2.1 provides a graph of patent numbers in China, the European Union, Japan, the Republic of Korea and the United States.
Figure 2.1  Total patents: major countries: 1980–2018. Source: World Bank (n.d.).
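For readers who wish to reproduce series like those in Figure 2.1, the sketch below pulls resident patent applications from the World Bank’s open data API. It is a minimal illustration, not the authors’ own procedure: the chapter cites World Bank (n.d.) without naming an exact series, so the indicator code IP.PAT.RESD (patent applications, residents), the v2 REST endpoint and the EUU aggregate code for the European Union are assumptions taken from the World Bank’s public API documentation.

```python
import requests
import pandas as pd
import matplotlib.pyplot as plt

COUNTRIES = "CHN;USA;JPN;KOR;EUU"   # China, USA, Japan, Rep. of Korea, EU aggregate
INDICATOR = "IP.PAT.RESD"           # assumed: patent applications by residents
URL = (f"https://api.worldbank.org/v2/country/{COUNTRIES}"
       f"/indicator/{INDICATOR}?format=json&date=1980:2018&per_page=2000")

# The API returns a two-element JSON array: [paging metadata, records]
meta, records = requests.get(URL, timeout=30).json()

df = pd.DataFrame(
    {"country": r["country"]["value"], "year": int(r["date"]), "patents": r["value"]}
    for r in records
    if r["value"] is not None
)

# One series per country, as in Figure 2.1
pivot = df.pivot(index="year", columns="country", values="patents").sort_index()
pivot.plot(title="Patent applications, residents (1980-2018)")
plt.ylabel("Applications per year")
plt.show()
```

Swapping the indicator code (for example, IP.PAT.NRES for non-resident applications) and the country list would, under the same assumptions, reproduce the series behind Figure 2.2 as well.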
Focusing on country-level data, China’s patent filings have soared since 2000, overtaking the USA in 2011. Japan lost its primacy in 2006 and now ranks third. Among the so-called BRICS (Brazil, Russia, India, China and South Africa), Brazil – often heralded as a new powerhouse in innovation – files fewer than 8000 applications, half those of India and about 1.3 per cent of China’s. As for the European context, it is far from homogeneous: Germany filed about three times as many applications as the UK and almost four times those of France in 2012. By contrast, countries such as Italy and Spain are far behind the regional leaders. Figure 2.2 sums up the data for the five main European countries (in terms of GDP). Analysis of patents granted by World Intellectual Property Organization (WIPO) offices shows regional patterns to be substantially similar to those illustrated for the application process. Asia is well above other continents, and Africa and Latin America are at the bottom. North America has grown irregularly but has doubled the number of patents granted since 1990. The number of patents granted in the EU has considerably decreased, notwithstanding the vitality of some European firms. Despite the regional trends described above, some remarkable differences among countries still exist and may be summarized as follows: (1) Japan and the USA retain the two leading positions on a global scale; (2) China is quickly closing the gap; (3) the European countries lag behind, though countries such as France are relatively effective
in translating applications into grants; (4) in this picture, Brazil and India score a relatively low number of grants, and have shown a relative decrease in grants over time – more noticeable in Brazil’s case.
Figure 2.2  Patents among major European countries: 1980–2018. Source: World Bank (n.d.).
Further, more qualitative analysis is also useful. In 2016, the China National Intellectual Property Administration (CNIPA) recorded 42.8 per cent of the world’s patent applications: in numerical terms, over 1 300 000 applications, that is, 120 per cent more than the USA and 320 per cent more than Japan (China Power, 2016). These are staggering figures, both in absolute terms and in comparison with other technological leader nations renowned for their innovation and far more mature than China on the question of regulation and protection of intellectual property. More specifically, what needs to be analysed is whether such a domestic explosion of applications corresponds to a similar leap forward by China in the international arena. The CNIPA, which expresses Chinese state policy, equates the concept of patent with innovativeness. This creates an incentive to apply – above all because companies certified as innovative can obtain special tax benefits and state aid. The result is that intellectual property is safeguarded even in cases that involve not so much genuine innovation as improvements and developments of already existing products. If one looks at the Chinese situation, the innovations are indeed rarely radical. The patents phenomenon would thus seem to hold
largely domestic value, without bearing all that much on the country’s international competitiveness, at least in the short term. Equating patent with innovation, as in China, where it ensures applicant firms special advantages, translates on the one hand into a policy of state aid/protection for national companies, whilst on the other hand compelling them to conform, or at least comply more docilely, with a series of regulations that will be vital in the long run if ‘Made in China’ is ever to be compared in quality to Western and Japanese products. This hypothesis is actually confirmed by data given out by WIPO and the Organisation for Economic Co-operation and Development (OECD). Ninety-six per cent of Chinese applications are of the domestic kind, while the United States and Japan apply overseas in about 43 per cent of cases. In terms of the triadic patent system (Japan Patent Office, United States Patent and Trademark Office, and European Patent Office), the United States obtains about 10 000 patents a year, while China does not reach 2000. For the moment, in short, Beijing does not set the rules of the game on the technology markets, nor does it seem able to meet those of the leader nations (USA, Japan and Europe). It is plausible that the CNIPA policy is aiming at a long-term change in the patent system: from triadic – based on the standards of Europe, the USA and Japan – to quadripartite. In such a light, the proliferation of domestic Chinese patents should be seen not as a sign of innovativeness (or of the country’s current power), but as a political lever designed to work on the international patent rules, its ultimate purpose being to upgrade the future value of Chinese technological products. By the Bloomberg ratings, Chinese patents tend to hold much lower ‘retention power’ than Western and Japanese patents. Compared with the staggering application figures, their impact in transforming reality is very often quite limited (Yilun Chen, 2018). Add to this another consideration: among the biggest hi-tech firms, ‘the two Chinese telecom giants Huawei and ZTE have been the top PCT [Patent Cooperation Treaty] applicants since 2015, followed by Intel, Mitsubishi, and Qualcomm… Furthermore, Chinese spending on research and development (R&D) has surged significantly. In 2016, China’s nominal R&D expenditure surpassed that of Japan, Germany, and South Korea combined’ (China Power, 2016). Returning to the subject of innovation and its geographic distribution, what does seem to be changing is another key indicator: R&D expenditure as a percentage of gross domestic product (GDP). In this case, Japan consistently led the group until 2011, with the EU ranking second. The euro countries and the USA spend around 2 per cent of their GDP on R&D, with China converging towards that level. India remains stable below 1 per cent of GDP, although the figure is necessarily biased by the size and structure of the country: most R&D is concentrated in a few areas alone (the state of Maharashtra and the city of Bangalore) with specific information and communications technology (ICT)-related specializations. The Republic of
Korea (RoK) has experienced a leap forward in the past two decades: by 2011 it had moved from 2 per cent to 3.7 per cent of GDP devoted to R&D, surpassing Japan in relative terms. Figure 2.3 shows R&D expenditure as a share of GDP for China, the EU, Japan, RoK and the United States.
[Figure 2.3: Research and development expenditure (% of GDP) by country, 1996–2017. Source: World Bank (n.d.).]
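For readers wishing to recreate the series behind Figure 2.3, a minimal Python sketch along the following lines can query the World Bank open data API directly. It is our illustration, not part of the chapter: the indicator code GB.XPD.RSDV.GD.ZS (R&D expenditure, % of GDP) and the aggregate code EUU for the European Union are assumptions about the API's current naming.

```python
# Minimal sketch: fetch R&D expenditure (% of GDP) from World Bank open data.
# Indicator and country codes are assumptions about the current API.
import requests

INDICATOR = "GB.XPD.RSDV.GD.ZS"           # R&D expenditure (% of GDP)
COUNTRIES = "CHN;EUU;JPN;KOR;USA"         # China, EU, Japan, RoK, USA
URL = (f"https://api.worldbank.org/v2/country/{COUNTRIES}"
       f"/indicator/{INDICATOR}?format=json&date=1996:2017&per_page=200")

metadata, records = requests.get(URL, timeout=30).json()

# Pivot the flat record list into {country: {year: value}}.
series = {}
for rec in records:
    if rec["value"] is not None:
        series.setdefault(rec["country"]["value"], {})[int(rec["date"])] = rec["value"]

for country, by_year in sorted(series.items()):
    latest = max(by_year)
    print(f"{country}: {by_year[latest]:.2f} per cent of GDP on R&D in {latest}")
```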
Taking into account complementary indicators such as the number of researchers working in R&D – defined as 'professionals engaged in the conception or creation of new knowledge, products, processes, methods, or systems and in the management of the projects concerned' (OECD, 2015) – the data correlate with R&D expenditure. Japan and the USA lead the group. By contrast, owing to wider population dynamics, China's value has not increased as steadily as in other related statistics. Finally, Korea – not covered by the World Bank in this series – is similar in numbers to Japan. The peculiarity of Seoul's performance, however, is its correlation with fast socio-economic development, so that the country should soon achieve the internal conditions needed to outpace its regional counterparts. Figure 2.4 shows the number of researchers by country for China, the European Union, Japan, RoK and the United States.

This preliminary survey of technological change shows that the evolution of global trends in R&D is subject to competing forces that might either accelerate or delay China's catching up with and overtaking the USA. First, China is indeed planning to invest heavily in R&D. The idea that China is overtaking the USA and Europe in terms of R&D investments is also confirmed by the study by the United Nations Conference on Trade and Development (UNCTAD) on the tendency of multinational corporations (MNCs) to set up R&D centres in China rather than in the USA (UNCTAD, 2006). Attempting to reverse this trend, Western decision makers have made this a key point of focus.
The Plan for Science and Innovation, launched by the United States in 2010, provided that the federal budget devoted to funding R&D should double over ten years (2006–16).
[Figure 2.4: Researchers in R&D (per million people) by country, 1996–2017. Source: World Bank (n.d.).]
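The same sketch reproduces the series behind Figure 2.4 if the indicator code is swapped for SP.POP.SCIE.RD.P6 (researchers in R&D per million people) – again, an assumption about the World Bank API's indicator naming rather than information from the chapter.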
The timeline, however, was subsequently relaxed, and China now seems not far from catching up with the USA. Strategic projects funded by the European Commission, such as Horizon 2020, as well as the EU strategies on KETs, reflect the European Union's recognition that further cooperation in R&D is needed to maintain and possibly improve its research and innovation prospects.

To conclude, this preliminary mapping of technological change by region illustrates that: (1) leader countries in R&D and innovation do not change at a similar pace; (2) investments in R&D and patent applications/grants are largely limited to the USA, Europe (with significant limitations in the Mediterranean area) and only four Asian countries (Japan, China, Taiwan and South Korea); (3) even though the USA, Japan and Europe account for two-thirds of the world's innovative enterprises, these countries are challenged both externally and internally; (4) Asia is the most interesting scenario to investigate, for its dynamism and the unpredictable consequences of introducing some form of welfare state in China.
4 KEY SECTORS IN TECHNOLOGICAL INNOVATION: TRENDS AND SCENARIOS
Having mapped the distribution of technology by country and region, a second and no less important point bearing on innovation as a conditioner of international relations is the poles around which that process is catalysing – in other words, the key sectors in which technological transformation is taking place.

The telecommunications revolution that ought to materialize when 5G networks arrive seems to have been heralded by a classic geopolitical showdown between the United States and China. The Trump administration was quick to use its veto power over foreign acquisitions to obstruct China – the Qualcomm case is emblematic. For its part, the Beijing government has begun a season of diplomatic activism, using a powerful economic lever to influence countries reluctant to enter agreements with the Chinese champion Huawei. If we simply look at the direct advantages a state would gain from a national company achieving a controlling position over communications infrastructure, we fail to do justice to the issue. Bound up with 5G and the formidable transfer speeds entailed by such technology are the questions of self-drive vehicles and the so-called Internet of Things – two more key sectors where technological innovation would seem to have crucial implications for the future course of international politics. Last, there is the issue of artificial intelligence (AI), on which national governments have proved to differ in terms both of investment and of regulatory measures.

Along with these issues, the European Parliament and other international agencies like the United Nations have identified energy as another field of innovatory impact – especially technology linked to decarbonization and to latest-generation accumulators (European Parliamentary Research Service, Global Trends Unit, 2017). However, the impact of such innovations on power relations among states appears less drastic than the questions we are about to examine.

Automation, machine learning and control of the communications infrastructure are ambivalent issues. On the one hand, automation – and less human involvement in crucial social processes like manufacturing and defence – is synonymous with technical progress. On the other hand, changing the human–machine relationship and the roles of each in producing certain outcomes poses an existential challenge to the state itself. It is essential for the survival and future prosperity of states that they be able to adapt and rethink their role vis-à-vis society; that they continue to fulfil certain requirements, from maintaining employment levels and decent working hours and conditions, down to security seen as a state monopoly on legitimate violence (for a review see Valigi, 2014). However, in a set-up where the technological
challenge can only be met by the joint involvement of many public and private entities, states are forced to wonder just how far control over their own ICT infrastructure can be handed over to third parties, possibly sited in a potentially hostile country.

Finally, progress in AI is bound up with the limitations imposed by privacy requirements – an area that may be due for a rethink. Developing new and more complicated algorithms is tied to the analysis of data provided by consumers. Hence, progress in this field, and the manner in which it comes, are strictly connected to the role of the state and its policies on regulation.

As for future trends, the first consideration concerns the future use of the Web. Cyber warfare and cyber attacks have already been witnessed during elections in the USA and France and may reasonably be expected to increase in future. According to the Global Risk Report 2020 drawn up by the World Economic Forum, means of communication, from huge public systems down to smartphones, will increasingly be 'weaponized': they will become potential tools for launching information attacks (Dor, 2020). The Internet itself will not be jeopardized, but far greater resources will be mustered on the defence front. Both public institutions and private companies are progressively establishing cyber security departments, appointing dedicated officers, and recruiting personnel trained in ICT defence (European Parliamentary Research Service, Global Trends Unit, 2017; OECD, 2016).

In parallel with this analysis, which applies to all advanced societies, the potential role of self-drive or unmanned autonomous vehicle (UAV) technology is attracting the attention of experts and scholars. Google, Uber and Baidu are investing on a massive scale, banking on the commercial application of such technology. Clearly the change will not occur from one day to the next; however, 2025 is being forecast as the deadline for the development of fully autonomous vehicles, and we may reasonably expect to be faced with widespread technology of this kind within a span of 20 years (by 2035). Here again, the transition process has shifted from the military to the civil/commercial setting, and the two countries chiefly involved in UAV development are the United States and China. Much of the development potential for such vehicles will depend on their being integrated with town traffic. UAV technology stems from the military setting and has already increased human safety in certain hazardous operations – mine clearing, for example. But the issue of danger still applies to civil operations. Integration of unmanned autonomous vehicles, infrastructures and humans is a goal that is still far off and fraught with problems – in the first place because human behaviour is unpredictable. The regulatory side (above all, the management of already complex town traffic) seems no less crucial than the technology itself.
A few successful experiments have been recorded in circumscribed urban areas like Helsinki and Singapore (European Parliamentary Research Service, Global Trends Unit, 2017, p. 29), while projections of UAV sales made by Oxford Analytica suggest that after 2030 traditional vehicle sales should diminish in favour of self-drive vehicles. Much of the outcome of this transformation or hybridization (it is still too early to know which) will depend on the investment plans governments intend to implement by supporting R&D on UAVs or making them part of broader plans for public transport. Whatever the impetus likely to come from dynamic private enterprise, states are the appointed gatekeepers for the degree and manner of integrating UAVs into public infrastructures, steering a middle course between progress and social order. We are hence unlikely to see such technology spreading uniformly throughout the various regions of the world; more probably, inequality will increase not only between countries but even within the same country.

Though automation, machine learning and AI are distinct subjects from an analytical and definitional standpoint, they tend to be lumped together quite considerably. When one talks of automation and AI, it must first be made clear that such phenomena have been in progress industrially for decades; trends and projections should therefore be read diachronically. The real breakthrough seems likely to come when automation combines with AI, a phenomenon that could spur new types of industry and progress, giving countries that possess the technology a distinct edge over competitors and a powerful factor of control over allies. Frey and Osborne forecast that over 40 per cent of jobs in the USA will be computerized in the next 20 years, while 35 per cent of UK jobs will disappear as a result of automation (Frey and Osborne, 2013). But figures apart, the most interesting aspect is the kind of work expected to be affected by the phenomenon.

As for the risks associated with automation, the alert sounded by the OECD points in a similar direction. In countries like Sweden and Norway, the jobs classified as at high risk of automation range from 5 to 10 per cent; in the Baltic countries, Greece, Slovenia and Spain the figure is from 20 to 30 per cent; while in Slovakia it reaches 34 per cent (Nedelkoska and Quintini, 2018, Figure 2.14). In advanced capitalist countries, technological change may be expected to transform the labour market, with some jobs disappearing and new ones appearing. What gives most food for thought, however, is that such changes are not expected to involve medium-low roles and jobs requiring little know-how, but higher positions where the human input is measured in qualitative terms. In that case there would be significant repercussions on decision making, on the length of the chain of command and on the power structure inside a firm, as well as in certain manufacturing sectors.
The medium-term risk is that the combined effect of improving many education systems and spreading AI technology may be a mismatch between the greater supply of qualified staff and the number of jobs available. If such tasks are deputed to machines, the effect will tend to be a worsening of salary levels all round and worker dissatisfaction, for people may increasingly be forced to accept roles beneath their ability and education.

Given the prospect of AI developing, the relations between big private players and public institutions, and the impact on society, the subject of governance once more appears crucial. In a situation where a few highly qualified companies are of niche or even sole importance for AI development, the keystone for any progress is data – immense volumes of data – able to produce feedback detecting as many kinds of user as possible. Such resources are currently known to be in the hands of a quite limited group – Google, Amazon, Facebook, Microsoft and IBM – which are already working in partnership. It is equally important how we handle the joint complexities of developing a technology whose positive effects are as yet unknown, safeguarding social order/control, and coping with permeation/intrusion by non-state entities into people's lives. The question has evident ethical and legal, more than technical, implications. Devising a shared international statute for AI might be a first step towards safeguarding certain liberties won by sweat and suffering in the process that led to democracy in some parts of the globe. The question of public or private handling of data is likewise bound up with the role of the state: this may need to be rethought in future – above all, in a society where the services government is expected to provide are ever more complex, calling for private participation and data access/management that often impinges on people's privacy.
5 AGENCY IN TECHNOLOGICAL INNOVATION: PRIVATE ACTORS AND STATE-OWNED ENTERPRISES
The process of technological transformation involves a significant change in agency among producers and consumers of technology. In the United States, the growth of industrial R&D investments has largely outpaced the growth of federal investments (Figure 2.5). Federal funds nonetheless remain important to the funding of basic research, as they provide the context in which private investments then flourish: stable public investment in research gives private enterprises and academia the means to create innovative and exploitable goods and services. The role of the state as entrepreneur (Block and Keller, 2011; Holland, 1972; Jacobs and Mazzucato, 2016; Mazzucato, 2011) appears particularly salient in markets that feature: (1) high entry barriers; (2) a high concentration of capital; and (3) risks too high for private actors to sustain.
[Figure 2.5: R&D expenditure by sector, United States, 1953–2018 ($ millions). Source: National Science Foundation (n.d.).]
The empirical evidence on the emergence of broadly used technologies seems to confirm that their origin can often be traced back to the public sector, in particular the defence and military fields (Bresnahan and Trajtenberg, 1995). The link between public support and specific areas of R&D has been shown to be strong, but its relative impact on the final stages of commercial technologies remains unclear (David, Hall and Toole, 2000, p. 525). This is especially evident when public intervention takes various forms and addresses very diverse goods and products.

In general, the increasing economic and technological interdependency among countries has the potential to create stability in the international system. However, the transition to a multipolar and multidimensional world has created a certain degree of instability, and the impact of emerging technology on this transformative process is uncertain. Disruptive technologies are now available to more actors than ever before, while means that were previously available only to governments will become ever more widespread among non-state actors, criminals and terrorist organizations. Unprecedented challenges in technology governance are arising as a consequence. In this scenario, ICTs are one of the most prominent examples, as they provide the material tools through which actors organize and act in the cyber domain. They also reduce access costs and facilitate the exchange of information and knowledge. The process is driven by a dual movement that connects more explicit commercial applications of technology with the increasing size of markets.
While the process has been made visible by personal computers and mobile ICT technologies, it is likely to continue with the increasing diffusion of AI-related and robotic devices (Duhamel et al., 2013). Moreover, ICTs in association with new production techniques such as 3D printing have the potential to change the identity of producers and users of technology dramatically. Organized groups, including non-state actors, will be able to acquire skills in the production of tools and resources, allowing them to compete and achieve objectives in certain domains at a level comparable with nation states.

Business plays a major role in the funding and undertaking of R&D. The USA ranks first (hosting 40 per cent of the most innovative companies), while China still lags behind (6 per cent). This difference is due in part to the fact that company headquarters – in particular those of MNCs – are located in their country of origin, notwithstanding the delocalization and international outsourcing of production and the internationalization of sales. Moreover, in China, the EU and Russia, companies are mostly state-owned enterprises (SOEs) – in particular in the defence and energy sectors – and some of them are 'national champions' that invest heavily in R&D. This at least seems to contradict the idea that the control of national governments over technological change is declining. The figures continue to show that public funding is still the primary source of technological innovation, and this is even truer in the case of strategic sectors such as military technology, energy and infrastructure.

General trends in R&D illustrate that the most innovative companies – by money spent on R&D – are located in the USA, Japan and Europe, which account for more than 90 per cent of total R&D expenditure. A similar pattern applies to the world distribution of the largest companies by R&D, despite some minor exceptions such as India's Tata Motors and the Chinese company PetroChina. Although several of the largest companies operate in ICT (software, Internet, computers and telecommunications), the majority of these industries are concentrated in the fields of pharmaceuticals and biotechnology and are located in Western countries. These sectors indeed require high-density investments in industrial assets and human capital, the ability to observe specific security procedures, and several related technologies protected by Western patents. In conclusion, American and European predominance in this field is not accidental, but the result of a combination of mature industrial know-how, constant investments in R&D, plant and human capital, and effective policies protecting intellectual property.
6 A CONCLUSION: THE NEW MAP OF TECHNOLOGICAL INNOVATION AND INTERNATIONAL POLITICS
The technology mapping exercise outlined in the first part of this chapter illustrates that specific analysis – taking into account technological changes and trends by industrial sector, as well as innovation rankings among countries and regions in the light of their macro-economic changes – reveals a rising security threat. It also yields the following key findings.

Countries leading the field in R&D and innovation do not change at a similar pace. Investments in R&D and patent applications/grants are concentrated in the USA, Europe (with significant limitations in the Mediterranean area/Southern Europe) and four Asian countries (Japan, China, Taiwan and RoK). In China, due to the size of its internal market and population, social policies with a direct impact on the labour market and productivity have the potential to bring about consequences comparable to a postmodern industrial revolution that would not be confined to China itself or the Asian region. In terms of the implications for Western countries, Asia is one of the most interesting scenarios to investigate, for both its dynamism and the unpredictable consequences of new types of welfare state being introduced.

Asia, the USA, Japan and Europe, which represent more than two-thirds of the world's innovative enterprises, are challenged both externally (radicalism, terrorism, authoritarian regimes, transnational criminal groups) and internally (crisis of democracy and the welfare state, economic and financial crisis, lack of shared integration models, especially in the EU). How the most innovative countries will respond to these challenges is not yet clear. In the case of the USA and Europe and their future relationship, technology itself does not seem problematic: among Western countries, military interoperability and similar technological standards are taken for granted. However, major and uncertain implications for the future of partnerships such as the North Atlantic Treaty Organization (NATO) arise from a combination of poor communications and a lack of shared values among the parties. The manufacture, deployment and export by the United States of dual-use technologies such as drones, and the lack of regulation in the cyber domain, are creating a background that in the long term could generate negative effects on the coherence and effectiveness of NATO.

How to deal with the proliferation of advanced technologies to non-state actors is still not clear. It is evident, however, that the issue is not limited to the military domain. The use of AI has the potential to bypass and/or enhance processes typically carried out by humans. The use of AI within cyber warfare is a topic for specific and further discussion.
Of mounting significance even in the military domain is the topic of big data (BD). Due to its relationship with the global circulation of technology, materials, information technologies, robotics and AI, BD is among the most effective defensive means when it comes to protecting networks and computer infrastructures from cyber attacks. Likewise, it represents an immense advantage for intelligence and perception management – as shown by the case of the 2016 US presidential elections. The impact of data mining on all security domains is self-evident, just as BD's success relates to its merits as a problem-solving tool. The opportunities for application to essentially all realms of human knowledge and research are formidable and are only beginning to be explored.

As in all scientific domains, the quality of the human element is also crucial in BD. It is not only a matter of training enough data scientists; for public institutions it is a matter of attracting and retaining such scientists. All the dimensions of technological change and innovation investigated in this chapter will evidently require smart and knowledgeable individuals to develop further and original applications. The future challenge for states thus relates to recruitment: how to attract and retain inventive individuals through various incentives. Mindsets, positions and career opportunities previously unheard of in many organizations will need to be designed and implemented. This mindset change will require a major reframing of human resources departments, and the persons who will have to coordinate and manage such individuals will have to be 'different' themselves and (almost) equally innovative.

Concerning scenarios where technological innovation might be of assistance in the next 15–20 years, a few concluding remarks are in order, tying up the facts and ideas raised in this chapter. Although technology is spreading, its diffusion is neither linear nor progressive, nor is it uniform across the globe. The potential of technological innovation to improve current living standards comes up against a distribution issue: some people will be denied access. This may create a broad swath of underprivileged people and increase social inequality. Given the difficulty of synchronizing technical progress with social policies making it available to most people, there arises a problem of order within states: how to keep it and – no less important – how technology, in itself a neutral factor, may bear on the relation between the state and the people.

Besides affecting domestic change and adaptation, technological innovation bears on the international arena, altering its geometry and once again forcing states to rethink roles and partnerships. Though American military primacy remains, Washington's technological primacy is less marked in some sectors than it used to be. In the case of 5G – which President Donald Trump framed as a confrontation with China, part of a new Cold War – not only is the United States falling behind, but two of the countries potentially able to sustain that competition are Japan and Germany, both stable US allies.
The issue is thus more complicated than it looks and cannot be solved by simple recourse to invoking an external enemy. While it is important to regain ground vis-à-vis Beijing, it is no less important for Washington to maintain effective leverage over the medium-sized friendly powers (Valigi, 2017), lest they enter agreements that diminish the USA's relative power, or indirectly damage its primacy in other areas to do with communications infrastructure. Likewise, the so-called 'patent war', in which mushrooming Chinese patents suggested the Asian power was raising the competitiveness bar in R&D, seems to be the effect of internal mechanisms required of Chinese companies seeking public funding. Only rarely do Chinese applications turn into international patents of the kind that would give the Asian giant a primacy that still belongs to the Western powers.

That fact prompts reflection on a more wide-reaching phenomenon: the regionalization of the international system and how it ties up with the subject of innovation. In a regionalized international system, the interpretation of certain data conventionally taken to indicate innovation – patent numbers – proves more complicated: partly for quantitative reasons, given the volume of the data, but also because of qualitative differences between one country and the next. Innovation as such involves the whole international system. But the fragmentation witnessed in this transition from American hegemony to a more complex set-up poses a challenge when it comes to comparing countries and areas. In terms of progress and its regulation by society, the data, and even the innovatory phenomena themselves, differ widely according to the social system they refer to. Likewise, the consequences – whether domestic or to do with one country's technological innovation affecting the whole system's power distribution – are proving very hard to analyse and understand. There is thus little likelihood of one univocal interpretation emerging.
REFERENCES

Akhtar, Shayerah I., Liana Wong and Ian F. Fergusson (2020), Intellectual Property Rights and International Trade, Congressional Research Service report, RL34292.
Basberg, Bjorn L. (1987), 'Patents and the measurement of technological change: a survey of the literature', Research Policy, 16(2–4), 131–41.
Block, Fred L. and Matthew R. Keller (eds) (2011), State of Innovation: The U.S. Government's Role in Technology Development, Boulder, CO: Paradigm Publishers.
Bresnahan, Timothy and Manuel Trajtenberg (1995), 'General purpose technologies "engines of growth"?', Journal of Econometrics, 65(1), 83–108.
Breznitz, Dan and Michael Murphree (2013), The Rise of China in Technology Standards: New Norms in Old Institutions, U.S.-China Economic and Security Review Commission report.
China Power (2016), 'Are patents indicative of Chinese innovation?', updated 20 May 2020, accessed 16 June 2020 at https://chinapower.csis.org/patents/.
Clarke, Thomas and Keun Lee (eds) (2018), Innovation in the Asia Pacific: From Manufacturing to Knowledge Economies, Singapore: Springer.
David, Paul A., Bronwyn H. Hall and Andrew Toole (2000), 'Is public R&D a complement or substitute for private R&D? A review of the econometric evidence', Research Policy, 29(4–5), 497–529.
Dor, Dorit (2020), 'These will be the main cybersecurity trends in 2020', weforum.org, 7 January, accessed 17 June 2020 at https://www.weforum.org/agenda/2020/01/these-will-be-the-main-cybersecurity-trends-in-2020/.
Duhamel, John, Maneeshika Madduri, Same Emaminehad et al. (2013), 'Rethink robotics – finding a market', Stanford Case Publisher, 204-2013-1, accessed 21 January 2021 at http://www.stanford.edu/class/ee204/Publications/Rethink%20Robotics%202013-204-1.pdf.
Dyba, Wojciech (2016), 'Mechanisms of knowledge flows in bottom-up and top-down cluster initiatives', Regional Studies, Regional Science, 3(1), 287–95.
European Parliamentary Research Service, Global Trends Unit (2017), Global Trends to 2035: Geo-politics and International Power, PE 603.263, Brussels: European Union.
Foray, Dominique, David C. Mowery and Richard R. Nelson (2012), 'Public R&D and social challenges: what lessons from mission R&D programs?', Research Policy, 41(4), 1697–702.
Frey, Carl B. and Michael Osborne (2013), 'The future of employment: how susceptible are jobs to computerisation?', working paper, Oxford Martin Programme on Technology and Employment, Oxford Martin School, University of Oxford.
Goldfarb, Brent and Magnus Henrekson (2003), 'Bottom-up versus top-down policies towards the commercialization of university intellectual property', Research Policy, 32(4), 639–58.
Holland, Stuart (ed.) (1972), The State as Entrepreneur, New York: International Art and Sciences Press.
Jacobs, Michael and Mariana Mazzucato (eds) (2016), Rethinking Capitalism: Economics and Policy for Sustainable and Inclusive Growth, Hoboken, NJ: John Wiley & Sons.
Kogan, Leonid, Dimitris Papanikolaou, Amit Seru and Noah Stoffman (2017), 'Technological innovation, resource allocation and growth', The Quarterly Journal of Economics, 132(2), 665–712.
Mazzucato, Mariana (2011), The Entrepreneurial State: Debunking Public vs. Private Sector Myths, London: Demos.
Mazzucato, Mariana (2017), 'Mission-oriented innovation policy: challenges and opportunities', Working Paper IIPP WP 2017-01, UCL Institute for Innovation and Public Purpose.
Mises Institute (2018), 'Elon Musk's taxpayer-funded gravy train', 12 December, accessed 16 June 2020 at https://mises.org/wire/elon-muskss-taxpayer-funded-gravy-train.
Moretti, Enrico (2012), The New Geography of Jobs, Boston, MA and New York: Houghton Mifflin Harcourt.
National Science Foundation (n.d.), 'National patterns of R&D resources', accessed 21 January 2021 at https://www.nsf.gov/statistics/natlpatterns/.
Nedelkoska, Ljubica and Glenda Quintini (2018), 'Automation, skills use and training', OECD Social, Employment and Migration Working Papers, No. 202, accessed 23 December 2020 at https://www.oecd-ilibrary.org/employment/automation-skills-use-and-training_2e2f4eea-en.
Okediji, Ruth L. (2004), 'The institutions of intellectual property: new trends in an old debate', Proceedings of the American Society of International Law, 98, 219–22.
Organisation for Economic Co-operation and Development (OECD) (2015), Frascati Manual, 7th edition, accessed 21 January 2021 at https://www.oecd.org/sti/frascati-manual-2015-9789264239012-en.htm.
Organisation for Economic Co-operation and Development (OECD) (2016), 'Skills for a digital world', policy brief, Paris: OECD Publishing.
Oswald, Lynda J. and Marisa Anne Pagnattaro (eds) (2015), Managing the Legal Nexus Between Intellectual Property and Employees: Domestic and Global Contexts, Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing.
United Nations Conference on Trade and Development (UNCTAD) (2006), Globalization of R&D and Developing Countries. Part II: Case Studies, UNCTAD/ITE/IIA/2005/6, Geneva: UNCTAD, accessed 16 June 2020 at http://unctad.org/en/docs/iteiia20056p2_en.pdf.
Valigi, Marco (2014), 'Stati, produzione politica e mercato della forza – potere politico, monopolio della violenza e private military firms', Rivista di Politica, No. 1, 143–54.
Valigi, Marco (2017), Le medie potenze: teoria e prassi in politica estera, Milan: Vita e Pensiero.
World Bank (n.d.), 'World Bank open data', accessed 21 January 2021 at https://data.worldbank.org.
Yap, Chuin-Wei (2019), 'State support helped fuel Huawei's global rise', The Wall Street Journal, 25 December, accessed 16 June 2020 at https://www.wsj.com/articles/state-support-helped-fuel-huaweis-global-rise-11577280736.
Yilun Chen, Lulu (2018), 'China claims more patents than any country – most are worthless', Bloomberg, 28 September, accessed 16 June 2020 at https://www.bloomberg.com/news/articles/2018-09-26/china-claims-more-patents-than-any-country-most-are-worthless.
3. Autonomy in weapons systems and its meaningful human control: a differentiated and prudential approach

Daniele Amoroso and Guglielmo Tamburrini

1 INTRODUCTION

In academic and diplomatic debates about so-called 'autonomous weapons systems' (AWS), a watchword has rapidly gained ground across the opinion spectrum: all weapons systems, including autonomous ones, should remain under human control. While references to the human element were already present in early documents on AWS (US Department of Defense [DoD], 2012, p. 2),1 the UK-based non-governmental organization (NGO) Article 36 must be credited with putting it at the centre of discussion by circulating, since 2013, a series of reports and policy papers making the case for establishing meaningful human control (MHC) over individual attacks as a legal requirement under international law (Article 36, 2013).2

Unlike the call for a pre-emptive ban on AWS, the 'meaningful human control' formula (and the like) was promptly met with interest by a substantial number of states. This response is explainable by a variety of converging reasons. To begin with, human control is an easily understandable concept that 'is accessible to a broad range of governments and publics regardless of their degree of technical knowledge': it therefore provides the international community with a 'common language for discussion' on AWS (United Nations Institute for Disarmament Research [UNIDIR], 2014, p. 3). A second feature contributing to the success of this formula is its constructive ambiguity (Crootof, 2016, pp. 58–60; Rosert, 2017, pp. 2–3), which may prove helpful in spanning the gap between the various positions expressed at the international level on the AWS issue. Indeed, as will be seen, a substantive case for an MHC requirement can be made on the basis of the existing legal framework. At the same time, however, new law is needed to detail the contents of MHC on weapons systems and make it operational.
Also, a binding instrument imposing a certain level of human control over weapons systems would count as a form of regulation and, at the same time, as 'a ban on full autonomy over certain (critical) functions of a weapons system' (Bhuta et al., 2016, p. 381). Third, and finally, the notion at hand allows one to shift the focus of the AWS debate from a (possibly unsolvable) definitional problem – that is, the precise drawing of boundaries between automation, machine autonomy and general artificial intelligence – to a normative problem – that is, what kinds and levels of human control ought to be exerted on weapons systems (Brehm, 2015, p. 5; UNIDIR, 2014, p. 4; United Kingdom, 2018, para. 8). Unlike the former, the latter normative problem appears more tractable and more likely to be successfully addressed through negotiations (Brehm, 2015, p. 5). From this perspective, it has correctly been underlined that MHC should not necessarily be seen as a 'solution'; rather, it indicates the right 'approach' for coping with the ethical and legal implications of autonomy in weapons systems (UNIDIR, 2014, p. 4).

Growing attention to the issue of human control emerges from the diplomatic talks that have been taking place in Geneva within the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) established by the state parties to the Convention on Certain Conventional Weapons (CCW). In addition to the fact that a number of participating state parties have explicitly endorsed the call for an MHC requirement (Austria, Brazil and Chile, 2018),3 the importance of this issue was underscored by most delegations taking part in the CCW proceedings, in both their official speeches and their working papers.4 Such a convergence of views is reflected in the 'Possible Guiding Principles' adopted by the GGE at its August 2018 meeting, and notably in Principle 2, which posits that 'Human responsibility for decisions on the use of lethal force must be retained' (GGE, 2018, para. 21(b)).

This is where international consensus stops, however. Two questions, in particular, have not yet received a generally agreed answer. On the one hand, it is debated what is (or should be) the legal basis of an MHC requirement under international law: is it a prescription already in force in international law? Or would it be better characterized as 'new law', to be enshrined in a treaty provision (and/or in an emerging customary norm)? On the other hand, and more significantly, it is far from settled – even among those favouring an MHC requirement – what its actual content should be or, to put it in clearer terms, what is normatively demanded to make human control over weapons systems truly 'meaningful'.

Against this background, the present contribution aims to provide tentative answers to these issues (or, at least, to pinpoint a way forward for analysing and solving them). After providing a brief overview of the ethical and legal reasons broadly supporting the MHC requirement (Section 2), in Section 3 we suggest that human control over weapons systems is required by a general principle of international law, inferred from current legal frameworks and confirmed by the international community lato sensu (that is, including global civil society).
In Section 4, we argue that, to be operative, the principle in question has to be concretized into specific rules distilling more detailed obligations incumbent upon states. In this respect, we distinguish between obligations of two kinds: (1) those aimed at ensuring the informed use of weapons by human operators; and (2) those concerning the level of human control over weapons systems. The actual content of both – and especially of obligations of the latter kind – depends upon the values one assigns to a number of variables relating to what mission the weapons system is entrusted with, where it is deployed and how it performs its tasks. A differentiated and context-sensitive approach will therefore be needed, albeit one grounded on the common ethical and legal basis described in Section 2. Such a principled approach will lay the groundwork for elaborating a set of 'bridge rules' establishing a legal nexus between these what/where/how variables and the kind of MHC required for each single use of a weapons system. Section 5 concludes.
2 THE ETHICAL AND LEGAL CASE IN FAVOUR OF THE MHC REQUIREMENT: AN OVERVIEW
Preliminary to a discussion of the ethical and legal motives broadly supporting an MHC requirement on AWS, a brief methodological remark is in order. We will not delve – unlike most analyses of AWS – into the definitional issue concerning what makes a weapons system 'autonomous'. For reasons explained elsewhere (Amoroso and Tamburrini, 2017, pp. 3–4), we adopt as an adequate starting point for the ensuing discussion the necessary condition on AWS propounded along very similar lines by the US Department of Defense (DoD) (2012, pp. 13–14) and the International Committee of the Red Cross (ICRC, 2016, p. 1), as well as by the NGOs campaigning for the introduction of an MHC requirement (Campaign to Stop Killer Robots, 2013). According to this necessary condition, to be counted as autonomous, a weapons system must be able to perform the critical functions of selecting and engaging targets without human intervention.5 An MHC requirement over weapons systems would be aimed at curbing the latter's autonomy precisely in these critical target selection and engagement functions (Bhuta et al., 2016, p. 381). That is the main reason why the above necessary condition provides a reasonable starting point for the ensuing discussion of the motives for, and the contents of, an MHC requirement. By the same token, it is no surprise that the main ethical and legal arguments supporting MHC, which we now briefly summarize, focus exactly on these critical target selection and engagement functions (Heyns, 2013, paras 63–81, 89–97).
First, AWS may be unable to comply with the principles of distinction, proportionality and precaution embedded in international humanitarian law (IHL). Roughly speaking, the principle of distinction requires belligerents to direct their operations only against military objectives, and to distinguish accordingly between civilians and civilian objects on the one hand and combatants and military objectives on the other.6 A similar rule prohibits attacks against combatants who have clearly expressed an intention to surrender or who, by wounds or sickness, have been rendered incapable of defending themselves (hors de combat).7 The principle of proportionality bans launching attacks that may be expected to cause incidental civilian victims or to damage civilian objects (so-called 'collateral damage') that would be excessive in relation to the concrete and direct military advantage anticipated.8 The principle of precaution is instrumental to compliance with the other two principles, in that it requires belligerents to 'do everything feasible' to prevent an attack from being directed against the civilian population and objects or, in any case, from causing disproportionate collateral damage.9

The development of AWS thoroughly fulfilling the principles of distinction and proportionality at least as well as a competent and conscientious human soldier presupposes the solution of many profound research problems in artificial intelligence (AI) and advanced robotics.10 Furthermore, it is questionable whether the complete elimination of human supervision is compatible with the obligation to take all possible precautions to prevent (disproportionate) damage to the civilian population, insofar as the desired behaviour of AI and robotic systems can be upset by unpredicted dynamic changes occurring in warfare and other unstructured or only partially structured environments. Moreover, systems developed by means of advanced machine-learning technologies (e.g., deep learning) have been extensively demonstrated by adversarial testing to be prone to unexpected, counterintuitive and potentially catastrophic mistakes that a human operator would easily detect and avoid (Szegedy et al., 2014).
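The adversarial fragility just mentioned can be conveyed with a toy numerical illustration. The sketch below is ours, not drawn from the AWS literature: it uses an invented high-dimensional linear classifier to show how many individually imperceptible input changes, chosen adversarially, add up to flip a decision.

```python
# Toy illustration (ours) of adversarial fragility: in high dimensions,
# many tiny coordinated input changes are enough to flip a decision.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                         # input dimensionality
w = rng.normal(size=d)             # a fixed linear "classifier"
x = rng.normal(size=d)             # a legitimate input

score = w @ x                      # decision rule: class 1 if score > 0
# Perturb every feature against the current decision; eps is chosen just
# large enough to cross the boundary and shrinks as dimensionality grows.
eps = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(f"clean score: {score:+.2f} -> adversarial score: {w @ x_adv:+.2f}")
print(f"max per-feature change: {eps:.4f}")   # tiny next to unit-scale inputs
```

The decision flips even though no single feature moves by more than roughly one hundredth of its typical scale – the kind of counterintuitive failure a human supervisor would be needed to catch.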
Second, AWS are likely to create an accountability gap, in the light of the epistemic limitations affecting a commander's capability to make robust predictions about AWS behaviours – especially in cluttered and unstructured battlefield situations involving, for example, sustained interactions among fighting soldiers, AWS and other artificial agents from opposing parties. In these and other warfare scenarios, one cannot exclude that AWS will commit errors amounting – at least materially – to war crimes. Who will be held responsible for such conduct? The list of potentially responsible persons in the decision-making chain includes the military commander in charge and those overseeing the AWS operation, in addition to manufacturers, robotics engineers, software programmers, and those who conducted the AWS weapons review. People on this list may cast their defence against responsibility charges and criminal prosecution in terms of their limited decision-making roles, as well as of the complexities of AWS and their unpredictable behaviour on the battlefield.11 Cases may occur where it is impossible to ascertain the existence of the mental element (intent, knowledge or recklessness) required under international criminal law (ICL) to ascribe criminal responsibility. Consequently, no one would be held criminally liable, even when the conduct at issue clearly amounted to an international crime. This outcome is hardly reconcilable with the obligation of military commanders and operators to be accountable for their own actions, as well as with the related principle of individual criminal responsibility under ICL.12

Third and finally, the principle of human dignity would dictate that decisions affecting the life, physical integrity and property of human beings involved in an armed conflict should be entirely reserved for human operators and cannot be entrusted to an autonomous artificial agent. Otherwise, people subject to AWS use of force would be placed in a position where any appeal to the shared humanity of persons on the other side – and thus to their inherent value as human beings – would be systematically denied a priori (Asaro, 2012, p. 689; Heyns, 2016; Sparrow, 2016; Sharkey, 2018).13

Each of these arguments against machine autonomy in the critical functions of selecting and engaging targets is deontological in character: an appeal to moral duties (the IHL-embedded duty to protect the innocent, the duty to preserve human responsibilities, the duty to respect human dignity) is made therein to guide the use of weapons systems and to assess the moral worth of deploying AWS. These deontological arguments provide crucial ethical and legal reasons for the MHC requirement. Moreover, they contribute to shaping the content of MHC by pinpointing the control and supervision functions that should be exclusively attributed to humans.

In addition to deontological arguments, consequentialist arguments about the moral worth of deploying AWS in the light of certain types of expected consequences have played an important role in the debate around the ethical and legal acceptability of AWS. Consequentialist appraisals of AWS, however, play a less important role in connection with the main problem we are concerned with here – that is, the problem of shaping the contents of MHC, over and above the generic consensus for MHC manifested in academic, diplomatic and public discussions of AWS. For this reason, the relationship between MHC and consequentialist appraisals of AWS is postponed until the concluding section. For the moment, let us address the questions of what the legal basis of an MHC requirement under international law is, and what is normatively demanded to make human control over weapons systems truly 'meaningful' in light of the above deontological arguments and additional principles of international law.
3 THE REQUIREMENT OF HUMAN CONTROL OVER WEAPONS SYSTEMS AS A GENERAL PRINCIPLE OF INTERNATIONAL LAW
During the GGE meeting in August 2018, Austria, Brazil and Chile submitted to the other parties to the CCW a proposal aimed at establishing 'an open-ended Group of Governmental Experts to negotiate a legally-binding instrument to ensure meaningful human control over critical functions in lethal autonomous weapon systems' (Austria, Brazil and Chile, 2018). What is the legal novelty, if any, of this proposal that there be a treaty (notably, a protocol to the CCW) having such an object? On the one hand, the proposed treaty may be viewed as an attempt to 'inject new law' into the international legal framework by introducing a requirement, that of MHC, complementing those already provided by existing norms (e.g., the prohibition of means of warfare resulting in superfluous injury or unnecessary suffering).14 On the other hand, a protocol on MHC may be conceptualized as a clarification of an international legal prescription already in force (AIV/CAVV, 2015, p. 51; Marauhn, 2018).

The perspective on MHC as something already required – albeit only implicitly so – by existing legal norms is more conducive to wide consensus, and thus to the success of the initiative prompted by Austria, Brazil and Chile. Indeed, states are often reluctant to accept additional constraints on their freedom of action in the military field, while they may look more favourably on a legal instrument that limits itself to unfolding the implications of extant law. This widespread attitude among states may explain why those making the case for MHC are generally eager to underscore that human control over weapons is already required, albeit implicitly, by this or that norm of international law (Asaro, 2016, pp. 376–7; Chengeta, 2017, pp. 842–4; Ulgen, 2018, para. 13). However, this more conservative approach to MHC needs to be better substantiated as regards the specific legal sources from which the MHC requirement would stem. It is our contention that human control over weapons systems is imposed by a general principle of international law.

General principles of international law have been aptly described as 'concise normative statement[s] referring to abstract concepts rich in meaning' (Iovane, 2018, pp. 6–7). Unlike 'rules', which either order or prohibit – in an all-or-nothing fashion – a particular behaviour (Dworkin, 1967, p. 25), principles are placed at a more general normative level, indicating 'a direction to be followed, a tendency to be respected, and a value that must be taken into consideration' (Iovane, 2018, p. 7). These loose intimations provide a legal starting point from which more specific rules are subsequently derived, through the formulation of treaty provisions, the
consolidation of state practice or the pronouncement of a court of justice (Iovane, 2017, pp. 368–91).

The proper methodology for identifying general principles is admittedly a subject of substantial controversy. According to one approach, principles are to be 'inferred by way of induction and generalization from conventional and customary rules' (Cassese, 2005, p. 189, fn 3). Less formalist approaches, on the other hand, tend to construe general principles through the prism of the normative expectations of the international community lato sensu, which include views expressed by international experts and NGOs in a large variety of documents (Pisillo Mazzeschi and Viviani, 2018, p. 132). This is clearly not the place to address and take a stand on this time-honoured debate. It is worth highlighting, however, that a general principle imposing human control over weapons systems may be identified in the light of both lines of argument, which in this respect play a mutually reinforcing role.

To begin with, all IHL rules governing attacks are formulated as implying that the lawful use of weapons must be ensured by human combatants. There is no need to draw up a long list of provisions whose language plainly points in this direction. To put it in the words of the Law of War Manual of the US Department of Defense, this much is entailed – quite obviously indeed – by the fact that IHL norms 'do not impose obligations on the weapons themselves', which are 'inanimate object[s]'; rather, 'it is persons who must comply with the law of war' (US DoD, 2016, pp. 353–4).

More interesting indications on human involvement come from treaty and customary regimes banning certain categories of weapons (Human Rights Watch [HRW], 2016, pp. 9–11). In the first place, it is important to note that the 1997 Ottawa Treaty confines itself to banning victim-activated mines, while not affecting the use of remote-detonated types. This choice was made on the assumption that the lawfulness of a weapon depends, among other things, on the existence of a causal nexus between the release of force and a human deliberation: in a nutshell, a relationship of control (HRW, 2016, p. 11). The relevance of the 1997 Ottawa Treaty for our present purposes is further emphasized by the fact that the banned victim-activated mines satisfy the necessary condition for autonomy recalled at the beginning of Section 2: a victim-activated mine – albeit in a very primitive way – exerts without any human intervention the critical functions of selecting and engaging targets, in that its pressure sensor approximately discriminates between items that are below or above a certain weight threshold (target selection) and triggers the explosion above the threshold (target engagement).

The need to preserve such a relationship of control, moreover, is at the core of the customary prohibition on weapons that are by nature indiscriminate (including incendiary, biological and chemical weapons, unguided long-range missiles and balloon-borne bombs).
As well evidenced by the ICRC in its study on customary IHL, a weapon is considered by states to be indiscriminate primarily by reason of its uncontrollability – that is, because its effects 'escape in time or space from the control of the user' (US Air Force, 1976, para. 6-3(c)).15

Last, a valuable (albeit far from recent) example of international regulation based on the idea of human control is provided by the 1907 Hague Convention (VIII) relative to the Laying of Automatic Submarine Contact Mines. Its Article 1 forbids laying 'unanchored automatic contact mines, except when they are so constructed as to become harmless one hour at most after the person who laid them ceases to control them', as well as 'anchored automatic contact mines which do not become harmless as soon as they have broken loose from their moorings'. In addition to confirming that the requirement of human control over weapons systems is anything but new in international legal discourse, this provision offers an insight that will be useful in the next section: control may also be exerted by equipping the weapon with operational constraints concerning time ('one hour') and space ('moorings').

In addition to these treaty provisions, it was noted above that the principle whereby human control over weapons should be retained has been gaining widespread consensus within the international community at large. We may recall, in this regard, that the various delegations taking part in the CCW meetings, while not necessarily concurring on the MHC formula, agreed on the need to keep humans involved to a certain extent, as reflected in the Possible Guiding Principles adopted by consensus at the GGE meeting of August 2018. Significantly enough, this view was expressly endorsed – although with non-negligible differences – by the five permanent members of the UN Security Council.16 To this, one may add the authoritative opinion of the International Committee of the Red Cross (ICRC, 2018) and that of the UN Secretary-General (2018);17 the parliamentary initiatives specifically addressing autonomy in weapons systems (Campaign to Stop Killer Robots, 2017, 2018); the official stance taken by the European Union on the matter (EU, 2018, p. 1);18 the reports issued by international human rights supervisory bodies (African Commission on Human and Peoples' Rights, 2015, para. 35; Heyns, 2013; Kiai and Heyns, 2016, para. 67(f); Human Rights Committee, 2018, para. 65) and other international committees (UNESCO World Commission on the Ethics of Scientific Knowledge and Technology [COMEST], 2017, paras 87–101); the (qualified) concerns voiced in a number of open letters signed by renowned experts in the fields of robotics and AI, and by founders and CEOs of AI and robotics companies;19 and the opinion surveys showing worldwide hostility to non-human lethal decision making (Ipsos, 2019). These various pronouncements and aspirations of the international community have been motivated by a variety of ethical and legal considerations, crucially including the deontological arguments of Section 2.
In the light of the foregoing, there are substantial grounds for arguing that a human control requirement may not only be inferred by way of abstraction from individual provisions of IHL and weapons law, but also matches the aspirations of the international community or, in the words of the much-celebrated Martens Clause, the 'dictates of public conscience'.20 This conclusion, however, does not come with a definite answer to the question of what makes human control over weapons systems truly 'meaningful', viz. normatively acceptable. An answer to this question may only come with the provision of rules specifying what MHC amounts to. These rules should ideally be incorporated into a Protocol VI of the CCW, or introduced through uniform state practice, or, albeit less likely, laid down by the International Court of Justice in an advisory opinion.
4 SHAPING THE CONTENT OF THE REQUIREMENT OF (MEANINGFUL) HUMAN CONTROL
In this section, we offer some suggestions towards the crafting of rules specifying what MHC amounts to, taking due account of the ethical and legal arguments recalled in Section 2, and of the related motivations for the treaties and pronouncements of the international community mentioned in Section 3. To be ethically and legally sound, rules determining the obligations accruing from the MHC requirement should guarantee that the functions normatively assigned to human control are properly fulfilled. In the light of the above, it appears that human control must play a threefold role. First, it constitutes a fail-safe mechanism, which is meant to prevent a malfunctioning of the weapon from resulting in a direct attack against the civilian population and objects, or in excessive collateral damage (Scharre, 2017, p. 154). Second, it represents a catalyst for accountability, insofar as it secures the legal conditions for responsibility ascription should a weapon follow a course of action that is in breach of international law (Chengeta, 2017). Third, it ensures that it is a moral agent, and not an artificial one, that takes decisions affecting the life, physical integrity and property of people (including combatants) who are involved in an armed conflict (ICRC, 2018, paras 23–26; Santoni de Sio and Van den Hoven, 2018, pp. 9–11; Scharre, 2017, p. 154).
The performance of these various roles and functions in relation to increasingly autonomous weapons systems presupposes that we solve two crucial problems: (1) how to ensure a proper quality of human involvement; and (2) how to establish proper kinds of shared human–weapon control policies. In this respect, it is important to note that neither problem is properly addressed by considering only present battlefield deployment and use, as humans are involved in different capacities throughout 'the entire life cycle of the weapons
system’ (GGE, 2018, para. 21(b)), which includes training of commanders and operators, research and development (R&D), as well as weapon testing, evaluation and certification (T&E). Let us start by focusing on requirements specifically aiming to ensure a sufficient quality of human involvement. In the first place, military personnel training should foster awareness of both ascertained and likely limits in the proper autonomous functioning of weapons systems, and related human predicaments in the ability to predict and control their behaviour (Margulies, 2017, p. 441).21 Well-known performance degradation factors originate in task environment changes that are difficult to model, notably including unpredicted competitive interactions with other autonomous artificial agents – for example, other AWS endowed with kinetic capabilities and software agents performing cyber-attacks. Awareness-building efforts concerning limitations in proper AWS functioning should be part of more encompassing training efforts, whereby the military personnel are trained to use advanced technologies without forfeiting human judgment and critical sense, and without succumbing to so-called automation biases (Cummings, 2006, referred to in International Committee for Robot Arms Control [ICRAC], 2018, p. 4). If humans are expected not to blindly trust the machine, moreover, they should be put in a position to get a sufficient amount of humanly understandable information about machine data processing (interpretability requirement), and to additionally obtain an account of the reasons why the machine is suggesting or going to take a certain course of action (explainability requirement). Both interpretability and explainability requirements must be addressed by R&D and T&E teams. To fulfil the interpretability requirement, one should map machine data and information processing into domains that humans can make sense of (Montavon, Samek and Müller, 2018). Accordingly, AWS should be designed to provide commanders and operators with ‘access to the sources of information’ handled by the system (Breton and Bossé, 2003, pp. 10–11) in a way that allows humans to take in and process data ‘at the level of meaning’, rather than ‘in a purely syntactic manner’ (Hew, 2016, p. 230). In general, military technological advances should empower human combatants, by enhancing their situational awareness, rather than substituting artificial agents for human understanding and judgement (US Air Force Chief Scientist, 2015, p. 8). To fulfil the explainability condition, AWS should be equipped to provide explanations of courses of actions that are being suggested or undertaken. On account of the interpretability requirement, these explanations must be cast in terms that are cognitively accessible to human users.22 Meeting the explainability requirement might prove particularly demanding in relation to AWS endowed with machine-learning capabilities, since currently used learning technologies are often based on sub-symbolic data representations
Meeting the explainability requirement might prove particularly demanding in relation to AWS endowed with machine-learning capabilities, since currently used learning technologies are often based on sub-symbolic data representations and information processing that are not transparent to human users. Notably, deep neural networks are currently achieving outstanding classification and decision-making results, but are mostly unable to fulfil interpretability and explainability requirements (Holzinger et al., 2017). The development of AI systems that are capable of providing humanly understandable explanations for their decisions and actions is the focus of the rapidly expanding XAI (eXplainable AI) research area. Scientifically challenging issues in XAI are, by no coincidence, central themes of research programmes supported by the US Defense Advanced Research Projects Agency (DARPA).23 Pending significant breakthroughs in XAI, one can but acknowledge the present technological difficulty of ensuring levels of system interpretability and explainability sufficient to contribute to establishing MHC over AI-based weapons systems.
Let us now turn to the second problem – that is, the problem of establishing proper kinds of shared human–weapon control policies. It should be noted that several attempts have so far been made – by scholars, states and NGOs – to define the shared human–weapon control policies dictated by the MHC requirement. While significantly different from each other, these various proposals generally suffer from a common weakness: they aspire to achieve optimal partnership with one formula, which is supposed to apply uniformly to all kinds of weapons systems and to each of their possible uses (US, 2018a, para. 9). This flaw is particularly evident as regards the so-called 'wider loop' approach, advocated by the Dutch government, whereby MHC would in fact be exerted by human commanders at the planning stage of the targeting process (AIV/CAVV, 2015). Such an approach may have some limited relevance with regard to deliberate targeting, which envisages targets that are 'known to exist in an area of operations and can be mapped to decisive points on line(s) of operation' (Houston, 2009, para. 1.11). It is, however, a largely unhelpful approach with regard to dynamic targeting, which pursues targets of opportunity. Moreover, when the weapon is deployed in a scenario populated by civilians, equipping it with humanly unrestrained autonomy after deployment appears to be deeply problematic in that it drives a wedge between the state owing a duty of care towards the civilian population and the actual possibility of complying with that duty by influencing the course of events through its agents (Akerson, 2013, p. 87). This is especially true for AWS endowed with the capability of 'loitering' for sustained periods of time in search of enemy targets, for the conditions licensing activation of a loitering AWS by human operators may rapidly change in many warfare scenarios characterized by erratic dynamics and surprise-seeking behaviour.
The overly permissive guideline sketched and advocated by the Dutch government is located at one end of the spectrum of MHC construals. At the other end of the spectrum one finds overly restrictive endeavours to define the MHC requirement in rigorous terms and in an all-encompassing manner,
laying down a uniformly applicable form of human control over each and every kind of AWS and use thereof (Chengeta, 2017). While undoubtedly praiseworthy for their attention to humanitarian concerns, these attempts run the risk of being counterproductive by alarming and alienating the more technologically advanced military powers, which should instead be fully involved in negotiations to ensure the effectiveness of a regulatory regime on this matter. Diplomatic and political discontent about an overly restrictive MHC requirement might be fuelled not only because one would end up banning weapons whose lawfulness has so far gone undisputed (Horowitz and Scharre, 2015, pp. 9–10), but also because milder forms of human control might be equally able to resolve the ethically and legally problematic implications of AWS in certain limited operational environments. Significant cases in point are the already deployed Israeli Iron Dome24 and the German Nächstbereichschutzsystem (NBS) MANTIS,25 when both are used as intended – that is, as protective shields from incoming shells and rockets. As we shall point out below, reflection on these systems strongly suggests that the autonomy of weapons systems is not invariably incompatible with a proper exercise of MHC. In brief, along with NGO-driven humanitarian concerns and the restrictions they motivate, state-driven military motivations for granting limited forms of critical autonomy to weapons systems should likewise be taken into consideration, as long as the latter do not jeopardize the fail-safe, accountability and moral agency properties that we have identified above as core components of the MHC requirement.
To this end, we suggest giving up the quest for a one-size-fits-all solution to the issue of MHC in favour of a suitably differentiated approach, which is nonetheless based on the common ground provided by the converging ethical and legal principles outlined above. In our view, application of these overarching principles in concrete situations must be facilitated and given concrete operational content by the formulation of a set of rules bridging the gap between ethical and legal principles on the one hand, and specific sorts of weapon systems and their concrete uses on the other. These 'if-then' bridge rules should be able to express the fail-safe, accountability and moral agency conditions for exercising genuine MHC over weapons systems in context. The 'if part' of these rules should include properties concerning what mission the weapons system is involved in, where it will be deployed and how it will perform its tasks. The 'what properties' relate to the operational goals (defensive vs offensive), the targeting modes (deliberate vs dynamic), and the nature of targets to be engaged (human combatants, manned vehicles, and inhabited military objects vs unmanned vehicles and uninhabited military objects). The 'where properties' concern the dynamic features of the operational environment, including interactions with other autonomous artificial
agents and having special regard to the presence/absence of civilians, civilian objects and friendly forces. The 'how properties', finally, concern the capabilities that the system puts to work to carry out its mission and that may affect its overall controllability and predictability. Loitering, learned decision making and 'swarming' abilities, which may be increasingly implemented on future AWS, are the most significant cases in point.
The 'then part' of bridge rules should establish what kind of shared human–machine control would be legally required for each single use of a weapons system. Following a taxonomy proposed by Noel Sharkey (only slightly modified below) (Sharkey, 2016, pp. 34–7),26 one may sensibly consider five basic types of human–machine interaction for the 'then part' of bridge rules, ordered according to decreasing levels of human control and increasing levels of machine control in connection with critical target selection and engagement tasks:
L0. A human engages with and selects targets, and initiates any attack.
L1. A program suggests alternative targets and a human chooses which to attack.
L2. A program selects targets and a human must approve before the attack.
L3. A program selects and engages targets, but is supervised by a human who retains the power to override its choices and abort the attack.
L4. A program selects targets and initiates an attack on the basis of the mission goals as defined at the planning stage, without further human involvement.
Bridge rules should establish what level is required to ensure fulfilment of the normative functions of human control, as well as the values of the what/where/how properties (or combinations thereof) that justify identification of a specific level in the above list. In other words, bridge rules must map combinations of what/where/how values into the L0–L4 set of weapons autonomy levels; a schematic sketch of one such mapping is given after the observations below. In light of the ethical and legal arguments examined above, we suggest imposing, as a general default policy – that is, in the absence of any other internationally agreed bridge rule to the contrary – that higher levels of human control (L0 and L1) be exerted. This is the distinctive prudential aspect of the present approach to MHC. Within this framework of high levels of human control granted by default, lower levels of human control may become acceptable only as exceptions, allowed by specific bridge rules that are internationally agreed on in legally binding documents. Thus, deviations from the general default policy should be properly crafted and internationally approved as bridge rules, by taking into account (at least) the following observations:
1. The deployment of a weapons system in a dynamic environment, characterized by the presence of civilians or friendly forces or the active search for targets of opportunity, is a compelling factor inclining towards application of higher levels of human control (L0 and L1). The same goes for
the use of capabilities that may reduce the overall predictability of AWS behaviour (such as loitering, learned decision making, swarming).
2. Deliberate targeting by AWS may be pursued at a lower level of human control (L2), since targeting decisions have actually been taken by humans at the planning stage: the human operator, therefore, has only to confirm that there have not been changes in the battlespace that may affect the lawfulness of the operation. A similar level should be required, as a minimum, in relation to AWS programmed to engage human and/or inhabited targets in structured scenarios, where civilians and civilian objects are not present, since it is essential that there be a human on the attacking end who can verify whether there are persons hors de combat and take appropriate measures accordingly. A case in point is the legacy South Korean robotic sentry SGR-A1, deployed on the southern side of the Korean demilitarized zone (Kumagai, 2007).
3. Human supervision and veto (L3) might be deemed an acceptable level of control in cases of AWS with exclusively defensive functions. This is so with the Israeli Iron Dome and the German NBS MANTIS, as befits their customary use as protective shields from incoming shells and rockets.
4. Full autonomy (L4) should be considered incompatible, as a matter of principle, with the MHC requirement. One exception to this might be introduced exclusively for cases where human control (even in the mildest form of supervision) would not only be impracticable but also likely to imperil the life and physical integrity of human beings. Reference is made, in particular, to the hypothesis where a commander decides to switch a defensive system (like Iron Dome and MANTIS) from supervised autonomy (L3) to full autonomy (L4), in order to protect humans as well as inhabited vehicles and buildings from sudden and/or saturation attacks, in relation to which a split-second response may be required – that is, one that is incompatible with human reaction times. To be ethically and legally acceptable (viz. to preserve the 'meaningfulness' of human control), however, a clause of this kind should strictly circumscribe the space and time frames in which the AWS is authorized to operate in fully autonomous mode (Amoroso et al., 2018, pp. 44–5).
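To make the format of such bridge rules concrete, the following minimal sketch encodes observations 1–4 above as a single mapping from what/where/how properties to the L0–L4 levels. It is our own schematic illustration under stated assumptions, not a proposal advanced in this chapter or in any cited document: the property names, the boolean encoding of operational context, and the collapsing of the L0/L1 default into L0 are all simplifications introduced here.

```python
from enum import IntEnum

class Level(IntEnum):
    """Human-machine interaction levels, as listed above."""
    L0 = 0  # human selects targets and initiates any attack
    L1 = 1  # program suggests targets; human chooses which to attack
    L2 = 2  # program selects targets; human must approve before the attack
    L3 = 3  # program selects and engages; human supervises and may abort
    L4 = 4  # program selects and attacks per mission plan; no further human role

def required_control(offensive: bool,          # 'what': operational goal
                     dynamic_targeting: bool,  # 'what': deliberate vs dynamic
                     civilians_present: bool,  # 'where': operational environment
                     low_predictability: bool, # 'how': loitering, learning, swarming
                     saturation_attack: bool = False) -> Level:
    """Return the minimum level of human control for one use of a weapons system."""
    if not offensive:
        # Observation 4: a defensive system may exceptionally run fully
        # autonomously under sudden/saturation attack, within strictly
        # circumscribed space and time frames (not modelled here).
        return Level.L4 if saturation_attack else Level.L3  # observation 3
    if dynamic_targeting or civilians_present or low_predictability:
        # Observation 1 and the prudential default: full human control.
        return Level.L0
    # Observation 2: deliberate targeting in structured scenarios.
    return Level.L2

# An offensive, loitering AWS hunting targets of opportunity near civilians
# falls back to the prudential default of full human control.
assert required_control(True, True, True, True) is Level.L0
```

The point of the sketch is structural rather than substantive: bridge rules are conditional mappings whose antecedents and consequents can be stated precisely enough to be negotiated, codified and audited, whereas an actual Protocol VI provision would express these conditions in legal language and with far finer-grained operational categories.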
5 CONCLUSIONS The foregoing analysis started from a brief overview of the main ethical (more specifically deontological) and legal reasons broadly supporting the MHC requirement. We have additionally tried to show the existence of a general principle of international law requiring human control over weapons systems. Furthermore, we have attempted to distil from this legal principle and the
attendant ethical arguments a number of core conditions ensuring the 'meaningfulness' of such control. These conditions concern the fail-safe role of human operators, the preservation of human accountability and the kind of moral agency to be exerted in targeting and engagement decisions.
We have also elaborated on the proper ways to ensure performance of these various human roles and functions in relation to increasingly autonomous weapons systems. In particular, we distinguish between two kinds of policies: those that are necessary to preserve the quality of human involvement and those enabling one to identify the normatively required form of shared human–weapon control partnership. The former include obligations to appropriately train AWS end users (operators and commanders) about ascertained and likely limitations of machine autonomy, and to preserve the application of meaningful human judgement by imposing human interpretability and explainability of machine data representations and information processing. The latter comprise sets of bridge rules, which should establish – for the various categories of AWS and use thereof – the level of human–machine interaction that strikes the best balance between (NGO-driven) humanitarian concerns and (state-driven) military motivations for granting limited forms of critical autonomy to weapons systems, having particular regard to the preservation of the fail-safe, accountability and moral agency properties that were identified as core components of the MHC requirement.
In general, by giving a central role to bridge rules linking general legal and ethical principles to concrete MHC implementations, one relinquishes the quest for a one-size-fits-all solution to the MHC issue in favour of a suitably differentiated approach, which is nonetheless based on the common ground provided by the above converging ethical and legal principles. Moreover, by imposing the higher levels of human control by default, this differentiated approach is also prudential in character. It has been suggested that these bridge rules might crystallize through uniform state practice or be laid down by the International Court of Justice in an advisory opinion. However, a preferable approach is that of codifying these bridge rules into a Protocol VI of the CCW, as this would better serve both the clarity and the binding character of the legal prescriptions posited therein.27
Two additional remarks are in order here. First, current trends in cyber-conflicts suggest that AWS software will be increasingly subject to cyber-attacks, which may lead these systems to behave in unpredictable and undesired ways. Accordingly, cyber-attacks are a major threat to MHC, deeply undermining the 'meaningfulness' of human control. The latter, therefore, should be preserved not only through the adoption of suitable legal rules, but also by embedding strong cyber-resilience features in weapons systems (US Air Force Chief Scientist, 2015, p. 23) – a duty incumbent upon all those involved at the R&D and T&E stages, including military procurement officers.
Second, while ultimately based on deontological reasons, the MHC requirement – at least as set out in this chapter – would also prove sound from a consequentialist perspective in normative ethics, as mentioned at the end of Section 2. On the one hand, a differentiated approach to the best form of shared human–weapon control partnership would capture the humanitarian advantages usually seen as a consequentialist driver towards autonomy in weapons systems (i.e., better performance, in terms of IHL, on individual battlefields) (Arkin, 2009, pp. 127–33; US, 2018b), without giving up the threefold normative functions of human control (fail-safe mechanism, catalyst for responsibility, warrant of moral agency). On the other hand, by enforcing the MHC requirement in the ways set out here, one connects the tempo of military attacks to human cognitive capacities and reaction times (with the notable exception of certain uses of defensive AWS), thereby mitigating the widespread concern that autonomy in weapons systems might lead to an acceleration in the pace of war that is incompatible with the limitations of the human cognitive and sensory-motor coordination system (Altmann and Sauer, 2017).
NOTES
1. 'It is DoD policy that: a. Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force' (emphasis added).
2. For an earlier use of this expression in the context of robotic warfare, see Adams (2001), p. 67.
3. But see also, though from a radically different perspective, the official position of the Dutch government, which is based on Advisory Council on International Affairs (AIV) and Advisory Committee on Issues of Public International Law (CAVV) (AIV/CAVV) (2015).
4. Thus, at both the 2018 and 2019 GGE meetings, an agenda item was specifically devoted to the 'consideration of the human element in the use of lethal force' (see Singh Gill, 2018, 6(b) and Gjorgjinski, 2019, 5(b)).
5. Remarkably enough, a constraint of this kind is satisfied by some currently operating weapons systems, their limited target baskets notwithstanding, including stationary robotic sentinels, loitering munitions and fire-and-forget systems. For a selective overview of existing weapons captured by this crucial necessary condition on the autonomy of weapons systems, see Amoroso and Tamburrini (2017), pp. 3–4.
6. Additional Protocol (I) to the Geneva Conventions, 1977, Arts 48, 51(2) and 52(2).
7. Additional Protocol (I) to the Geneva Conventions, 1977, Art. 41(2).
8. Additional Protocol (I) to the Geneva Conventions, 1977, Art. 51(5)(b).
9. Additional Protocol (I) to the Geneva Conventions, 1977, Art. 57.
10. This is also acknowledged by roboticists who, in principle, are in favour of autonomy in weapons systems. See Arkin (2013), p. 4.
11. It is worth noting that AWS unpredictability has many sources. In particular, it does not depend solely upon the complexity of the system's architecture and the AI built into it, but also on the features of the AWS operational environments. See Tamburrini (2016), pp. 127–8.
12. See also, for further references, Amoroso and Giordano (2019) (addressing also state and corporate responsibility gaps for AWS's misdoings).
13. For an overview of this debate, and for further references, see Amoroso (2020), pp. 161–215.
14. The same might be said for the view whereby the MHC requirement would stem from an emerging norm of customary international law. The emergence of a customary norm on MHC is (critically) discussed, for instance, in Lewis, Blum and Modirzadeh (2016), pp. 55–9. See also Asaro (2016).
15. See, in general, the practice regarding Rule 71 (Weapons That Are by Nature Indiscriminate), accessed 9 July 2020 at https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule71.
16. See China (2018), para. 3 ('Discussions on Human–Machine Interaction should…define the mode and degree of human involvement and intervention. Concepts such as meaningful human control and human judgment are rather general and should be further elaborated and clarified'); France (2018), para. 13 ('The use of force remains an inherent responsibility of human command, particularly in cases of violations of international humanitarian law'); Russian Federation (2018), para. 11 ('We do not doubt the necessity of maintaining human control over the machine'); United Kingdom (2018), para. 6 ('Operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority, and of accountability for weapons usage'). As to the US official position, see above note 1.
17. '[M]achines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.'
18. 'We firmly believe that humans must make the decisions with regard to the use of lethal force, exert control over lethal weapons systems they use, and remain accountable for decisions over life and death. Appropriate human control is essential to ensure compliance with fundamental IHL principles.'
19. For a compilation of the open letters signed so far, see https://autonomousweapons.org/compilation-of-open-letters-against-autonomous-weapons/, accessed 9 July 2020. See also the pledge signed by an increasing number of AI companies and researchers, to 'neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons' (Future of Life Institute, 2018).
20. On the Martens Clause, whose text is also reproduced in the fifth preambular paragraph of the CCW, see Cassese (2000). For a discussion of the relevance of this clause in the debate on AWS, see Asaro (2016).
21. The provision of a training obligation would not be a novelty within the CCW system. See Article 2 of Protocol IV of the CCW on Blinding Laser Weapons.
22. See US Department of Defense (2012), requiring the interface between people and machines to 'be readily understandable to trained operators' (para. 4.3.a).
23. See M. Turek (n.d.), 'Explainable artificial intelligence (XAI)' at https://www.darpa.mil/program/explainable-artificial-intelligence, accessed 9 July 2020.
24. See 'Iron Dome air defence missile system', at https://www.army-technology.com/projects/irondomeairdefencemi/, accessed 9 July 2020.
25. See 'NBS MANTIS air defence protection system', at https://www.army-technology.com/projects/mantis/, accessed 9 July 2020.
26. Deviations concern, notably, levels L3 and L4.
27. For a first attempt to identify the 'key elements' of such a treaty, see Amoroso and Tamburrini (2019); Campaign to Stop Killer Robots (2019).
REFERENCES Adams, T.K. (2001), ‘Future warfare and the decline of human decision-making’, Parameters, 31(4), 57–71. Advisory Council on International Affairs (AIV) and Advisory Committee on Issues of Public International Law (CAVV) (2015), Autonomous Weapon Systems: The Need for Meaningful Human Control, No. 97 AIV/No. 26 CAVV, The Hague: AIV-CAVV. African Commission on Human and Peoples’ Rights (2015), ‘General Comment No. 3 on the African Charter on Human and Peoples’ Rights: the right to life (Article 4)’, 57th Ordinary Session, 4–18 November. Akerson, D. (2013), ‘The illegality of offensive lethal autonomy’, in D. Saxon (ed.), International Humanitarian Law and the Changing Technology of War, Leiden and Boston, MA: Martinus Nijhoff, pp. 65–98. Altmann, J. and F. Sauer (2017), ‘Autonomous weapon systems and strategic stability’, Survival, 59(5), 117–42. Amoroso, D. (2020), Autonomous Weapons Systems and International Law: A Study on Human–Machine Interactions in Ethically and Legally Sensitive Domains, Napoli and Baden-Baden: ESI/Nomos. Amoroso, D. and B. Giordano (2019), ‘Who is to blame for autonomous weapons systems’ misdoings?’, in N. Lazzerini and E. Carpanelli (eds), Use and Misuse of New Technologies: Contemporary Challenges in International and European Law, Heidelberg: Springer, pp. 211–32. Amoroso, D., F. Sauer and N. Sharkey et al. (2018), Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy, Berlin: Heinrich Böll Foundation. Amoroso, D. and G. Tamburrini (2017), ‘The ethical and legal case against autonomy in weapons systems’, Global Jurist, 17(3), 1–20. Amoroso, D. and G. Tamburrini (2019), ‘Filling the empty box: a principled approach to meaningful human control over weapons systems’, ESIL Reflections, 8(5), 1–9. Arkin, R. (2009), Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL: CRC Press. Arkin, R. (2013), ‘Lethal autonomous systems and the plight of the non-combatant’, AISB Quarterly, No. 137, 1–9. Article 36 (2013), ‘Killer robots: UK government policy on fully autonomous weapons’, 19 April, policy paper. Asaro, P. (2012), ‘On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making’, International Review of the Red Cross, 94, 687–709. Asaro, P. (2016), ‘Jus nascendi, robotic weapons and the Martens Clause’, in R. Calo, A.M. Froomkin and I. Kerr (eds), Robot Law, Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing, pp. 367–86. Austria, Brazil and Chile (2018), ‘Proposal for a mandate to negotiate a legally binding instrument that addresses the legal, humanitarian and ethical concerns posed by emerging technologies in the area of lethal autonomous weapons systems (LAWS)’,
submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 30 August (UN Doc. CCW/GGE.2/2018/WP.7).
Bhuta, N., S. Beck and R. Geiss (2016), 'Present futures: concluding reflections and open questions on autonomous weapons systems', in N. Bhuta, S. Beck and R. Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 347–83.
Brehm, M. (2015), 'Meaningful human control', paper presented to the Informal Meeting of Experts on Lethal Autonomous Weapons Systems of the Convention on Certain Conventional Weapons (CCW), Geneva, 14 April.
Breton, R. and E. Bossé (2003), 'The cognitive costs and benefits of automation', in NATO (ed.), RTO Meeting Proceedings MP-088: The Role of Humans in Intelligent and Automated Systems, pp. 1–11.
Campaign to Stop Killer Robots (2013), 'Urgent action needed to ban fully autonomous weapons: non-governmental organizations convene to launch Campaign to Stop Killer Robots', press release, London, 23 April.
Campaign to Stop Killer Robots (2017), 'Parliamentary actions', 23 April, accessed 9 July 2020 at https://www.stopkillerrobots.org/2017/04/parliaments/.
Campaign to Stop Killer Robots (2018), 'Parliamentary actions in Europe', 10 July, accessed 9 July 2020 at https://www.stopkillerrobots.org/2018/07/parliaments-2/.
Campaign to Stop Killer Robots (2019), 'Key elements of a treaty on fully autonomous weapons', November 2019, accessed December 2019 at https://www.stopkillerrobots.org/publications/.
Cassese, A. (2000), 'The Martens Clause: half a loaf or simply pie in the sky?', European Journal of International Law, 11(1), 187–216.
Cassese, A. (2005), International Law, New York and Oxford: Oxford University Press.
Chengeta, T. (2017), 'Defining the emerging notion of "meaningful human control" in autonomous weapon systems', New York Journal of International Law & Politics, 49, 833–90.
China (2018), 'Position paper', submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 11 April (UN Doc. CCW/GGE.1/2018/WP.7).
Crootof, R. (2016), 'A meaningful floor for "meaningful human control"', Temple Journal of International & Comparative Law, 30, 53–62.
Cummings, M.L. (2006), 'Automation and accountability in decision support system interface design', Journal of Technology Studies, 32(1), 23–31.
Dworkin, R.M. (1967), 'The model of rules', The University of Chicago Law Review, 35(1), 14–46.
European Union (EU) (2018), 'Statement on agenda item 6(b)', delivered to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, August.
France (2018), 'Human–machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 28 August (UN Doc. CCW/GGE.2/2018/WP.3).
Future of Life Institute (2018), 'Lethal autonomous weapons pledge', 18 July, accessed 9 July 2020 at https://futureoflife.org/laws-pledge/.
Gjorgjinski, L.J. (2019), 'Provisional agenda', submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 8 February (UN Doc. CCW/GGE.1/2019/1).
Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE) (2018), 'Report of the 2018 session', Geneva, 23 October (UN Doc. CCW/GGE.1/2018/3).
Hew, P.C. (2016), 'Preserving a combat commander's moral agency: the Vincennes incident as a Chinese Room', Ethics and Information Technology, 18, 227–35.
Heyns, C. (2013), 'Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions', 9 April (UN Doc. A/HRC/23/47).
Heyns, C. (2016), 'Autonomous weapons systems: living a dignified life and dying a dignified death', in N. Bhuta, S. Beck and R. Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 3–19.
Holzinger, A., M. Plass and K. Holzinger et al. (2017), 'A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop', arxiv.org, 3 August.
Horowitz, M. and P. Scharre (2015), 'Meaningful human control in weapon systems: a primer', working paper of the Center for a New American Security (CNAS), March.
Houston, A.G. (2009), 'Australian Defence Doctrine Publication 3.14 – Targeting', 2 February.
Human Rights Committee (2018), 'General comment No. 36 on Article 6 of the International Covenant on Civil and Political Rights, on the right to life', 30 October (UN Doc. CCPR/C/GC/36).
Human Rights Watch (2016), 'Killer robots and the concept of meaningful human control', memorandum submitted to the Informal Meeting of Experts on Lethal Autonomous Weapons Systems of the Convention on Certain Conventional Weapons (CCW), Geneva, April.
International Committee for Robot Arms Control (ICRAC) (2018), 'Guidelines for the human control of weapons systems', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, April.
International Committee of the Red Cross (ICRC) (2016), 'Views of the International Committee of the Red Cross (ICRC) on autonomous weapon systems', paper submitted to the Informal Meeting of Experts on Lethal Autonomous Weapons Systems of the Convention on Certain Conventional Weapons (CCW), Geneva, 11 April.
International Committee of the Red Cross (ICRC) (2018), 'Ethics and autonomous weapon systems: an ethical basis for human control?', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 29 March (UN Doc. CCW/GGE.1/2018/WP.5).
Iovane, M. (2017), 'L'influence de la multiplication des juridictions internationales sur l'application du droit international', Recueil des cours de l'Académie de droit international de La Haye, 383, 233–446.
Iovane, M. (2018), 'Some reflections on identifying custom in contemporary international law', Federalismi.it. Rivista di diritto pubblico italiano, comparato, europeo, No. 9, 25 April.
Ipsos (2019), 'Six in ten (61%) respondents across 26 countries oppose the use of lethal autonomous weapons systems', 22 January, accessed 9 July 2020 at https://www.ipsos.com/en-us/news-polls/human-rights-watch-six-in-ten-oppose-autonomous-weapons.
Kiai, M. and C. Heyns (2016), 'Joint report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association and the Special Rapporteur on
extrajudicial, summary or arbitrary executions on the proper management of assemblies', 4 February (UN Doc. A/HRC/31/66).
Kumagai, J. (2007), 'A robotic sentry for Korea's demilitarized zone', IEEE Spectrum, 1 March.
Lewis, D.A., G. Blum and N.K. Modirzadeh (2016), 'War-algorithm accountability', Harvard Law School Program on International Law and Armed Conflict, research briefing, August.
Marauhn, T. (2018), 'Meaningful human control – and the politics of international law', in W. Heintschel von Heinegg, R. Frau and T. Singer (eds), Dehumanization of Warfare: Legal Implications of New Weapon Technologies, Cham, Switzerland: Springer Nature, pp. 207–18.
Margulies, P. (2017), 'Making autonomous weapons accountable: command responsibility for computer-guided lethal force in armed conflicts', in J.D. Ohlin (ed.), Research Handbook on Remote Warfare, Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing, pp. 405–42.
Montavon, G., W. Samek and K.-R. Müller (2018), 'Methods for interpreting and understanding deep neural networks', Digital Signal Processing, 73, 1–15.
Pisillo Mazzeschi, R. and A. Viviani (2018), 'General principles of international law: from rules to values?', in R. Pisillo Mazzeschi and P. De Sena (eds), Global Justice, Human Rights and the Modernization of International Law, Cham, Switzerland: Springer Nature, pp. 113–61.
Rosert, E. (2017), 'How to regulate autonomous weapons: steps to codify meaningful human control as a principle of international humanitarian law', PRIF Spotlight, 6/2017.
Russian Federation (2018), 'Russia's approaches to the elaboration of a working definition and basic functions of lethal autonomous weapons systems in the context of the purposes and objectives of the convention', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 4 April (UN Doc. CCW/GGE.1/2018/WP.6).
Santoni de Sio, F. and J. van den Hoven (2018), 'Meaningful human control over autonomous systems: a philosophical account', Frontiers in Robotics and AI, 5(15), https://doi.org/10.3389/frobt.2018.00015.
Scharre, P. (2017), 'Centaur warfighting: the false choice of humans vs. automation', Temple International and Comparative Law Journal, 30(1), 151–65.
Sharkey, A. (2018), 'Autonomous weapons systems, killer robots and human dignity', Ethics and Information Technology, 12(3), 277–88.
Sharkey, N. (2016), 'Staying in the loop: human supervisory control of weapons', in N. Bhuta, S. Beck and R. Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 23–38.
Singh Gill, A. (2018), 'Provisional agenda', submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 16 March (UN Doc. CCW/GGE.1/2018/1).
Sparrow, R. (2016), 'Robots and respect: assessing the case against autonomous weapons systems', Ethics and International Affairs, 30(1), 93–116.
Szegedy, C., W. Zaremba and I. Sutskever et al. (2014), 'Intriguing properties of neural networks', arxiv.org, 19 February.
Tamburrini, G. (2016), 'On banning autonomous weapon systems: from deontological to wide consequentialist reasons', in N. Bhuta, S. Beck and R. Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 122–41.
Ulgen, O. (2018), 'Definition and regulation of LAWS', technical report submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 5 April.
UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) (2017), 'Report on robotics ethics', 14 September (UN Doc. SHS/YES/COMEST-10/17/2 REV).
United Kingdom (UK) (2018), 'Human machine touchpoints: the United Kingdom's perspective on human control over weapon development and targeting cycles', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 8 August (UN Doc. CCW/GGE.2/2018/WP.1).
United Nations (UN) Secretary-General (2018), 'Remarks at "Web Summit"', 5 November, accessed 9 July 2020 at https://www.un.org/sg/en/content/sg/speeches/2018-11-05/remarks-web-summit.
United Nations Institute for Disarmament Research (UNIDIR) (2014), 'The weaponization of increasingly autonomous technologies: considering how meaningful human control might move the discussion forward', UNIDIR Resources, No. 2.
United States (US) (2018a), 'Human–machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 28 August (UN Doc. CCW/GGE.2/2018/WP.4).
United States (US) (2018b), 'Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems', working paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapons of the CCW, Geneva, 28 March (UN Doc. CCW/GGE.1/2018/WP.4).
US Air Force (1976), 'International law – the conduct of armed conflict and air operations', Pamphlet 110-31, November.
US Air Force Chief Scientist (2015), 'Autonomous horizons: system autonomy in the Air Force – a path to the future. Vol. I: Human autonomy teaming', AF/ST TR 15-01, June.
US Department of Defense (2012), 'Directive 3000.09, 21 November 2012: Autonomy in weapons systems'.
US Department of Defense (2016), Law of War Manual, 13 December, Washington, DC: Office of General Counsel.
PART II
Robotics and artificial intelligence: frontiers and challenges
4. Context matters: the transformative nature of drones on the battlefield
Sarah Kreps and Sarah Maxey
1 INTRODUCTION
In February 2016, President Obama said that drones 'may play a transformative role in fields as diverse as urban infrastructure management, farming, public safety…and disaster response' (McNeal, 2015). He sidestepped the specifics of whether drones had been transformative on the battlefield, but his policies made the point clear. Between 2009 and 2016, President Obama authorized about 473 drone strikes outside areas of active hostilities, including in Yemen, Pakistan and Somalia. For some observers, this pattern of use is revolutionary. Amy Zegart (2015) of Stanford University stated that 'Drones are going to revolutionize how nations and nonstate actors threaten the use of violence'. Indeed, the technology has appeared alarming enough to prompt calls for an outright ban (Swanson, 2013). For others, however, drones are just another technology. As former US Secretary of Defense Leon Panetta (2014, p. 388) states in his memoirs, 'To call our campaign against Al Qaeda a "drone program" is a little like calling World War I a "machine gun program". Technology has always been an aspect of war…what is most crucial is not the size of the missile or the ability to deploy it from thousands of miles away' but how the munitions are used.
This chapter evaluates these two competing perspectives about drones: the first that drones are so transformative that they should be banned; the second that they are just another technology. It does so by outlining these two perspectives in more detail. It then argues, based on an evaluation of drones in five major contexts – counterterrorism, interstate, intrastate, humanitarian and non-state actors – that the reality is more nuanced than either perspective suggests. Current-generation drones stand to be transformative for counterterrorism operations and domestic control in authoritarian regimes by lowering the cost of using force. In humanitarian operations, drones have significant, untapped transformative potential because the lowered cost of
force makes rapid responses more feasible. Additionally, in these operations the relevance of drones' surveillance and aid delivery capabilities extends opportunities for transformation to a wide range of state and non-state actors. For interstate and intrastate wars, however, drones are more likely to be just another technology. The chapter closes by discussing implications, including the way future drones – stealthier, speedier, swarming, or smaller – might change the battlefield.
2 TWO VIEWS OF DRONES
In the last several years, the debate about drones has moved from a discussion of whether the United States' use of drones is ethical, legal and effective, to one that looks at the broader consequences of this technology in a world of proliferation. The evolution of the debate makes good sense insofar as the United States, once the preponderant user of drones on the battlefield, is no longer hegemonic. Indeed, in the last decade, the United States was the most prolific user of armed drones, deploying them not just in battlefield contexts such as Afghanistan and Iraq, but more controversially in areas where the United States was not technically involved in active hostilities, such as Pakistan, Yemen and Somalia. In the last few years, a number of countries have announced their plans to catch up. The UK, Israel, Pakistan, Nigeria and Iraq have used armed drones in combat, while China and Iran possess armed drones that they have not yet used in combat. A number of other countries, including Russia, India, Saudi Arabia and the United Arab Emirates, have stated that they are trying to import or indigenously produce armed drones. These developments have created fertile ground for debating how the proliferation of armed drones would affect regional and international security.
Participants in this debate can be divided into two general camps based on how they interpret the implications of continued drone proliferation. The first perspective is quite pessimistic. According to this view, drones lower the cost of using force because their use imposes no risks on the operator's own forces. United Nations Special Rapporteur Christof Heyns noted that 'drones make it not only physically easier to dispatch long-distance and targeted armed force, but the proliferation of drones may lower social barriers in society against the deployment of lethal force and result in attempts to weaken the relevant legal standards' (Kreps, 2016, p. 66). Lowering the threshold for using force would mean that states are more likely to use drones. Proponents of this view point to the frequency with which the United States has carried out drone strikes, suggesting that the number of targeted killings would be a fraction of the total number in the absence of this technology.
Proliferation to other countries, according to this view, would be undesirable because of the adverse consequences for regional and international security. In regions such as East Asia and the Middle East, which are already prone to conflict and also populated with countries that either already have or are trying to acquire drones, the consequences would then be particularly destabilizing (Zenko and Kreps, 2014, p. 12).
The second perspective is more optimistic. Observers in this camp see drones as just another platform, in which case the prospect of drone proliferation does not pose more serious risks than if, for example, countries acquired more manned platforms such as the F-16. In 2012, General Norton Schwartz, chief of staff of the US Air Force, summarized the position that drones are just another platform when he said that 'if it is a legitimate target, then I would argue that the manner in which you engage that target, whether it be close combat or remotely, is not a terribly relevant question' (ibid., p. 8). In other words, the way drones are used is more relevant than the fact that they are drones, which is why some observers have called for the banning of targeted killings rather than the platform itself. Not only are drones merely the platform, according to this view, they are actually a less capacious one – an additional reason why the prospect of proliferation should not cause concern. Drones do not win wars, cannot hold territory and have not produced decisive victories for the United States, either tactically or strategically (Horowitz, Kreps and Fuhrmann, 2016, p. 16). They fly low and slow and are therefore especially vulnerable in a contested environment with sophisticated air defences. As Gilli and Gilli (2016) note, the prospect of proliferation is not only not destabilizing; it is also unlikely. Advanced armed drones are expensive and the technology is likely out of reach for all but the most advanced militaries.
3 A THIRD WAY ON DRONE PROLIFERATION: CONTEXT MATTERS
While these two perspectives offer a valuable starting point in the debate about drone proliferation and provide key insights, they are both incomplete. For example, the pessimistic perspective captures the intuition as to why drones might lower the threshold for the use of force. Theoretical accountability linkages hinge on the connection between the burden citizens bear in terms of blood and treasure and their ability to hold leaders responsible. Insofar as drones eliminate all burden in blood, at least to the side using them, the populace has few incentives to rein in the government's use of this technology. It is for these reasons that scholars have suggested that the post-9/11 conflict has continued without obvious limits (Goldsmith and Waxman, 2016; Kaag and Kreps, 2013).
On the other hand, optimists are correct to point out the limitations of drones. As one US Air Force general described them, drones are 'useless in a contested environment' in which a country would face advanced air defence systems (Majumdar, 2013). Moreover, it is not clear that drones would offer operational advantages compared to a more capacious manned fighter aircraft, the A-10 for close air support, or even helicopters, further limiting the transformative nature of the technology (Horowitz et al., 2016, pp. 17–18).
Drawing on some of these insights and observations about the use of drones thus far, this analysis argues that whether drones are transformative depends on the strategic context in which they are used. We draw on Horowitz et al.'s (2016) analysis of the contexts in which drones might be game changers, elaborating on several contexts and adding the setting of humanitarian intervention. Table 4.1 summarizes the contexts and our conclusions about the transformative potential of drones.

Table 4.1  Contexts and transformative potential of drones

Context                        Consequences for Current-generation Drone Proliferation
Counterterrorism operations    High
Domestic control/repression    High
Humanitarian operations        High – unarmed; Moderate – armed
Use by non-state actors        Moderate
Interstate wars                Low
Intrastate wars                Low
3.1 Counterterrorism
The United States' experience with drones in the service of counterterrorism points to the ways in which other countries are likely to find drones attractive as well. President Obama has often said that drones kill the individuals who are trying to kill Americans. Or as former CIA Director Michael Hayden (2016) put it in an editorial, 'to keep America safe, embrace drone warfare'. Indeed, even critics of drone strikes acknowledge that 'they can protect the American people from attacks in the short term' (even if creating blowback in the longer term) because they can eliminate militants suspected of planning attacks (Cronin, 2013, p. 44). Insofar as the president of a democracy thinks short term – indeed, President Obama was said to think ten days ahead, not ten years ahead (Savage, 2016) – drone strikes are likely to be politically appealing. Since there are no risks to the pilot and no
body bags, democratic leaders can generate tremendous political dividends by eliminating these individuals while not causing domestic backlash from a population put off by its own casualties. A number of American leaders have conceded that other countries look to American experiences with drones as precedent for their own behaviour. CIA Director John Brennan (2012) observed in a speech that ‘we are establishing precedents that other nations may follow…and not all of them will be nations that share our interests or the premium we put on protecting human life, including innocent civilians’. In her research on drones and targeted killing, Rosa Brooks (2013) lamented that ‘instead of articulating norms about transparency and accountability, the United States is effectively handing China, Russia and every other repressive state a playbook for how to foment instability and – literally – get away with murder’. Already, Israel has frequently used drones for the purposes of counterterrorism – for example, with a drone strike in Sinai in 2014 that targeted militants suspected of trafficking weapons from Egypt into Gaza (Al-Monitor, 2014). The United Kingdom has also used drones for counterterrorism, with its most visible strike occurring in August 2015 when it used a Reaper to kill a British citizen in Syria. Countries such as Russia and China, with their domestic ‘terrorist’ situations, would also likely find drones attractive for counterterrorism. China has already considered using drones to kill a suspected drug trafficker who was in hiding across the border in Myanmar, with the order rescinded because Chinese officials valued capturing the individual for intelligence purposes. With its opposition movement in Chechnya, or even across borders in Georgia or Ukraine, Russia could conceivably label groups ‘terrorists’ and use force, drawing on the US experience (Zenko and Kreps, 2014, pp. 10–11). To be sure, countries may still experience limits to being able to carry out these strikes. For example, Russia’s armed drone programme has thus far met with dismal failure. Its early efforts at developing a medium altitude, long-endurance drone like the Predator crashed and burned as it embarked on its first flight (Rawnsley, 2011). It therefore does not have longer-range attack drones but instead operates small tactical drones largely for surveillance until it can develop or import a strategic capability (Bendett, 2016). Longer-range drone strikes might therefore face technological limitations that have not been imposed on the US because of the latter’s large investment in armed drones, its wide array of military bases and its sophisticated systems integration that enabled the technical successes of its drone programme. Countries such as Germany also have domestic political environments poorly suited to the development of armed drones, let alone their deployment across borders for the purposes of counterterrorism (Kreps and Zenko, 2014, p. 68). These examples suggest that while the United States may have established precedents highlighting the politically appealing nature of drone strikes, some countries
will be both able and willing to follow in its footsteps while others will not. Nonetheless, the United States' experience is illustrative of the potentially transformative nature of drones for counterterrorism.
3.2 Interstate Conflict
If drones have been transformative in counterterrorism, they are likely to be far less relevant for interstate conflict. One argument might suggest that in regions such as East Asia and the Middle East, ripe for conflict, the introduction of a new weapon system with uncertain rules of engagement might be a recipe for misperception and escalation. The prospect is certainly plausible. How should states respond to another side’s counter-drone tactics? Should they shoot down a drone in their airspace? And what is the appropriate (or legal) counter-counter-drone tactic? The uncertainty in this regard is certainly not helpful in contexts such as East Asia where states do have drones and are already disputing territory (Zenko and Kreps, 2014, pp. 11–12). On the other hand, at least current-generation drones are likely to have low operational value in these same regions because of the sophisticated air defences on each side. Since drones fly low and slow, they would be easily picked off by air defences. In fact, as will be discussed in Section 4, small drones that can sneak through air defences might actually be more useful, although they would have a lower payload, cancelling out many of their benefits. The drones that might be more useful are surveillance and reconnaissance drones, which could help create transparency and reduce misperceptions, thus making conflict less likely. The United States’ sale of the Global Hawk to Japan therefore made good sense in this regard. Taken together, the vulnerability of drones makes them less valuable in an interstate context compared to a counterterrorism context such as Waziristan, which would be bereft of air defences. The British experience with drone strikes in Iraq and Syria is indicative of these vulnerabilities. Although the conflict in Iraq and Syria is not a traditional interstate conflict, it does bear certain similarities – for example, the presence of Syria’s somewhat advanced air defence systems. Indeed, in March 2015, Syria allegedly shot down an American Predator. Data on strikes by the UK’s Royal Air Force in 2015 reflect the fact that drones have not been the proverbial silver bullet in the Middle East. In 2015, the Royal Air Force carried out a total of 527 airstrikes in Syria and Iraq, of which less than 40 per cent were with the Reaper, compared with 61 per cent with the Tornado or Typhoon. Data for the United States are less available but are likely similar. These data would also overstate the likely US reliance on drone strikes in an interstate conflict involving a country with a more sophisticated military such as China.
This is not to say that current-generation drones are not valuable in an interstate context. Indeed, they already have an important role to play in the logistics operations that are crucial to supporting military operations. In 2011, the US Marines used an unmanned helicopter to deliver supplies to a base in Afghanistan, marking the first time that a drone had been used for resupply. The impact of this type of logistics application is enormous. It is more cost effective because troops do not have to drive supplies to far-flung locations and, most importantly, it helps with force protection. Robotic jeeps, which can be airlifted to remote patrol bases, can also serve these resupply functions (Reed, 2011). While these functions are helpful in a battlefield context insofar as they save blood and treasure, they are less directly transformative than the types of drones involved in lethal force, playing more of a support role (even if an important one).
3.3 Intrastate Conflict and Domestic Repression
Between 2015 and 2016, three states began using drones in an intrastate context. Iraq, Pakistan and Nigeria all appear to have used some version of Chinese-made drones to fight domestic insurgent populations on their territory. While this may suggest some utility to the use of drones in intrastate settings, there is another side to the story. Each of these countries had been involved in a conflict against domestic militant groups for several years before acquiring drones, which at least suggests that drones were not the reason these governments resorted to military force. Pakistan, for example, had been involved in a counterinsurgency campaign since 2004 (Al Jazeera, 2014) and had killed about 3400 militants (while losing 488 soldiers). Only in the fall of 2015 did it begin using what appeared to be a Chinese-made or Chinese-assisted drone. Drones did not cause the counterinsurgency, but simply allowed Pakistan to continue its campaign in the federally administered tribal areas without having to deploy as many infantry or manned aircraft to do so. In its initial strikes with the newly acquired drone, 'several' Taliban militants were killed compared to 22 in a previous airstrike that month, suggesting that the drone was in fact less capable than its manned counterpart. Much of the fight against the Taliban is also carried out by the Pakistani army, suggesting an additional limitation to the use of airpower in this context, where the goal is to clear or hold territory. Air strikes with the use of drones appear to do more to 'soften' targets before army components mount ground offensives in the tribal regions (The Express Tribune, 2015). The Nigeria case tells a similar story. For many years, Nigeria fought Boko Haram with ground forces rather than its Air Force. The Obama administration resisted exporting even manned aircraft such as the Super Tucano to Nigeria because of its human rights abuses. The introduction of Chinese-made
drones in the fall of 2015 made it possible for Nigeria to carry out air strikes, a valuable if not game-changing development (Gaffey, 2016). The experience in Nigeria suggests that conflict was possible with and without drones. Once available, drones simply became another tool in its toolkit for fighting Boko Haram. The intrastate setting where drones might be more useful, even transformative, is that of domestic repression, particularly by authoritarian governments. One of the obvious barriers facing authoritarian leaders when they consider using force against their domestic dissidents is that they must find ways to co-opt the military into firing on citizens. Authoritarian leaders also tend to lack efficient forces for defeating these groups. Drones could be a useful instrument in this context. As Deputy Secretary of Defense Robert Work (2015) suggested, 'authoritarian regimes who believe people are weaknesses in the machine, that they are the weak link in the cog, that they cannot be trusted…they will naturally gravitate towards totally automated solutions'. Drones could provide more automated solutions for cracking down on dissident groups without having to enlist an entire army that might actually be sympathetic to the dissidents rather than to the leader himself.
3.4 Humanitarian Operations
In addition to domestic repression, drones also have the potential to transform humanitarian operations directed at either interstate or intrastate conflicts. Humanitarian operations focus primarily on protecting civilians by stopping human rights abuses or by safeguarding a tenuous peace. These operations can take the form of humanitarian interventions – in which states deploy military force across borders to coerce the target state to end egregious human rights abuses (Finnemore, 2003, p. 53; Pattison, 2010, p. 28). They also include peace operations, which pursue the broad objectives of protecting international peace and security with the consent of the target state and generally without resorting to the use of force (United Nations, 2008). This subsection considers the distinct and transformative potential of humanitarian drones in three steps. First, it highlights the political and logistical consequences of the central role that domestic and target state consent plays in humanitarian operations. Politically, consent reduces concerns about lowered barriers to military action. Logistically, it mitigates the risk that low- and slow-flying drones will be targeted by air defence systems. Second, in both humanitarian interventions and peace operations, the relevance of surveillance and aid-delivery capabilities makes even unarmed drones game-changing tools. Third, the remainder of the subsection considers recent attempts at and challenges to humanitarian drone innovation.
Turning first to the role of domestic and target state consent: humanitarian interventions aim to protect foreign civilians and often occur in regions where the sender state has no clear security or economic interests. As a result, bolstering the domestic political will for action presents a challenge. Journalist and former US Ambassador to the United Nations Samantha Power (2002, p. xviii) attributed US inaction in humanitarian crises to the difficulty of domestic mobilization, noting: 'It is in the realm of domestic politics that the battle to stop genocide is lost. American political leaders interpret society-wide silence as an indicator of political indifference. They reason that they will incur no costs if the United States remains uninvolved but will face steep risks if they engage'. A growing body of evidence suggests that domestic audiences feel a sense of moral obligation in the face of human rights abuses and are willing to support humanitarian interventions (Kreps and Maxey, 2018). Even when public support is possible, however, mobilizing the political will to intervene requires considerable time and effort. On the one hand, leaders may be hesitant to expend political capital to generate support for interventions that do not address clear national interests, especially if they believe the public will be unwilling to tolerate any costs or casualties. This was the case in Rwanda in 1994 where, following the loss of 18 US Army Rangers in Somalia the previous year, the Clinton administration did not attempt to mobilize domestic support for action to prevent mounting atrocities. On the other hand, even when leaders intend to intervene, bolstering support for sustained operations takes time. During this time attacks on foreign civilians continue unchecked – each day that atrocities continued in Rwanda saw the death toll increase by thousands (Power, 2002, p. 353). Drone technology is well poised to address both of these domestic challenges. First, by taking soldiers out of the equation, drones significantly reduce the costs and risk of casualties involved in humanitarian operations. Second, because low-risk, low-cost operations attract less domestic oversight, unmanned technologies facilitate rapid responses to emerging humanitarian crises, making it possible for states to act before it is too late. As Beauchamp and Savulescu (2013, pp. 110–11) argue, 'drones will both (1) make it more likely that states launch justified humanitarian interventions; and (2) improve the conduct, morally speaking, of such interventions once launched'. Recent cases of humanitarian intervention suggest states recognize the potential benefits of incorporating drones into these operations. The US, for example, relied on Predator drones in its 2011 intervention in Libya aimed at protecting civilians from government attacks. In this case, drones provided precision capabilities for targeting government forces in civilian areas (Shanker, 2011). Because states rarely engage in humanitarian interventions against evenly matched adversaries, the target states in these cases are less likely to have
sophisticated air defence systems capable of shooting down unmanned aerial vehicles (UAVs) at an effective rate. In sum, drone technology is transformative in this context because it places new humanitarian options on the table where timing and anticipated costs previously made inaction the default. In contrast to humanitarian interventions, peacekeeping operations aim to enforce or protect – rather than create – an existing peace. To this end, they are most often tasked with implementing existing peace agreements (Howard, 2015). Peace operations led by the United Nations are governed by three interrelated principles: (1) consent of the parties; (2) impartiality; and (3) non-use of force except in self-defence and defence of the mandate (United Nations Peacekeeping, n.d.). States that consent to peace operations thus consent to the use of military force within their borders to meet the limited objectives of the peacekeeping mandate. Target state consent magnifies the potential utility of drone technology because it assuages concerns about air defences. As a result, drones used in the context of peacekeeping operations have a strong chance of completing their mission. Turning next to the utility of unarmed humanitarian drones: in both humanitarian interventions and peacekeeping operations, the surveillance and aid-delivery capabilities of unmanned technology are as transformative as – if not more transformative than – their armed capabilities. Drones have a long track record in this context, dating back to the US deployment of the Gnat 750 as part of its humanitarian intervention in Bosnia in 1994 (Sandvik and Lohne, 2014, pp. 151–5). The North Atlantic Treaty Organization (NATO), the European Union and the UN have since relied on drones to provide surveillance in support of their peace operations, beginning with their 2006 use in the UN Organization Mission in the Democratic Republic of the Congo (MONUC, now the Stabilization Mission, MONUSCO) (Sandvik and Lohne, 2014). Drones with surveillance capabilities help troops and peacekeepers gather up-to-date information in hard-to-reach areas. Information collected by drones may include troop movements, crisis mapping to provide early warnings of violence and the conditions of routes used by aid convoys. The wide-ranging benefits of this technology are noted by former UN Under-Secretary-General for Peacekeeping Operations Hervé Ladsous: 'UAVs do a better job in protecting civilians because they provide real-time pictures of situations as they develop on the ground. You can act more quickly and decisively. They also provide better security to our people because you get prior warnings that an ambush or an attack is about to happen' (Tafirenyika, 2016). UAVs are also increasingly invaluable tools for responses to natural disasters and have been used to map damage and plan rescues following hurricanes, floods and earthquakes, including use by the International Atomic Energy Agency to monitor radiation during the Fukushima nuclear disaster in 2011 (United Nations News, 2017).
In addition to surveillance, drones also have the potential to transform the delivery of humanitarian aid. Each necessary point of contact in the delivery chain can delay or intercept aid intended for vulnerable populations. Drones can streamline this delivery process, reducing the number of steps between agencies and recipients and bypassing treacherous points in the supply chain (Chow, 2012). By providing an alternative to aid convoys, drone deliveries also mitigate risks to aid workers. If the delivery of aid does not require agencies to send their staff into dangerous locations, humanitarian operations can continue where violence once forced relief teams to retreat (ibid.). UN agencies, as well as states and non-governmental organizations, are invested in and actively working to expand drones' aid-delivery capacity. The UN Children's Fund (UNICEF) is currently working with a number of partners – including the World Food Programme (WFP) and the Office of the UN High Commissioner for Refugees (UNHCR) – to encourage innovative uses of the technology across agencies (United Nations News, 2017). In June 2017, UNICEF partnered with Malawi to launch an air corridor committed to testing humanitarian applications of UAVs (UNICEF, 2017). The corridor is designed to promote innovation in three areas: (1) imagery, including the collection and use of aerial images during humanitarian crises; (2) connectivity, focused on extending Wi-Fi and cell phone service; and (3) transport, including the delivery of low-weight supplies (ibid.). Such initiatives demonstrate that the potential applications of drone technology in humanitarian operations are both extensive and recognized by public and private actors. Finally, turning to future opportunities for innovation and likely challenges: drones have transformative potential in humanitarian operations because they make it possible for states to initiate or sustain humanitarian action where the costs and risks of operations were previously prohibitive. The scope of this transformation is amplified by the fact that – compared to advanced armed drones that are out of reach for most states – unarmed drones with surveillance and delivery capabilities are available to a wider range of actors. As UNICEF's humanitarian corridor in Malawi demonstrates, drone technology has the potential to reshape the humanitarian operations of states, international and non-governmental organizations and private donors alike. The vast and largely untapped humanitarian potential of drones is acknowledged by both the humanitarian sector and the drone industry itself. For the industry, promoting humanitarian functions carries the dual benefit of expanding the market while increasing its legitimacy (Sandvik and Lohne, 2014, p. 150). As a result, public–private partnerships focused on humanitarian innovation are likely to grow over time. Despite this transformative potential, the use of drones in humanitarian operations faces two sets of challenges. First, for non-state actors, the costs and importance of forming relationships with governments
create obstacles to the widespread implementation of drone-based aid delivery (Regan, 2016). Second, proponents of humanitarian drones within the UN must square their enthusiasm for the technology's aid and surveillance capabilities with UN human rights experts' criticism of the lethal use of drones (United Nations News, 2017). To date, UN officials have reconciled these two positions by emphasizing that UAVs used in peace operations are for surveillance purposes only, are not intended to replace attack helicopters and will remain unarmed (Tafirenyika, 2016). However, the tension between these positions highlights the need for clear definitions and regulations (Sandvik and Lohne, 2014, p. 154).
3.5 Non-state Actors
The last major strategic context to consider is that of violent non-state actors, particularly groups or individuals who would consider carrying out a terrorist attack using drones. On some level, a legitimate question might be 'why would militant groups need them' when they have other, less expensive devices such as nail bombs and explosives (Horowitz et al., 2016, p. 34)? One reason is that drones, like suicide terrorists, can wait and target densely populated areas in ways that a vehicle cannot. Similarly, an improvised explosive device risks going off at a time when it does not do significant damage. A drone operator, by contrast, can wait and then guide the drone to do the most damage possible. In a battlefield context, this could mean flying into an enemy's territory and detonating even rudimentary explosives. In October 2016, the Islamic State did just that, taking a very basic drone, retrofitting it with explosives and killing two Kurdish soldiers in the attack. The emerging battlefield threat has reached the highest levels of the US Department of Defense and Central Intelligence Agency, with funds requested for counter-drone measures (Schmidt and Schmitt, 2016). Some countries are already investing in counter-drone systems. China has developed a laser weapon system designed to shoot down small drones flying at low altitudes. Chinese officials were particularly worried about the security risks associated with small drones, as they are cheap and easy to use (making them ideal for terrorists). Their laser has roughly a 1 km range and is effective against UAVs flying at speeds of up to 50 m/s. In testing, the system shot down 30 drones, a 100 per cent success rate (The Guardian, 2014). Similarly, in 2013, the US military successfully tested a truck-mounted laser, also designed to lock onto and shoot down small UAVs (Panda, 2014). If these drones are potentially lethal on the battlefield, they can also do damage in a civilian context and for similar reasons: an operator can manoeuvre the drone to do the most possible damage. In its most extreme incarnation, Senator Dianne
Feinstein envisioned drones as 'the perfect assassination weapon' (Zenko and Kreps, 2014, p. 12). Already, there are concerns about the ways in which rudimentary drones could be used in the service of terrorism. A recent headline pointed to the alarming prospect that the Islamic State would be able to produce mustard gas and use drones to launch a chemical attack on Western countries. Ingredients for the bombs are not difficult to obtain and the type of drone onto which these could be fitted would not have to be sophisticated (Dinham, 2016). Already, the Islamic State has shown that it can weaponize a small 'civilian' drone with grenades. It used the drone to hover over Iraqi forces and then dropped the grenade, demonstrating the ability – at low cost and sophistication – to kill a handful of enemy soldiers on a battlefield or civilians at home (Delany, 2016). In September 2013, German Chancellor Angela Merkel also encountered this possibility when a drone landed on the platform in front of her during a campaign event. The drone was operated by the German Pirate Party, which wanted to make a political point about the unsettling nature of drone surveillance. The outcome was benign but, as the Islamic State has shown, it is not difficult for other groups with malicious intentions to retrofit explosives onto a drone and cause damage. These types of manoeuvres would be almost impossible to prevent. In 2015, a government employee crashed a hobby drone onto the lawn of the White House, showing just how easy it could be to circumvent the usual security restrictions that would obstruct either people or vehicles. The maker of the quadcopter subsequently added geofencing software (Poulsen, 2015) to prevent its drones from flying around the White House, but there are countless scenarios in which drones could nonetheless be used to get close to or target high-value individuals, as the case involving Chancellor Merkel illustrates.
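Geofencing of this kind is conceptually straightforward. The sketch below is a minimal illustration only – the coordinates, radius and function names are our own assumptions, not taken from any manufacturer's actual firmware: a flight controller compares each GPS fix against a stored no-fly circle and inhibits flight inside it.

```python
import math

# Hypothetical no-fly zone; the centre and radius are illustrative values only.
NO_FLY_CENTRE = (38.8977, -77.0365)   # latitude, longitude
NO_FLY_RADIUS_M = 25_000              # assumed 25 km exclusion circle

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_no_fly_zone(lat, lon):
    """Return True if a GPS fix falls inside the restricted circle."""
    return haversine_m(lat, lon, *NO_FLY_CENTRE) <= NO_FLY_RADIUS_M

# A flight controller would poll this check and refuse to arm the motors
# (or force a landing) whenever it returns True.
if inside_no_fly_zone(38.90, -77.03):
    print('takeoff inhibited: inside geofenced area')
```

As the Merkel and White House episodes suggest, such checks bind only compliant firmware: geofencing is a software convention, not a physical barrier.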
4 FUTURE DEVELOPMENTS
The preceding sections suggest that claims about the transformative nature of drones are best examined across varying strategic contexts. Drones are unlikely to be transformative in interstate and intrastate wars. However, they are more likely to be influential in the contexts of counterterrorism (as the United States has shown), domestic repression by authoritarian regimes, humanitarian operations, and use by non-state actors interested in carrying out strikes on battlefields and domestic terrorist attacks. This evaluation is contingent on both a particular strategic context and the type of drone, which is assumed to be a low- and slow-flying drone such as the Reaper, or an even smaller, practically garage-made drone that a non-state actor could manufacture.
These conclusions could change significantly, however, based on future developments. Four developments in particular would stand to broaden the reach of drones: stealth, speed, swarms and size. A current source of vulnerability, in addition to their altitude and speed, is that smaller UAVs must be linked by radio to their controller, and these datalinks can be easily jammed and disabled. Countries such as the United States have sought to provide self-protection mechanisms, but they can only do so at a cost. The Global Hawk, for example, lacked adequate defences against Russian-made air defence systems such as the S-300 (Aviation Week, 2014). Upgrades that would bring the Global Hawk's capabilities in line with those of its manned counterpart, the U-2, were estimated to cost $1.9 billion.1 Stealth would provide an additional set of defences against both air defences and jamming, making drones more useful, but it would also be prohibitively expensive for many countries. Equipped with this technology, however, stealth drones would become more valuable in interstate contexts insofar as they would overcome the vulnerabilities that limit their current usefulness. Second, advances in the speed of drones could improve their utility, especially in an interstate context. Again, what limits the utility of drones in an interstate setting is that they are vulnerable to enemy fire, mostly because they fly far more slowly than manned fighters such as the F-16. Improvements in speed (and manoeuvrability) could address some of these limitations. In principle, the early vision for the unmanned combat aerial vehicle (UCAV) would have offered this feature (as well as being low observable2), with speeds approaching high subsonic (Mach 0.9) (McKinney, 2012). However, the US military cancelled the Navy drone that had been intended to incorporate Northrop Grumman's UCAV (X-47) proof of concept, and Boeing ended its equivalent X-45 programme in 2006, with the proof-of-concept aircraft bequeathed to a museum. Current-generation drones have neither the high speed nor the low observable features that the UCAVs would have had. More generally, the prospect of carrier landings raises the question of whether that particular feature would have game-changing consequences. While the ability to land on a carrier would increase the range of a drone and remove the need for forward operating bases, the magnitude of the advancement would have to be put in context, much as has been urged by this chapter as a whole. To be sure, the ability of a robot to land on a carrier is an impressive technological feat, one that the US Navy ranked as second only to the introduction of naval aircraft itself in 1911. However, the utility of those robots would only be as consequential as the technology itself. Without greater speed and stealth, the aircraft would still be vulnerable to air defences, a fixture in an interstate context. In addition, developing the capability would cost the United States $1.4 billion, a financial commitment that would remain out of reach for most countries (Estes, 2013). Indeed, the United States appears to have
determined that the combination of financial commitment plus the potential redundancy of an armed (if maritime) drone was not a practical investment, and instead channelled the money into a carrier-launched refuelling tanker. A Pentagon official suggested that there were insufficient funds to develop anything other than a near-term requirement in the form of tankers that would be able to free up Navy fighter aircraft to fly strike missions rather than refuelling (Freedberg, 2016). A third innovation that would increase the impact of drones is the use of swarms. Small, even unarmed, drones could be used as a swarm to overwhelm enemy air defences. While their size could imply a lack of sophistication, the ability to swarm – flying in formation over reasonably long distances – is no trivial matter. As mechanical engineer Vijay Kumar characterized the problem, 'These devices take hundreds of measurements each second, calculating their position in relation to each other, working cooperatively toward particular missions and just as important, avoiding each other despite moving quickly and in tight formations' (O'Connor, 2014). Flying this many drones in formation would require sophisticated hardware and software that would be out of reach for many countries (Gilli and Gilli, 2013); a simplified sketch of such flocking rules appears below. Nonetheless, the ability to overcome enemy air defences could create an opening for other drones, such as the Reaper, to enter airspace with less risk. Indeed, the offensive value of drones such as these is that they are almost impervious to traditional sensor systems such as the joint surveillance target attack radar system (JSTARS) that are typically oriented toward larger assets. The technology challenge arises from the fact that any sensor must be sensitive enough to detect these smaller drones but not so sensitive that it detects everything that moves (Tucker, 2014). Indeed, in the Balkan wars, JSTARS frequently mistook windmills for targets, a typical 'false positive' problem with targeting. An additional development that could confer important tactical gains is the turn to miniaturized drones. As one technology reporter observed (Neal, 2013), 'in keeping with its vision for a "smaller and leaner" military that's agile, flexible, fast and cutting-edge, the US Department of Defense will work on "miniaturizing" drones and drone weapons to make them smaller, lighter and less energy-consuming'. That comment draws on the US Department of Defense's 25-year Unmanned Aerial Vehicles Roadmap 2000–2025, which observes that miniaturization will also make the systems more affordable (ibid.). Many of the technologies mimic nature in order to 'hide in plain sight', such that they can fly through an open window, land on a wall and collect surveillance video (Kreps, 2016, p. 122). In his review of micro-drone technology, Adam Piore (2014) writes that 'until recently, inventors lacked the aerodynamics expertise to turn diagrams into mechanical versions of something as quotidian as a fly or a bee'. Now they have it.
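As flagged above, the coordination problem Kumar describes is commonly illustrated with simple 'flocking' rules: each vehicle repeatedly measures its neighbours and adjusts its velocity to stay with the group (cohesion), match its heading (alignment) and avoid collisions (separation). The toy two-dimensional simulation below is purely illustrative – every name and parameter is our own assumption, and real swarms add distributed communication, fault tolerance and far more sophisticated control laws.

```python
import random

N, NEIGHBOUR_RADIUS, MIN_SEPARATION, DT = 20, 15.0, 2.0, 0.1

# Twenty drones with random starting positions and velocities.
drones = [{'pos': [random.uniform(0, 50), random.uniform(0, 50)],
           'vel': [random.uniform(-1, 1), random.uniform(-1, 1)]}
          for _ in range(N)]

def step(drones):
    """One control cycle: every drone senses its neighbours, then updates."""
    for d in drones:
        cohesion, alignment, separation = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
        neighbours = 0
        for other in drones:
            if other is d:
                continue
            dx = other['pos'][0] - d['pos'][0]
            dy = other['pos'][1] - d['pos'][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < NEIGHBOUR_RADIUS:
                neighbours += 1
                cohesion[0] += dx; cohesion[1] += dy              # pull toward the group
                alignment[0] += other['vel'][0]; alignment[1] += other['vel'][1]
                if 0 < dist < MIN_SEPARATION:                     # push away from collisions
                    separation[0] -= dx / dist; separation[1] -= dy / dist
        if neighbours:
            for i in (0, 1):
                d['vel'][i] += (0.01 * cohesion[i] / neighbours
                                + 0.05 * alignment[i] / neighbours
                                + 0.5 * separation[i])
    for d in drones:  # integrate positions after the velocity updates
        d['pos'][0] += d['vel'][0] * DT
        d['pos'][1] += d['vel'][1] * DT

for _ in range(100):  # simulate ten seconds of flight
    step(drones)
```

Even this caricature hints at why Gilli and Gilli (2013) stress the hardware and software barriers: each drone must run such a loop many times per second from noisy, local measurements.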
Because of their small size, miniaturized drones would not have a direct lethal impact, but they would be quite effective for surveillance. They are also virtually unstoppable, as it would be nearly impossible to regulate their use, despite animated calls for just that (Ball, 2013). Indeed, the Missile Technology Control Regime (MTCR), founded in 1987, was intended to limit the proliferation of nuclear-capable missiles and related technologies, and considered drones only as an afterthought in an annex. It also included arbitrary thresholds for prohibited export items, considering Category I items to be those drones whose range exceeds 300 km. However, many of the countries that would and do use drones intend to use them in settings where they would not need to exceed 300 km anyway, such as against insurgents (e.g., Pakistan or Nigeria). This threshold also excludes many non-state actors that would use very small armed drones – drones that would not do massive amounts of damage but could still be lethal.
5 CONCLUSION
Drones and other unmanned platforms lower both the costs and risks associated with the use of force. While the use and proliferation of drones have garnered widespread attention, there remains considerable debate over the extent to which this technology is transformative – enabling actors to successfully use force in ways that would not otherwise be possible. On the one hand, drones could revolutionize warfare by extending strikes beyond recognized areas of active hostilities and freeing leaders and governments from the constraints imposed by public opinion. Alternatively, drones may represent just another technology in states' toolkits, not distinctively different from previous developments. These competing perspectives generate divergent interpretations of the consequences of drone proliferation. If drones are uniformly transformative and their proliferation is dangerous for international and regional security, then states would be well served to restrict drone proliferation. If they are merely another platform, then countries such as the United States should actually capitalize on the prospect of drone exports and sell the technology more liberally. As this discussion suggests, the question of whether drones are transformative requires a more discerning treatment than most analyses have offered thus far. When the question is subjected to greater scrutiny, it becomes clear that the impact of drones depends on the context in which they are used. While offering nuance, this analysis also makes it more difficult to generate policy recommendations. The implication of this 'third way' is that countries that tend to be producers of drones should consider the trade-offs in the specific context in which drones might be used. Countries such as the United States could consider exporting drones more liberally to trusted allies with histories of democratic rule, as it has done with Italy and France (Vilmer, 2017; Zenko and Kreps, 2014, p. 24).
It could also consider exporting even unarmed drones in greater numbers to countries involved in international or regional peacekeeping efforts. But it should also consider the implications of drones that are not currently regulated at all under the international regime that controls drone proliferation – the smaller drones that fall below the MTCR's thresholds. Whether and how those drones could be regulated has not been given sustained scrutiny, nor has the question of whether countermeasures would instead be better suited to addressing smaller drones. As drones move from the current generation to future generations of stealthier, faster, swarming and smaller drones, these questions will warrant more careful examination. While the analysis presented in this chapter focused on the development and proliferation of drones, our call to consider the context of the transformation extends to new technologies more broadly. Autonomous weapon systems, in particular, have the potential to carry the appeal of unmanned technology a step further. Unlike drones, which are still piloted remotely, autonomous weapon systems further reduce the risks and costs of action because 'once activated, [they] are designed to select and engage targets not previously designated by a human' (Horowitz, 2016, p. 27). As autonomous weapons and the wide range of emerging technologies – including recent headline grabbers such as hypersonic weapons (Boyd, 2019; Mehta, 2018) – evolve, evaluating their utility across contexts will be key to understanding the implications of proliferation for the battlefields of the future.
NOTES
1. This includes a camera that has a wider panorama than the sensors currently on the Global Hawk, as well as an airborne electro-optical sensor that can survey seven parts of the spectrum. See Seth Robson (2014).
2. Low observable (LO): technology used to make personnel, aircraft, ships, submarines, satellites and so on less visible to radar, infrared, sonar or other detection methods.
REFERENCES
Al Jazeera (2014), 'Pakistan army "kills dozens of Taliban"', 23 December, accessed 23 May 2019 at https://www.aljazeera.com/news/asia/2014/12/pakistan-army-kills-dozens-fighters-20141219102451610876.html.
Al-Monitor (2014), 'Did an Israeli drone strike militants in Egypt?', 5 August, accessed 23 May 2019 at https://www.al-monitor.com/pulse/originals/2014/08/israel-involved-attacks-against-armed-groups-sinai.html.
Aviation Week (2014), 'U-2 has the edge over Global Hawk', 10 March, accessed 23 May 2019 at https://aviationweek.com/awin/u-2-has-edge-over-global-hawk.
Ball, James (2013), 'Drones should be banned from private use, says Google's Eric Schmidt', The Guardian, accessed 23 May 2019 at https://www.theguardian.com/technology/2013/apr/21/drones-google-eric-schmidt.
Beauchamp, Zack and Julian Savulescu (2013), 'Robot guardians: teleoperated combat vehicles in humanitarian military intervention', in Bradley Jay Strawser (ed.), Killing by Remote Control: The Ethics of an Unmanned Military, Oxford: Oxford University Press, pp. 106–25.
Bendett, Samuel (2016), 'Russia's rising drone industry', The National Interest, 27 July, accessed 23 May 2019 at https://nationalinterest.org/blog/the-buzz/russias-rising-drone-industry-17146.
Boyd, Iain (2019), 'U.S., Russia, China race to develop hypersonic weapons', CBS News, 1 May, accessed 23 May 2019 at https://www.cbsnews.com/news/us-russia-china-race-to-develop-hypersonic-weapons/.
Brennan, John O. (2012), 'The efficacy and ethics of U.S. counterterrorism strategy', The Woodrow Wilson Center, 20 April, accessed 23 May 2019 at https://www.wilsoncenter.org/event/the-efficacy-and-ethics-us-counterterrorism-strategy.
Brooks, Rosa (2013), 'The constitutional and counterterrorism implications of targeted killing: hearing before the S. Judiciary Subcomm. on the constitution, civil rights and human rights, 113th Cong., April 23, 2013', The Scholarly Commons, accessed 23 May 2019 at https://scholarship.law.georgetown.edu/cong/114/.
Chow, Jack C. (2012), 'Predators for peace: drones have revolutionized war. Why not let them deliver aid?', Foreign Policy, 27 April, accessed 23 May 2019 at https://foreignpolicy.com/2012/04/27/predators-for-peace/.
Cronin, Audrey Kurth (2013), 'Why drones fail: when tactics drive strategy', Foreign Affairs, 92(4), 44–54.
Delany, Max (2016), 'Hand grenade drone adds to IS arsenal around Mosul', Agence France Presse, 14 November.
Dinham, Paddy (2016), 'ISIS fighters returning from Syria and Iraq "are capable of using drones to carry out a chemical weapon attack in Britain"', Mail Online, 4 December, accessed 23 May 2019 at https://www.dailymail.co.uk/news/article-3998532/Isis-use-drones-chemical-weapon-attack-Britain.html.
Estes, Adam Clark (2013), 'The X-47B drone has landed on a carrier and war may never be the same', Gizmodo, 10 July, accessed 23 May 2019 at https://gizmodo.com/the-x-47b-drone-has-landed-on-a-carrier-and-war-may-ne-733010880.
Finnemore, Martha (2003), The Purpose of Intervention, Ithaca, NY: Cornell University Press.
Freedberg, Sydney J. (2016), 'Good-bye, UCLASS; hello, unmanned tanker, more F-35Cs in 2017 budget', Breaking Defense, 1 February, accessed 23 May 2019 at https://breakingdefense.com/2016/02/good-bye-uclass-hello-unmanned-tanker-more-f-35cs-in-2017-budget/.
Gaffey, Conor (2016), 'U.S. gives Nigeria 24 armored vehicles to fight Boko Haram', Newsweek, 8 January, accessed 23 May 2019 at https://www.newsweek.com/us-gives-nigeria-24-armored-vehicles-fight-boko-haram-413124.
Gilli, Andrea and Mauro Gilli (2013), 'Attack of the drones: should we fear the proliferation of unmanned aerial vehicles?', paper presented at the American Political Science Association Annual Conference, Chicago, IL.
Gilli, Andrea and Mauro Gilli (2016), 'The diffusion of drone warfare? Industrial, organizational and infrastructural constraints: military innovations and the ecosystem challenge', Security Studies, 25(1), 50–84.
Goldsmith, Jack L. and Matthew C. Waxman (2016), 'The legal legacy of light-footprint warfare', The Washington Quarterly, 39(2), 7–21.
Hayden, Michael V. (2016), 'To keep America safe, embrace drone warfare', The New York Times, 19 February, accessed 23 May 2019 at https://www.nytimes.com/2016/02/21/opinion/sunday/drone-warfare-precise-effective-imperfect.html.
Horowitz, Michael C. (2016), 'The ethics & morality of robotic warfare: assessing the debate over autonomous weapons', Daedalus, 145(4), 25–36.
Horowitz, Michael C., Sarah E. Kreps and Matthew Fuhrmann (2016), 'Separating fact from fiction in the debate over drone proliferation', International Security, 41(2), 2–42.
Howard, Lise Morjé (2015), 'Peacekeeping, peace enforcement and UN reform', Georgetown Journal of International Affairs, 16(2), 6–13.
Kaag, John and Sarah E. Kreps (2013), 'Drones and the democratic peace', Brown Journal of World Affairs, XIX(2), 97–109.
Kreps, Sarah E. (2016), Drones: What Everyone Needs to Know, New York: Oxford University Press.
Kreps, Sarah E. and Sarah Maxey (2018), 'Mechanisms of morality: sources of support for humanitarian interventions', Journal of Conflict Resolution, 62(8), 1814–42.
Kreps, Sarah E. and Micah Zenko (2014), 'The next drone wars', Foreign Affairs, 93(2), 68–79.
Majumdar, Dave (2013), 'Air Force future UAV roadmap could be released as early as next week', USNI News, 13 November, accessed 23 May 2019 at https://news.usni.org/2013/11/13/air-force-future-uav-roadmap-released-early-next-week.
McKinney, Brooks (2012), 'Unmanned combat air system carrier demonstration (UCAS-D)', Northrop Grumman, 19 December.
McNeal, Gregory S. (2015), 'What you need to know about the federal government's drone privacy rules', Forbes, accessed 23 May 2019 at https://www.forbes.com/sites/gregorymcneal/2015/02/15/the-drones-are-coming-heres-what-president-obama-thinks-about-privacy/#1a1c56263a98.
Mehta, Aaron (2018), '3 thoughts on hypersonic weapons from the Pentagon's technology chief', Defense News, 16 July, accessed 2 May 2019 at https://www.defensenews.com/air/2018/07/16/3-thoughts-on-hypersonic-weapons-from-the-pentagons-technology-chief/.
Neal, Meghan (2013), 'The Pentagon's vision for the future of military drones', Vice, 28 December, accessed 23 May 2019 at https://www.vice.com/en_us/article/pgaywz/the-pentagons-vision-for-the-future-of-military-drones.
O'Connor, Mary Catherine (2014), 'Here come the swarming drones', The Atlantic, 31 October, accessed 23 May 2019 at https://www.theatlantic.com/technology/archive/2014/10/here-come-the-swarming-drones/382187/.
Panda, Ankit (2014), 'China develops anti-drone lasers', The Diplomat, 5 November, accessed 23 May 2019 at https://thediplomat.com/2014/11/china-develops-anti-drone-lasers/.
Panetta, Leon (2014), Worthy Fights: A Memoir of Leadership in War and Peace, New York: Penguin Books.
Pattison, James (2010), Humanitarian Intervention and the Responsibility to Protect: Who Should Intervene?, New York: Oxford University Press.
Piore, Adam (2014), 'Rise of the insect drones', Popular Science, 29 January, accessed 23 May 2019 at https://www.popsci.com/article/technology/rise-insect-drones.
Poulsen, Kevin (2015), 'Why the US government is terrified of hobbyist drones', Wired, 5 February, accessed 23 May 2019 at https://www.wired.com/2015/02/white-house-drone/.
Power, Samantha (2002), A Problem from Hell: America and the Age of Genocide, New York: Basic Books.
Rawnsley, Adam (2011), 'It's a drone's world. We just live in it', Wired, 28 November, accessed 23 May 2019 at https://www.wired.com/2011/11/drone-world/.
Reed, John (2011), 'Marines get first-ever combat resupply by drone', Defense Tech, 21 December, accessed 23 May 2019 at https://www.military.com/defensetech/2011/12/21/marines-get-first-ever-resupply-by-drone.
Regan, Michael D. (2016), 'Humanitarian efforts benefit from drones as ethical debate continues', PBS.org, 23 October, accessed 23 May 2019 at https://www.pbs.org/newshour/world/drone-use-humanitarian-aid.
Robson, Seth (2014), 'Air Force plans drone upgrade to replace U-2 planes', Stars and Stripes, 15 March, accessed 23 May 2019 at https://www.stripes.com/news/air-force-plans-drone-upgrade-to-replace-u-2-planes-1.272289.
Sandvik, Kristin Bergtora and Kjersti Lohne (2014), 'The rise of the humanitarian drone: giving content to an emerging concept', Millennium: Journal of International Studies, 43(1), 145–64.
Savage, Charlie (2016), 'Harsher security tactics? Obama left door ajar and Donald Trump is knocking', The New York Times, 13 November, accessed 23 May 2019 at https://www.nytimes.com/2016/11/14/us/politics/harsher-security-tactics-obama-left-door-ajar-and-donald-trump-is-knocking.html.
Schmidt, Michael S. and Eric Schmitt (2016), 'Pentagon confronts a new threat from ISIS: exploding drones', The New York Times, 11 October, accessed 23 May 2019 at https://www.nytimes.com/2016/10/12/world/middleeast/iraq-drones-isis.html.
Shanker, Thom (2011), 'Obama sends armed drones to help NATO in Libya war', The New York Times, 21 April, accessed 24 August 2018 at https://www.nytimes.com/2011/04/22/world/africa/22military.html.
Swanson, David (2013), '50 organizations seek ban on armed drones', Roots Action, 10 November, accessed 2 May 2019 at https://rootsaction.org/news-a-views/717-50-organizations-seek-ban-on-armed-drones.
Tafirenyika, Masimba (2016), 'Drones are effective in protecting civilians', Africa Renewal Online, accessed 18 December 2017 at http://www.un.org/africarenewal/magazine/april-2016/drones-are-effective-protecting-civilians.
The Express Tribune (2015), 'Pakistan's indigenous armed drone conducts first night-time strike', 22 October, accessed 23 May 2019 at https://tribune.com.pk/story/977517/21-militants-killed-in-airstrikes-near-pak-afghan-border/.
The Guardian (2014), 'China unveils laser drone defence system', 3 November, accessed 23 May 2019 at https://www.theguardian.com/world/2014/nov/03/china-unveils-laser-drone-defence-system.
Tucker, Patrick (2014), 'The military wants new technologies to fight drones', Defense One, 6 November, accessed 23 May 2019 at https://www.defenseone.com/technology/2014/11/military-wants-new-technologies-fight-drones/98387/.
UNICEF (2017), 'Africa's first humanitarian drone testing corridor launched in Malawi by government and UNICEF', accessed 23 May 2019 at http://unicefstories.org/2017/06/29/africas-first-humanitarian-drone-testing-corridor-launched-in-malawi-by-government-and-unicef/.
United Nations (2008), United Nations Peacekeeping Operations: Principles and Guidelines, Department of Peacekeeping Operations, accessed 23 May 2019 at https://www.un.org/ruleoflaw/files/Capstone_Doctrine_ENG.pdf.
United Nations News (2017), 'Does drone technology hold promise for the UN?', UN News, 6 September, accessed 18 December 2017 at http://www.un.org/apps/news/story.asp?NewsID=57473#.Wjfz4FQ-d68.
United Nations Peacekeeping (n.d.), 'Principles of peacekeeping', accessed 23 May 2019 at https://peacekeeping.un.org/en/principles-of-peacekeeping.
Vilmer, Jean-Baptiste Jeangène (2017), 'The French turn to armed drones', War on the Rocks, 22 September, accessed 23 May 2019 at https://warontherocks.com/2017/09/the-french-turn-to-armed-drones/.
Work, Robert O. (2015), 'Deputy Secretary of Defense speech', presented at the CNAS Defense Forum, JW Marriott, Washington, DC, 14 December, accessed 23 May 2019 at https://dod.defense.gov/News/Speeches/Speech-View/Article/634214/cnas-defense-forum/.
Zegart, Amy (2015), 'The coming revolution of drone warfare', Wall Street Journal, 18 March, accessed 2 May 2019 at http://www.wsj.com/articles/amy-zegart-the-coming-revolution-of-drone-warfare-1426720364.
Zenko, Micah and Sarah E. Kreps (2014), 'Limiting armed drone proliferation', Council on Foreign Relations Center for Preventive Action, accessed 24 May 2019 at https://www.cfr.org/report/limiting-armed-drone-proliferation.
5. Artificial intelligence: a paradigm shift in international law and politics? Autonomous weapon systems as a case study1
Luigi Martino and Federica Merenda
1 INTRODUCTION
Technological progress in robotics and IT engineering, particularly in recent decades, has led to such landmark developments in the field of artificial intelligence (AI) that the idea of ours as the era of the 'AI revolution' is currently riding high. While science fiction introduced the human–robot relationship to pop culture long before most of the ethical, socio-economic and legal challenges posed by AI became a recognized topic of scientific debate, nowadays scholars and students in any research field cannot ignore the transformative impact of AI on all aspects of our public and private lives. Whereas automated machines of successive generations were progressively introduced throughout the twentieth century into all areas of industrial production to perform mechanical tasks once fulfilled by human workers, AI technology today has reached a stage that makes it possible for robotic systems to perform even complex intellectual tasks. Physicians, judges and soldiers are already assisted in their day-to-day work by AI systems specifically programmed to help with diagnosis, case-law review and military operations. In the healthcare sector, AI machines are being introduced to perform various important tasks, ranging from automated medical diagnostics for complex conditions – where the computational capacities and image-recognition abilities of AI machines enable physicians to identify crucial information – to robot-assisted surgery and assistive robots for elderly patients who need continuous support in their daily activities. In the judicial sector, experiments are under way to fine-tune the use of AI machines in courts and tribunals overburdened with applications, for instance by entrusting such systems with pre-screening the cases brought before them for admissibility. In the military sector, autonomous
weapon systems (AWS) are being developed that, unlike drones, may not even need piloting from a distance by a human operator to initiate and finalize a military action. Meanwhile, fully autonomous self-driving cars (Level 5 of the Society of Automotive Engineers' Levels of Driving Automation) are expected to be commercialized by 2021 (Faggella, 2020). How will these technological developments affect our society at large? How will they change the dynamics of international politics as we know them? Would it be acceptable for us to allow AI to take decisions regarding the life and death of human beings? Do we need an entirely new legal framework to deal with this process? In the present chapter we deal with the challenges raised by these developments, focusing on one specific application of AI among those mentioned: AWS. We choose AWS as our case study because they pose crucial questions for international relations experts, concerning as they do the most sensitive area of international politics – actual armed conflict.
2 SOME PRELIMINARY CONCEPTS REGARDING AI
Without going into too much detail regarding the cognitive-behavioural architecture of each prototype of AI machine, a brief introduction to some of the basic differences between kinds of AI systems – relevant to both civilian and military applications, whatever their field of employment – is needed to fully understand the aspects of their functioning that make the legal and ethical implications of employing AI in sensitive fields controversial. For the purposes of the present chapter, we will generally refer to the distinction between knowledge-based systems and machine-learning systems. Knowledge-based systems are AI systems that use a given set of data, called a knowledge base, which they combine with automated inference mechanisms: once they are programmed and provided with a knowledge base as well as with the rules to combine those data, they are able to process huge amounts of information and execute very complex calculations (Swain, 2013). Knowledge-based systems are considered the 'old' generation of AI as compared to the 'new' generation of machine-learning systems. One quotation may help us grasp the crucial evolution from knowledge-based to machine-learning technology: 'In the early days of artificial intelligence, the field rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers – problems that can be described by a list of formal, mathematical rules. The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally – problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images' (Goodfellow, Bengio and Courville, 2016, p. 1).
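A minimal sketch may make the knowledge-based approach just described concrete. In the toy example below – the facts, rule names and scenario are invented purely for illustration – programmers hand-code both the knowledge base and the if–then rules, and an inference loop mechanically derives every conclusion the rules permit; nothing is learned from data.

```python
# A hand-coded knowledge base: facts the programmers supply up front.
knowledge_base = {'radar_contact', 'no_iff_transponder', 'inside_patrol_area'}

# Hand-coded inference rules: (set of premises, conclusion).
rules = [
    ({'radar_contact', 'no_iff_transponder'}, 'unidentified_aircraft'),
    ({'unidentified_aircraft', 'inside_patrol_area'}, 'raise_alert'),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(knowledge_base, rules))
# Derives 'unidentified_aircraft' and then 'raise_alert' from the given facts.
```

Every behaviour of such a system is traceable to a rule a human wrote, which is why its deliberations can be reconstructed after the fact – a point that will matter in the discussion of accountability below.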
Machine-learning technology started to be developed precisely to deal with such 'tasks that are easy for people to perform but hard for people to describe formally'. To do that, instead of relying on a given knowledge base, machine-learning systems acquire their own data and autonomously extract patterns from those data. The quality of their performance in extracting such patterns depends heavily on the way they represent the information collected (ibid., p. 2). The more layers of representation they are able to process, the more refined their outcome will be. The use of neural networks with multiple layers of representation is called deep learning (ibid., p. 5). These broad definitions are already enough to suggest that these two different kinds of AI give rise to different orders of questions and consequences, which we will examine with regard to the case of AWS; the minimal sketch below makes the contrast with the knowledge-based example concrete.
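In the following toy example – again purely illustrative, with all names and parameters our own assumptions – a tiny neural network is told nothing about the pattern it must compute (here, the XOR function) and instead extracts it from labelled examples by gradient descent. Real deep-learning systems use vastly more layers and data, but the principle of learning representations from data rather than following hand-coded rules is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # first layer of representation
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: each layer re-represents the input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] as training proceeds
```

Note what is absent: no rule anywhere states what XOR is. The 'knowledge' exists only as numerical weights, which is why reconstructing the deliberative path of such systems is far harder than with knowledge-based ones.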
3 (LACK OF) A DEFINITION OF AWS
First, we need to clarify what AWS are and why we chose them as a paradigm case for the technological challenges that have been reshaping international law and politics in recent years. At present we lack an internationally agreed definition of AWS,2 though some attempts are being made, particularly by the Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), which has been meeting in Geneva within the framework of the Convention on Certain Conventional Weapons (CCW). Yet, some operative definitions – albeit without legal value per se – have been produced by stakeholders variously concerned with the development of AWS: states, non-governmental organizations (NGOs) working in the sector, scholars, as well as authoritative institutions such as the International Committee of the Red Cross (ICRC), have all started a definition-building process that is still ongoing. Chronologically, the first attempt is that of the United States Department of Defense (US DoD), which back in 2012 defined AWS as 'a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, "self-governing"' (US DoD, 2012a). In 2016, the ICRC provided a more detailed definition, stating that AWS correspond to 'any weapon system with autonomy in its critical functions. That is, a weapon that can select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention' (ICRC, 2016).
According to these and other tentative definitions elaborated so far, we can identify some common points constituting the dimensions to be taken into account when evaluating whether a weapon can be considered an AWS:
1. First, it must be a system. This means that a 'simple' landmine is not to be considered an AWS because it does not possess the computing characteristics allowing it to take decisions over when and how to be activated. The complexity necessary for an AWS to be considered as such, in the definition given by Sartor and Omicini (2016, pp. 42–3), must be assessed along three dimensions: its capability and organizational independence, its cognitive skills and its cognitive-behavioural architecture.
2. Second, its degree of autonomy. The truly revolutionary characteristic of AWS, especially as compared to non-autonomous drones, is the fact that they potentially need no human involvement once deployed. We will see that it is around the human in-the-loop/out-of-the-loop question that much of the negotiation revolves.
3. Third, the range of activities they can carry out autonomously, which corresponds to all the different phases of selecting and engaging the objective. Autonomy in the selection as well as the engagement of the objective is what distinguishes AWS from semi-autonomous weapon systems (ibid.).
AWS are thus potentially able not just to execute very complex tasks but also to take decisions on whether and how to initiate an attack, thereby substituting for human soldiers even in strategic and deliberative tasks. Such characteristics may totally revolutionize the basic formula that has survived centuries of technological development in the military industry: that of a subject using a weapon to launch an armed attack against an objective. AWS would not necessarily need a human subject to initiate an action. As a result, could they be considered a subject themselves, from both a moral and a legal perspective? How will the laws and the ethics of war respond to the challenge of a weapon that can become the subject of military action?
4 THE DEBATE SO FAR
The possible substitution of human soldiers with AWS has given rise to a lively and divisive debate involving scholars and officials working in different areas. In this section we will try to give an account of the main arguments brought forward both by those in favour of employing AWS and by those against.
4.1 Some Arguments for AWS
Among those arguing in favour of introducing AWS in conflict scenarios are not just military officials from countries that already possess – or are developing – the relevant technology, but also several international humanitarian law (IHL) experts who have expressed support for the employment of AWS mainly on consequentialist grounds. IHL – that is, the international legal regime that comes into force in armed conflict situations – is based upon a delicate equilibrium between the principle of humanity and military necessity; the consequences of using AWS are thus being evaluated with reference to those factors. From a consequentialist viewpoint, certain features of AWS are thought to make them more appealing than human soldiers in minimizing the negative impact of armed conflicts. Both from a strategic military point of view and in order to minimize collateral damage, deploying robotic machines instead of human soldiers would eliminate the risk of so-called 'human error': AWS are endowed with computing abilities that allow them to execute very complex calculations in much less time than needed by an average human being. Again, unlike humans, robots are not driven by emotions like fear or anger that could cloud their rational thinking. This could allow them to be more precise in their attacks and to strike at the last possible moment, thus preventing mistakes causing unnecessary death or suffering (Heyns, 2016; Sassóli, 2014). Furthermore, due to their lack of emotions, robots do not act out of instinct or hatred: they do not rape, nor are they driven by survival instinct, and thus their deployment would allegedly result in a reduction of cruelty and possible crimes against humanity.3
4.2 Some Arguments Against AWS
Notwithstanding the reasonableness of these arguments, competing perspectives equally deserve to be taken into account. Among these, the former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (2016), warns about the possible violation of the right to life that AWS deployment could cause, given the lack of accountability should mistakes occur. Such an accountability vacuum would arise because of the absence of human involvement. The qualifying characteristic of AWS, as we have seen, is the fact that they would not technically need human assistance even in the deliberative phase of an armed attack – that is, in the process leading to the decision whether to initiate the action. Who would be responsible for such an action if no human were directly involved? What would happen if IHL were violated by an AWS? And even if no actual violation occurred, could the employment of AWS itself be considered a violation of IHL or human rights? According to Heyns (2016), in this human out-of-the-loop scenario, the 'lack of deliberative process' built into
robotic ‘reasoning’ would mean that the deaths caused by AWS ‘could constitute an arbitrary deprivation of life’. The lack of accountability for the use of force by AWS could thus be considered a violation of the right to life per se. Unlike the arguments in favour of AWS analysed above, Heyns’s concerns are based mainly on deontological reasons intrinsic to international human rights law that stem from the special value attributed to human dignity and to the right to life. Within this view he states that ‘there may be an implicit requirement in terms of international law and ethical codes that only human beings may take the decision to use force against other humans’ (Heyns, 2016, p. 10). Some further consequentialist arguments against the deployment of AWS have also been brought forward. Most of these revolve around the issue of deterrence, a traditional paradigm in international relations theory. Due to the resources and technology needed, at least in a first phase of development, AWS would be available only to a restricted number of actors, for whom resorting to war would be less costly in terms of human lives than for everyone else. Particularly in democratic contexts, it might conceivably be less costly in terms of political consensus for governments deploying AWS to participate in armed conflicts, there being no need to convince public opinion that soldiers’ lives must be risked. Adopting a consequentialist perspective could thus entail impossible calculations, weighing the lives saved by AWS through the absence of human mistakes against those lost in conflicts that, without such technology, countries would not have started at all. This is one of the reasons why international negotiations on the regulation of LAWS focus on several distinct dimensions.
4.3 Disclaimer: Knowledge-based and Machine-learning Systems
Though not expressly stated, all the arguments examined here seem to have knowledge-based AWS in mind. Both the efficiency of AWS in rapidly processing complex information and data, and the advantages and problems of a purely logical kind of reasoning, are strictly linked to the way in which knowledge-based systems work – that is, by applying certain encoded rules to datasets that are inscribed in the system by programmers. However, as we have seen, a new generation of AI – machine-learning systems – has been developed. These present features that require careful, specific consideration, and they have not yet been addressed thoroughly in the debate introduced so far. Even though these machines are more flexible in adapting to unstable scenarios such as armed conflict, changing their behaviour according to the new information they autonomously collect from the environment, would they be preferable to knowledge-based AI in this area? Would we be ready to allow machines to take decisions over the life and death of individuals without even
being able to reconstruct the deliberative process that led to that decision? And could we accept delegation of such decisions to machines that might act in unforeseeable ways, however remote the possibility? These are still open questions related to the interplay between technology and IR that are expected to deeply affect the scientific debate in years to come and that go beyond the domain of AWS, as similar questions arise in discussions on any use of AI systems.
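To make the distinction concrete, the following minimal sketch (in Python) contrasts the two paradigms. The rule-based function can return, alongside its output, the explicit rule that produced it; the ‘learned’ function depends only on numeric weights standing in for the result of a hypothetical training run. The setting, names and thresholds are invented purely for illustration and describe no real system.

```python
# Minimal sketch of the contrast drawn above. The scenario, names and
# numbers are hypothetical illustrations only; they describe no real system.

from dataclasses import dataclass


@dataclass
class SensorReading:
    speed: float        # metres per second
    emits_radar: bool
    length: float       # metres


def rule_based_screen(r: SensorReading) -> tuple[str, str]:
    """Knowledge-based style: explicit, programmer-encoded rules.

    Every output can be traced back to the rule that produced it."""
    if r.emits_radar and r.speed > 200.0:
        return "flag", "rule 1: fast, radar-emitting object"
    if r.length > 30.0:
        return "flag", "rule 2: object longer than 30 m"
    return "ignore", "no rule matched"


# Stand-in for the outcome of a (hypothetical) training run: the weights
# are just numbers, with no human-readable rationale attached to them.
LEARNED_WEIGHTS = (-0.83, 1.42, 0.96, 0.55)


def learned_screen(r: SensorReading) -> str:
    """Machine-learning style: behaviour induced from examples.

    The only 'explanation' available is the numeric score itself."""
    features = (1.0, r.speed / 300.0, float(r.emits_radar), r.length / 50.0)
    score = sum(w * f for w, f in zip(LEARNED_WEIGHTS, features))
    return "flag" if score > 0.0 else "ignore"


if __name__ == "__main__":
    reading = SensorReading(speed=250.0, emits_radar=True, length=12.0)
    print(rule_based_screen(reading))  # decision plus the rule behind it
    print(learned_screen(reading))     # decision with no rule to point to
```

The asymmetry between the two return values is precisely the explainability gap at issue: one decision carries its own justification, the other only a score.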
5 AWS BETWEEN FICTION AND REALITY: AWS IN PRACTICE
To delve further into the case of military AI, it must be noted that recent years have seen a dramatic increase in the use of ‘intelligent systems’ by military forces in present-day operations. As we saw in the debate analysed so far, investigating AWS lends itself to speculation from both a conceptual and an operational point of view, in particular regarding the real capacity (or not) of such weapon systems to act in complete autonomy from human action. The relevance of this new generation of weapons, from an analytical point of view, lies especially in their targeting capacity (whether already real or still under development) and, ultimately, in their ability to use force autonomously, with a low level of human supervision. In this respect, the Stockholm International Peace Research Institute (SIPRI) identified 277 military systems (out of the 336 in the dataset used that can be deemed mobile) that include functions allowing the system to move around its operating environment autonomously, without needing the direct involvement of a human operator. According to SIPRI, autonomy is used in at least 154 systems to support some, if not all, of the steps of the targeting process (at the tactical level), from identification to tracking, prioritization and selection of the targets, and – in some cases – even target engagement itself (Boulanin and Verbruggen, 2017). At the same time, even if many weapons equipped with AI components do not yet involve decision making by machine algorithms, the potential for them to do so will soon become a reality. As the US DoD indeed stressed in its Unmanned Systems Integrated Roadmap, ‘technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force’ (US DoD, 2018, p. 20). The particular consequence of such potential is the ‘weaponization of AI’ because, as Pandya (2019) states:

the reality today is that artificial intelligence is leading us toward a new algorithmic warfare battlefield that has no boundaries or borders, may or may not have humans involved, and will be impossible to understand and perhaps control across the
human ecosystem in cyberspace, geospace and space (CGS). As a result, the very idea of the weaponization of artificial intelligence, where a weapon system that, once activated across CGS, can select and engage human and non-human targets without further intervention by a human designer or operator, is causing great fear.
In this respect, in a research paper entitled ‘Artificial intelligence and the future of warfare’, Cummings (2017) clearly stressed how the use of AI in military contexts gives rise to open questions of responsibility and ambiguity. She states:

The key question for any autonomous system engaged in a safety-critical task (such as weapons release) is whether that system can resolve ambiguity in order to achieve acceptable outcomes. It is conceivable that an autonomous drone could be given a mission to hit a static target on a military installation with a high probability of success. Indeed, many countries have missiles that can do just that. However, could an autonomous drone targeting an individual understand from its real-time imagery that a specific person has been found and that releasing a weapon will kill only that person and no innocent bystanders? Currently, the answer to this question is no. (Cummings, 2017, p. 7)
As far as the military implications of AWS are concerned, according to the current state of the relevant academic debates, the main threat they pose relates to the perils of a ‘hyperwar’ (Allen and Husain, 2017). In particular, as stressed by Horowitz (2018), AI and AWS applied on the battlefield ‘have the potential to shape how countries fight in several macro ways. On the broadest level, autonomous systems, or narrow AI systems, have the potential to increase the speed with which countries can fight… Even if humans are still making final decisions about the use of lethal force, fighting at machine speed can dramatically increase the pace of operations’. It is important to note that employment of AI in the military context also offers some interesting political incentives for governments. For instance, with regard to democratic regimes, AI can decrease the relative burden of warfare on public opinion: thanks to remotely piloted systems (i.e., drones) it is possible to reduce the use of human personnel and thus prevent the deaths of soldiers (ibid.).
6 ABOUT EXISTING AWS
In light of the above, and though the debate is constantly evolving, we should try to address the question of whether AWS already exist. Certain relevant case studies allow us to extrapolate some features and differences that together constitute the ‘ideal type’ of AWS we are looking for in present practice. Despite the fact that there is no evidence of completely autonomous weapons being employed on current battlefields, most of the largest arms-producing
countries – the USA, the UK, Russia, France, Italy, Japan, Israel, South Korea, Germany, India and China – have already identified AI and robotics as key drivers (Boulanin and Verbruggen, 2017, pp. 118ff). Nonetheless, as reported by SIPRI, the United States is the only country that has released a standalone military strategy on autonomy (ibid., p. 117). It must be said that collecting data in this field has proved difficult in many cases, especially details about the military application of dual-use technology. Nevertheless, a recent PAX report stated that, due to the involvement of the tech sector in LAWS efforts, ‘it is deeply concerning that…tech companies, especially those working on military contracts, do not currently have any public policy to ensure their work is not contributing to lethal autonomous weapons’ (Slijper et al., 2019, p. 47).4 But this revolution has already occurred: the transition from entirely ‘human’ weapon systems to increasingly artificially ‘intelligent’ ones has already taken place, and, as a one-way threshold, it will never be crossed again in the history of humanity. For instance, exploration of autonomy is taking place in South Korea, where the Samsung SGR-A1 sentry gun deployed on the border with North Korea can track movement and fire without human intervention (NBC News, 2014). During a training session in Twentynine Palms, the Marines moved through the streets while the robot provided situational awareness, responding to human commands but otherwise operating autonomously (Gault, 2019). The Israeli Harpy system is a ‘fire and forget’ autonomous weapon, launched from a ground vehicle behind the battle zone (Israel Aerospace Industries, 2014). At the same time, the United States and Russia are developing robotic tanks that can operate autonomously (Marr, 2018). And, in January 2018, the Russian bases at Khmeimim and Tartus were attacked by a swarm of 13 homemade drones (deployed by an unidentified Syrian rebel group) carrying small submunitions and equipped with altitude and levelling sensors, as well as programmed GPS guiding them to a predetermined target (MacFarquhar, 2018; see also Russian Ministry of Defense, 2018). In fact, swarming attacks could be one of the most devastating terrorist applications of AWS, which ‘can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed’ (Tegmark, 2019, p. 130); they could also be used in strategic targeting, such as the attempt to kill Venezuela’s Nicolás Maduro in August 2018 (Parkin Daniels, 2018), or to cripple critical infrastructures, as occurred with the attack on Saudi oil facilities in September 2019 (Hubbard, Karasz and Reed, 2019).5 From a survey based on three main criteria – (1) development of technology that could be relevant for lethal autonomous weapons; (2) work on military projects; and (3) commitment to not being involved in the development of lethal autonomous weapons – PAX (Slijper et al., 2019, pp. 24–5) has categorized 50 tech companies from 12 countries as ‘best practice’ (seven
companies), of ‘medium concern’ (22), and of ‘high concern’ (21). These categories range from a clear commitment (e.g., through internal policies or contracts) to ensure that the company’s technologies will not be used in turn to develop or produce LAWS (‘best practice’), down to a company developing relevant technologies and/or working on military projects without any commitment not to favour the development or production of these weapons (‘high concern’).6 In this vein, the report reviewed the debate in the tech sector sparked by widespread awareness of technological contributions to the development or production of LAWS.7 For instance, in 2018 Google, classified as ‘best practice’, published its AI principles, claiming that the company will not design or develop AI for ‘weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people’.8 Another ‘best practice’ organization is the Russian company VisionLabs, which works on pattern recognition and answered the PAX survey stressing that it clearly prohibits the use of its technology for military applications (ibid., p. 36). On the other hand, among the companies of greatest concern, Microsoft and Amazon were bidding for the Joint Enterprise Defense Infrastructure (JEDI) contract (US DoD, 2019b) – later awarded to Microsoft and blocked by a federal court in February 2020 – a project that, according to its chief management officer, ‘is truly increasing the lethality of our department’ (Employees of Microsoft, 2018; Palmer, 2020). Moreover, AerialX produced a kamikaze drone, called ‘DroneBullet’, with AI capabilities used to autonomously identify, track and engage targets (Aerial X, 2019; Slijper et al., 2019, p. 32). Anduril Industries, working on Project Maven, designed ‘Lattice AI’ to improve battlespace awareness, including ‘the ability to identify potential targets and direct unmanned military vehicles into combat’ (Anduril Industries, n.d.). As the report stated, a wide-ranging debate within tech companies could ‘prevent’ their contributing to the development and production of lethal autonomous weapons, through measures including: (1) a public commitment; (2) the establishment of a clear internal policy; and (3) ensuring employees are informed about their work (Slijper et al., 2019, p. 47). The most successful and popular AWS project to date is the Israeli land-based mobile defence system called ‘Iron Dome’, designed to intercept short-range rockets and artillery. The main ‘autonomous’ capability of the Iron Dome lies in specific sensors able to discriminate between rockets that threaten populated areas and those that would fall harmlessly (IHS Jane, 2013; Sharp, 2015, pp. 9–10). In particular, each Iron Dome battery plays an important role at a strategic level in Israel’s multi-layered defence system, protecting the country from short-range missiles, mortars and rockets fired from across its borders. In this sense, the Iron Dome can be classified as an
appropriate example of AWS, because its high level of ‘discrimination’ operates autonomously from the human decision-making process. Another example, comparable to the Iron Dome in its implications and potential impact, is the ‘Sea Hunter’ project. Originally designated by the Defense Advanced Research Projects Agency (DARPA) as an anti-submarine warfare continuous trail unmanned vessel (ACTUV), the Sea Hunter is designed ‘to travel the oceans for months at a time with no onboard crew, searching for enemy submarines and reporting their location and findings to remote human operators. If this concept proves viable, swarms of ACTUVs may be deployed worldwide, some capable of attacking submarines on their own, in accordance with sophisticated algorithms’ (Klare, 2019). In this respect, Fred Kennedy, the director of DARPA’s Tactical Technology Office, stressed that ‘ACTUV represents a new vision of naval surface warfare that trades small numbers of very capable, high-value assets for large numbers of commoditized, simpler platforms that are more capable in the aggregate. The U.S. military has talked about the strategic importance of replacing “king” and “queen” pieces on the maritime chessboard with lots of “pawns,” and ACTUV is a first step toward doing exactly that’ (ibid.). In addition to the Sea Hunter project, the US DoD has developed an AWS project (headed by the US Air Force) that includes software-based components to enable ‘fighter pilots to guide accompanying unmanned aircraft toward enemy positions, whereupon the drones will seek and destroy air defense radars and other key targets on their own’ (PAX, 2019). All the above-mentioned cases thus show that the concept of ‘autonomy’ applied to the military sphere implies a low level of ‘human factor’ with respect to decision making as well as operational capabilities. In other words, as explained in the report entitled The Arms Industry and Increasingly Autonomous Weapons, published by the NGO PAX (PAX, 2019, p. 34), the main implication of AWS is the capability of these weapons to exclude human involvement from the decision-making process: ‘[w]orking on an ever-expanding range of military systems, in the air, on the ground and at sea, these systems can operate in larger numbers, for longer periods and in wider areas, with less remote control by a human. This raises serious questions of how human control is guaranteed over these weapon systems’. Any such decision to delegate lethal force to a machine – thus possibly entrusting an algorithm with the decision whether to kill human beings, including civilians – raises enormous ethical and legal questions.
7 ADDITIONAL CONCERNS
The controversial aspects related to the development and possible use of AWS, and particularly of lethal AWS (LAWS), addressed here have raised widespread concern. This led to the ‘Campaign to Stop Killer Robots’, a coalition of NGOs aiming to ban fully autonomous weapons and thereby retain meaningful human control over the use of force (Campaign to Stop Killer Robots, n.d.), which managed to obtain the support of some states’ representatives in a multilateral forum (United Nations, 2019). As should now be clear, in human out-of-the-loop LAWS scenarios we would potentially face situations in which ‘the software running the drone will decide who lives and who dies’ (Horton, 2018). To cite the evocative concept of Chamayou (2015), in such death scenarios there will not even be room for any intimacy between the hunter and the hunted. Such an ability of machines to autonomously engage humans as targets has been defined as the third revolution in warfare (BBC News, 2017), after gunpowder and nuclear weapons, and is expected to shape conflict and strategy no less than the previous ones did. As already mentioned in analysing the consequentialist arguments against AWS, and as noted by Jacob Ware (2019), when thinking about the effects of the introduction of LAWS in warfare we should not focus just on states as parties to conflicts: terrorist groups will also be interested in lethal autonomous weapons, for three reasons – cost, traceability and effectiveness. First, LAWS save on the human capital required for a terrorist attack, allowing a greater degree of damage than can ‘be done by a single person’ (Brundage et al., 2018, p. 40). Second, autonomous weapons would reduce the signature left behind by terrorists (Ware, 2019). Finally, killer robots would make operations ‘essentially invulnerable’ (Horton, 2018). However, as Singer (2012) pointed out: ‘the biggest ripple effect [of AWS] is in reshaping the narrative in that most important realm of war. We are seeing a reordering of how we conceptualize war, how we talk about it, and how we report it’. In this respect, as Kissinger, Schmidt and Huttenlocher (2019) have pointed out:

In the nuclear age, strategy evolved around the concept of deterrence. Deterrence is predicated on the rationality of parties, and the premise that stability can be ensured by nuclear and other military deployments that can be neutralized only by deliberate acts leading to self-destruction; the likelihood of retaliation deters attack. Arms-control agreements with monitoring systems were developed in large part to avoid challenges from rogue states or false signals that might trigger a catastrophic response. Hardly any of these strategic verities can be applied to a world in which AI plays a significant role in national security. If AI develops new weapons, strategies and tactics by
simulation and other clandestine methods, control becomes elusive, if not impossible. The premises of arms control based on disclosure will alter. Adversaries’ ignorance of AI-developed configurations will become a strategic advantage – an advantage that would be sacrificed at a negotiating table where transparency as to capabilities is a prerequisite. The opacity (and also the speed) of the cyberworld may overwhelm current planning models.
8 FINAL REMARKS: THE GGE NEGOTIATIONS AND THE WAY FORWARD
It should now be clear how huge the impact of employing such technological innovations could be on armed conflict scenarios, and therefore on international relations at large. Awareness of the risks led the international community to address the development and employment of AWS, and particularly of LAWS, even before their actual realization, through the establishment of the already mentioned Group of Governmental Experts (GGE) on LAWS, which has been meeting in Geneva within the framework of the Convention on Certain Conventional Weapons. Following a series of informal meetings of experts held between 2014 and 2016 within the same framework convention, the GGE was established in 2017 and held its latest meeting to date on 15 November 2019. Over these years, the GGE negotiations have focused on deciding which kind of legal instrument (if any) to produce in order to regulate the development and employment of LAWS, and also on the content of any such regulation, with discussions revolving around the issues examined here. As we have seen, while legal scholars have put forward different arguments either in favour of or against the employment of AWS, all agree that should such technologies be employed, their use would have to comply with all the relevant regimes of international law. The question thus becomes whether the existing normative framework would already suffice to regulate their employment and ensure such compliance; this has been one of the crucial aspects under negotiation in Geneva in recent years. The four alternatives under consideration in the GGE negotiations are: a legally binding instrument, a political declaration, a clarification of the implementation of existing obligations under international law, and the decision to elaborate no further instrument. A legally binding instrument would be the most direct way of tackling the issue, though perhaps the least politically viable; indeed, adopting a binding instrument would first require consensus over whether to adopt a general ban treaty impeding any development and employment of such technology, or to draft an international treaty setting specific limitations that would need to be individually identified and agreed upon by the states participating in the process. Such a process
would be particularly hard with reference to an area of technology that is still evolving. A political declaration, on the other hand, would be more viable in terms of the political consensus needed and would carry the advantage of not needing to go into too much detail; at the same time, it would lack binding force, thus constituting a weak means of ensuring state compliance. The third alternative, a clarification of the implementation of existing obligations, is a compromise solution tending to shift the discussion onto an interpretive level. Finally, though such a list is not necessarily exhaustive, the idea of developing no further instrument of any kind (neither soft law nor binding) is being urged by those arguing that existing norms already imply the limits of AWS employment, which would therefore need no further specification in order to stay in force, whatever the technological development. In a way, favouring such an option would shift the burden of evaluating AWS compliance with international law to an ex post facto evaluation – and thus to a judicial level. At the time of writing, not all the documents of the latest GGE meetings are available and negotiations are still ongoing.
NOTES

1. While the present chapter is the result of continuous dialogue between the two authors and thus constitutes a consistent intellectual product, Sections 2, 3, 7 and 8 are by Federica Merenda and Sections 4, 5 and 6 by Luigi Martino. The introduction was written jointly by the two authors.
2. As Paul Scharre and Michael Horowitz wrote: ‘There is no internationally agreed-upon definition of what constitutes an “autonomous weapon,” making clear communication on the topic more difficult… This lack of clarity in terminology is further compounded by the fact that some are calling for autonomous weapons to be regulated or banned even before consensus exists on how to define the category. Thus, at present, definitions are tied up in debates over the technology itself… [This] lack of clarity on basic terminology itself is a recipe for disaster’ (Scharre and Horowitz, 2015, p. 3). For further detail on the debate about the definition of ‘killer robot’, see Eveleth (2014).
3. ‘A robot cannot hate, cannot fear, cannot be hungry or tired and has no survival instinct… Robots do not rape. They can sense more information simultaneously and process it faster than a human being can’ (Sassóli, 2017).
4. The published survey covers 50 technology companies in 12 countries, reviewing their current activities (ibid.).
5. A worst-case scenario is best represented in a viral YouTube video, ‘Slaughterbots’, which describes the release of thousands of small munitions into British university halls. The drones attack individuals who have shared a certain political social media post; see https://www.youtube.com/watch?v=9CO6M2HsoIA (last accessed 10 July 2020).
6. The companies classified as ‘best practice’ are: Animal Dynamics (UK), Arbe Robotics (Israel), General Robotics (Israel), Google (US), HiveMapper (US), Softbank (Japan) and VisionLabs (Russia).
The companies classified as ‘high concern’ are: AerialX (Canada), Airspace Systems (US), Amazon (US), Anduril Industries (US), Blue Bear Systems (UK), Citadel Defense (US), Clarifai (US), Corenova Technologies (US), EarthCube (France), Heron Systems (US), Intel (US), Microsoft (US), Montvieux (UK), Oracle (US), Palantir (US), Roboteam (Israel), SenseTime (China), Shield AI (US), SparkCognition (US), Synesis (Belarus) and Yitu (China). ‘Medium concern’ companies working on military/security applications of relevant technologies answered that they were currently not working on LAWS. The companies classified as ‘medium concern’ are: Airobotics (Israel), Alibaba (China), Apple (US), ATOS (France), Baidu (China), Cambricon (China), Cloudwalk Technology (China), DeepGlint (China), Dibotics (France), Facebook (US), IBM (US), Innoviz (Israel), Megvii (China), Naver (South Korea), Neurala (US), Orbital Insight (US), Percepto (Israel), Samsung (South Korea), Siemens (Germany), Taiwan Semiconductor (Taiwan), Tencent (China) and Tharsus (UK) (Slijper et al., 2019, pp. 24–5).
7. In 2015, an open letter was signed by several AI and robotics experts, including Stephen Hawking and Elon Musk, claiming that superhuman artificial intelligence could end the human race; see https://futureoflife.org/ai-open-letter/?cn-reloaded=1 (last accessed 10 July 2020). In 2017, another letter was signed by 116 tech sector CEOs asking the UN to ban LAWS; see https://futureoflife.org/autonomous-weapons-open-letter-2017/ (last accessed 10 July 2020). In 2018, over 240 AI-related organizations and nearly 3100 individuals pledged not to be involved in the development of LAWS; see https://futureoflife.org/autonomous-weapons-open-letter-2017/ (last accessed 11 July 2020).
8. On the other hand, ‘we [Google] want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe’. For more detailed information see https://www.blog.google/technology/ai/ai-principles/ (last accessed 11 July 2020).
REFERENCES

Aerial X (2019), ‘DroneBullet – counter-drone systems’, YouTube, 26 April, accessed 11 July 2020 at https://www.youtube.com/watch?v=5_6X5Is916I.
Allen, John R. and Amir Husain (2017), ‘On hyperwar’, Naval Institute Proceedings, 143(373), 30–36.
Anderson, Kenneth, Daniel Reisner and Matthew Waxman (2014), ‘Adapting the law of armed conflict to autonomous weapon systems’, International Law Studies, 90, 386–412.
Anduril Industries (n.d.), ‘Lattice AI Platform’, accessed 10 July 2020 at https://www.anduril.com/.
Asaro, Peter (2012), ‘On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making’, International Review of the Red Cross, 94(886), 687–709.
Asaro, Peter (2014), ‘Determinism, machine agency, and responsibility’, Politica & Società, III(2), 265–92.
BBC News (2017), ‘Killer robots: experts warn of “third revolution in warfare”’, BBC.com, 21 August, accessed 10 July 2020 at https://www.bbc.com/news/technology-40995835.
Bench-Capon, Trevor and Sanjay Modgil (2017), ‘Norms and value-based reasoning: justifying compliance and violation’, Artificial Intelligence and Law, 25, 29–64.
Boulanin, Vincent and Maaike Verbruggen (2017), Mapping the Development of Autonomy in Weapon Systems, report of the Stockholm International Peace Research Institute (SIPRI).
Branting, Karl L. (2017), ‘Data-centric and logic-based models for automated legal problem solving’, Artificial Intelligence and Law, 25, 5–27.
Brundage, Miles, Shahar Avin and Jack Clark et al. (2018), The Malicious Use of Artificial Intelligence, accessed 10 July 2020 at https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.
Campaign to Stop Killer Robots (n.d.), accessed 10 July 2020 at https://www.stopkillerrobots.org/.
Chamayou, Gregoire (2015), Drone Theory, London: Penguin Books.
Congressional Research Service (2018), U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence: Considerations for Congress, R45392, 20 November, accessed 10 July 2020 at https://crsreports.congress.gov/product/pdf/R/R45392.
Contissa, Giuseppe, Francesca Lagioia and Giovanni Sartor (2017), ‘The ethical knob: ethically-customisable automated vehicles and the law’, Artificial Intelligence and Law, 25, 365–78.
Convention on Certain Conventional Weapons (CCW), Group of Governmental Experts (GGE) (2018), Report of the 2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems, advanced draft, 31 August, CCW/GGE.2/2018/3.
Cummings, Mary L. (2017), ‘Artificial intelligence and the future of warfare’, research paper, Chatham House, Royal Institute of International Affairs, accessed 10 July 2020 at https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf.
Davison, Neil (2018), ‘Autonomous weapon systems: an ethical basis for human control?’, ICRC.org, 3 April, accessed 9 July 2020 at https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control.
Employees of Microsoft (2018), ‘An open letter to Microsoft: don’t bid on the US military’s project JEDI’, Medium.com, 13 October, accessed 10 July 2020 at https://medium.com/s/story/an-open-letter-to-microsoft-dont-bid-on-the-us-military-s-project-jedi-7279338b7132.
Eveleth, Rose (2014), ‘So what exactly is a “killer robot”?’, The Atlantic, 20 August, accessed 10 July 2020 at https://www.theatlantic.com/technology/archive/2014/08/calling-autonomous-weapons-killer-robots-is-genius/378799/.
Faggella, Daniel (2020), ‘The self-driving car timeline – predictions from the top 11 global automakers’, Emerj.com, 14 March, accessed 11 July 2020 at https://emerj.com/ai-adoption-timelines/self-driving-car-timeline-themselves-top-11-automakers/.
Gaeta, Paola (2016), ‘Autonomous weapon systems and the alleged responsibility gap’, in International Committee of the Red Cross (ICRC), Report of the Expert Meeting on Autonomous Weapon Systems, Geneva: ICRC, pp. 44–5.
Garapon, Antoine and Jean Lassègue (2018), Justice digitale, Paris: Presses Universitaires de France.
Gault, Matthew (2019), ‘AI controlled robots are training alongside U.S. Marines’, VICE, 22 July, accessed 10 July 2020 at https://www.vice.com/en_in/article/zmpa59/ai-controlled-robots-are-training-alongside-us-marines.
Goodfellow, Ian, Yoshua Bengio and Aaron Courville (2016), Deep Learning, Cambridge, MA: MIT Press.
Heyns, C. (2016), ‘Autonomous weapons systems: living a dignified life and dying a dignified death’, in N. Bhuta, S. Beck and R. Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 3–19.
Horowitz, Michael C. (2018), ‘The promise and peril of military applications of artificial intelligence’, Bulletin of the Atomic Scientists, 23 April, accessed 10 July 2020 at https://thebulletin.org/military-applications-artificial-intelligence/promise-and-peril-military-applications-artificial-intelligence.
Horton, Michael (2018), ‘Inside the chilling world of artificially intelligent drones’, The American Conservative, 12 February, accessed 10 July 2020 at https://www.theamericanconservative.com/articles/inside-the-chilling-proliferation-of-artificially-intelligent-drones/.
Hubbard, Ben, Palko Karasz and Stanley Reed (2019), ‘Two major Saudi oil installations hit by drone strike, and U.S. blames Iran’, New York Times, 14 September, accessed 10 July 2020 at https://www.nytimes.com/2019/09/14/world/middleeast/saudi-arabia-refineries-drone-attack.html.
Human Rights Watch (2012), ‘Losing humanity: the case against killer robots’ [news release], 19 November, accessed 29 December 2020 at https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.
Human Rights Watch (n.d.), ‘Killer robot’, accessed 10 July 2020 at https://www.hrw.org/topic/arms/killer-robots.
IHS Jane (2013), ‘Arrow weapon system (AWS)’, in Christopher F. Foss and James C. O’Halloran (eds), IHS Jane’s Land Warfare Platforms: Artillery and Air Defence 2012–13, Coulsdon, UK: IHS, pp. 470–71.
International Committee of the Red Cross (ICRC) (1977), Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June.
International Committee of the Red Cross (ICRC) (2016), Expert Meeting on Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons, 15–16 March, Versoix, Switzerland.
International Committee of the Red Cross (ICRC) (2018), Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?, Geneva, 3 April.
Israel Aerospace Industries (IAI) (2014), ‘HARPY autonomous weapon for all weather’, accessed 11 July 2020 at https://www.iai.co.il/p/harpy.
Kalmanovitz, P. (2016), ‘Judgement, liability and the risk of riskless warfare’, in Nehal Bhuta, Susan Beck and Robin Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 145–63.
Kissinger, Henry A., Eric Schmidt and Daniel Huttenlocher (2019), ‘The metamorphosis’, The Atlantic, August, accessed 10 July 2020 at https://www.theatlantic.com/magazine/archive/2019/08/henry-kissinger-the-metamorphosis-ai/592771/.
Klare, Michael T. (2019), ‘Autonomous weapons systems and the laws of war’, Armscontrol.org, March, accessed 10 July 2020 at https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war.
MacFarquhar, Neil (2018), ‘Russia says its Syria base beat back an attack by 13 drones’, New York Times, 8 January, accessed 10 July 2020 at https://www.nytimes.com/2018/01/08/world/middleeast/syria-russia-drones.html.
Marr, Bernard (2018), ‘Weaponizing artificial intelligence: the scary prospect of AI-enabled terrorism’, Forbes, 23 April, accessed 10 July 2020 at https://www.forbes.com/sites/bernardmarr/2018/04/23/weaponizing-artificial-intelligence-the-scary-prospect-of-ai-enabled-terrorism/#20818d0d77b6.
Nash, Thomas (2015), ‘Informal expert meeting on lethal autonomous weapons systems’, Convention on Certain Weapons, Geneva, 15 May, accessed 9 July 2020 at www.unog.ch/80256EDD006B8954/%28httpAssets%29/26033D398111B4E8C1257CE000395BBB/$file/Article36_Legal+Aspects_IHL.pdf.
NBC News (2014), ‘Future tech? Autonomous killer robots are already here’, NBCnews.com, 15 August, accessed 10 July 2020 at https://www.nbcnews.com/tech/security/future-tech-autonomous-killer-robots-are-already-here-n105656.
O’Connell, Mary Ellen (2014), ‘Banning autonomous killing: the legal and ethical requirement that humans make near-time lethal decisions’, in Matthew Evangelista and Henry Shue (eds), The American Way of Bombing: Changing Ethical and Lethal Norms from Flying Fortresses to Drones, Ithaca, NY: Cornell University Press, pp. 225–99.
Palmer, Annie (2020), ‘Judge temporarily blocks Microsoft Pentagon cloud contract after Amazon suit’, CNBC.com, 13 February, accessed 10 July 2020 at https://www.cnbc.com/amp/2020/02/13/amazon-gets-restraining-order-to-block-microsoft-work-on-pentagon-jedi.html.
Pandya, Jayshree (2019), ‘The weaponization of artificial intelligence’, Forbes, 14 January, accessed 10 July 2020 at https://www.forbes.com/sites/cognitiveworld/2019/01/14/the-weaponization-of-artificial-intelligence/#2e8c57c13686.
Parkin Daniels, Joe (2018), ‘Venezuela’s Nicolás Maduro survives apparent assassination attempt’, The Guardian, 5 August, accessed 17 January 2021 at https://www.theguardian.com/world/2018/aug/04/nicolas-maduros-speech-cut-short-while-soldiers-scatter.
PAX (2019), Slippery Slope. The Arms Industry and Increasingly Autonomous Weapons, accessed 10 July 2020 at https://www.paxforpeace.nl/publications/all-publications/slippery-slope.
Russian Ministry of Defense (2018), ‘#SYRIA’, Facebook, 8 January, accessed 10 July 2020 at https://www.facebook.com/mod.mil.rus/posts/2031218563787556.
Sartor, Giovanni and Andrea Omicini (2016), ‘The autonomy of technological systems and responsibilities for their use’, in Nehal Bhuta, Susan Beck and Robin Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 39–74.
Sassóli, Marco (2014), ‘Autonomous weapons and international humanitarian law: advantages, open technical questions and legal issues to be clarified’, International Law Studies, 90, 308–40.
Sassóli, Marco (2017), ‘Statement’, Autonomous Weapons Systems: Law, Ethics, Policy, Geneva Academy of International Humanitarian Law and Human Rights Conference, Geneva.
Scharre, Paul (2018), ‘Human judgment and lethal decision-making in war’, Blogs.icrc.org, 11 April, accessed 9 July 2020 at http://blogs.icrc.org/law-and-policy/2018/04/11/human-judgment-lethal-decision-making-war/.
Scharre, Paul and Michael C. Horowitz (2015), ‘An introduction to autonomy in weapon systems’, Center for a New American Security, working paper.
Sharkey, Noel (2014), ‘Towards a principle for the human supervisory control of robot weapons’, Politica & Società, III(2), 305–24.
Sharp, Jeremy M. (2015), U.S. Foreign Aid to Israel, Congressional Research Service, accessed 10 July 2020 at https://www.fas.org/sgp/crs/mideast/RL33222.pdf.
Singer, Peter W. (2012), ‘The robotics revolution’, Brookings.edu, 11 December, accessed 10 July 2020 at https://www.brookings.edu/opinions/the-robotics-revolution/.
Slijper, Frank, Alice Beck, Daan Kayser and Maaike Beenes (2019), Don’t Be Evil? A Survey of the Tech Sector’s Stance on Lethal Autonomous Weapons, PAX research report.
Swain, Martin (2013), ‘Knowledge-based system’, in Werner Dubitzky, Olaf Wolkenhauer, Kwang-Hyun Cho and Hiroki Yokota (eds), Encyclopedia of Systems Biology, New York: Springer, accessed 20 June 2020 at https://link.springer.com/content/pdf/10.1007%2F978-1-4419-9863-7_596.pdf.
Tamburrini, Guglielmo (2016), ‘On banning autonomous weapons systems: from deontological to wide consequentialist reasons’, in Nehal Bhuta, Susan Beck and Robin Geiss et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge, UK: Cambridge University Press, pp. 122–42.
Tegmark, Max (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Alfred A. Knopf.
United Nations (2019), ‘States call for enhanced arms control strategies to regulate “killer robots”, stem rising tide of illegal weapons, delegates tell First Committee’, press release, accessed 10 July 2020 at https://www.un.org/press/en/2019/gadis3635.doc.htm.
US Department of Defense (US DoD) (2012a), Task Force Report: The Role of Autonomy in DoD Systems, Defense Science Board, accessed 9 July 2020 at http://fas.org/irp/agency/dod/dsb/autonomy.pdf.
US Department of Defense (US DoD) (2012b), Directive 3000.09: Autonomy in Weapon Systems, 21 November, Glossary Part II, accessed 9 July 2020 at www.dtic.mil/whs/directives/corres/pdf/300009p.pdf.
US Department of Defense (US DoD) (2018), Unmanned Systems Integrated Roadmap 2017–2042, 28 August, accessed 10 July 2020 at https://www.defensedaily.com/wp-content/uploads/post_attachment/206477.pdf.
US Department of Defense (US DoD) (2019a), Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance our Security and Prosperity, 12 February, accessed 10 July 2020 at https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
US Department of Defense (US DoD) (2019b), ‘Contracts for Oct. 25, 2019. Washington Headquarters Service’, 25 October, accessed 10 July 2020 at https://www.defense.gov/newsroom/contracts/contract/article/1999639/.
Ware, Jacob (2019), ‘Terrorist groups, artificial intelligence, and killer drones’, Warontherocks.com, 24 September, accessed 10 July 2020 at https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/.
PART III
Space and cyberspace: intersection of two security domains
6. The use of space and satellites: problems and challenges

Luciano Anselmo

1 INTRODUCTION

International relations and technological change have been an integral part of the development of space activities since their beginning in 1957. In fact, outer space is international by definition, because any action carried out in orbit around the Earth has implications for many, if not all, countries. Moreover, space was the stage for a fierce technological competition, with military, civilian and commercial fallouts, that started even before the end of World War II (1939–45). Although the boundary between the atmosphere and outer space is still not clearly defined, either technically or legally (McDowell, 2018), Germany’s V-2 guided ballistic missiles crossed it for the first time during the war; in the final phases of the conflict, while the Germans retreated and the allies advanced – Americans and British in the west and Soviets in the east – a race developed on the two fronts to seize the Nazi missiles, technologies, production plants and, above all, the engineers and technicians who had created them under the direction of Wernher von Braun (Baland, 2007; Burrows, 1998; Heppenheimer, 1997). After the conflict and the almost immediate escalation of geopolitical tensions between the Soviet and Western blocs that led to the Cold War (1946–91), the combination of nuclear weapons and emerging ballistic missile technologies became central to gaining and maintaining global strategic supremacy. The development of powerful and reliable rockets for the long-range delivery of nuclear bombs therefore received high priority in both the Soviet Union and the United States. As a result of these efforts, around the mid-1950s both superpowers found themselves in possession of the technology to launch satellites into orbit, but the Soviet Union, due to the initial requirement of transporting much heavier1 thermonuclear warheads over intercontinental distances, was already making substantial progress on a much more powerful rocket than their American competitors, the so-called R-7 or Semiorka,
development of which was managed by Sergei P. Korolev (Baland, 2007; Burrows, 1998; Heppenheimer, 1997). It was thus thanks to the R-7, to Korolev’s bold vision and to the extreme caution of the Eisenhower administration – concerned about the legal status of a satellite and how to avoid it being interpreted as a hostile act (Taubman, 2003) – that the Soviet Union entered history as the initiator of the Space Age, launching the first artificial satellite around the Earth, Sputnik 1, on 4 October 1957. This date also marks the beginning of the Space Race between the two superpowers, a Cold War propaganda and technology ‘battle’ fought out through ‘space firsts’ (first satellite around the Earth, first probe to the moon, first man in orbit, first probes to Venus and Mars, first extravehicular activity, first docking in space, first human landing on the moon, and so on) to capture the admiration and respect of the whole world because, as pointed out by one of its most influential supporters on the American side, the senator, vice-president and then president Lyndon B. Johnson, at that time, in the eyes of the people, being first in space meant being first, full stop (Baland, 2007; Burrows, 1998; Erickson, 2018; Heppenheimer, 1997).

The first successful human moon landing, carried out by NASA on 20 July 1969 and crowning the challenge launched by President John F. Kennedy just eight years earlier, concluded the hot phase of the Space Race, but keen competition between the United States and the Soviet Union continued until the dissolution of the latter and the end of the Cold War in 1991. The following decades saw signal examples of deep cooperation between Americans and Russians, such as the development, assembly, maintenance and operation of the International Space Station, which also involved major European, Japanese and Canadian participation. The Russian Soyuz spaceships were also the sole means of access to space for American astronauts between the withdrawal of the Space Shuttle in 2011 and the entry into service of the American commercial space capsules Crew Dragon, in 2020, and CST-100 Starliner, in 2021. This kind of cooperation will probably continue in the 2020s with the development of a space station in orbit around the moon, the Gateway, to support human and robotic access to the lunar surface, as well as deep space exploration. However, with the revival of Russian ambitions after the collapse of the Soviet Union, competition in space, especially in the military and intelligence fields, has progressively returned to the fore.

Concerning the new superpower of the twenty-first century, China, space is certainly important, but it does not seem a top priority as it was for the Soviet Union and the United States in the 1950s and 1960s. The Asian giant has ambitious long-term goals, but it is pursuing them without haste, allocating just the resources necessary for the long-term sustainability of its programmes.
An example of this is their replication, at a quite slow and steady pace, of the main Soviet and American technical space accomplishments of the 1960s and 1970s, after a lapse of several decades (50 years or more). However, compared with the pioneers of half a century ago, China has the great advantage of already knowing the right answers and technical solutions, avoiding the waste of time and resources needed to explore various alternatives. Moreover, even though the Chinese ultimately want to gain full control and in-house reproducibility of the technologies they are using, the latter have often initially been obtained by copying or buying foreign designs, hence saving a lot of time and effort. This methodical, determined, slow but steady pace, supported by a huge number of engineers by western standards, is paying off in terms of building a solid and ever-broadening base of skills and capabilities. This also translates into a growing presence in space, with a steadily improving family of launchers and a wide variety of spacecraft and space missions. In 2018, for example, China carried out more orbital launches, 39, than any other country, compared with 31 by the United States, 20 by Russia and eight by Europe. It already spends more on its space projects than Russia and Japan, but still less than Europe combined, and much less than the United States. In any case, it is quite easy to predict that during the 2020s China will become the United States’ main contender in space, particularly in the military and intelligence fields.

However, the competition among ‘superpowers’ that characterized so much of the first Space Age might not be so prevalent or relevant in the future. Today, a growing number of important applications and critical functions depend on nearly 2000 spacecraft, and this number might increase by an order of magnitude in the 2020s, involving many public and private players, mostly newcomers. The many ambitious new private commercial players entering the field promise a technological revolution and a significant shift of equilibria in orbit, from public to private entities and from large to small satellites. The challenges of the past could therefore pale in comparison with the new problems to be faced in order to guarantee the safe and responsible use of a resource that must be preserved for future generations.

After a brief overview of the current utilization of space, this chapter focuses on the greater risks that loom over any orderly continuation or steady progress of space activities around the Earth, with possible serious repercussions on security and safety at the national, regional and global levels. Space technologies, and the terrestrial applications that increasingly depend on them, are in fact potentially very vulnerable to human actions, be they deliberate hostile acts or the unintended consequences of inappropriate behaviour, but also to natural events that humanity cannot control and can only hope to foresee better and mitigate in the future. The challenges ahead, the problems
pending, and the strategies being devised to properly manage the crucial and rapidly evolving near-Earth space environment are therefore examined here.
2 THE CURRENT USE OF OUTER SPACE AROUND THE EARTH
The Union of Concerned Scientists (UCS) maintains a satellite database listing the active satellites in orbit around the Earth – that is, the satellites manoeuvring and/or communicating. This excludes a few functional spacecraft maintained in hibernation and the passive satellites used, for instance, for laser ranging and radar calibration, but the overall picture provided is 99 per cent representative of the current population of functional spacecraft. As of 1 May 2018, the UCS Satellite Database (UCS, 2018) included nearly 1900 active satellites. As detailed in Table 6.1, nearly one-half were operated by the United States, testifying to that nation’s great dependence on space systems. China and Europe (excluding Russia) operated comparable numbers of spacecraft, around 13 per cent each, followed by Russia, Japan, India, Latin America and Canada. Approximately one-tenth of the active satellites in orbit were left to the rest of the world. It should, however, be emphasized that a by no means negligible fraction of satellites, more than 8 per cent, is operated by multinational entities or collaborations, and a growing number of countries and commercial players are directly involved in space activities and applications. In 1966, only the Soviet Union, the United States, Canada, France, Italy and the United Kingdom operated satellites. Today most countries do so, though this is not yet the case for most of Sub-Saharan Africa and the many small nations of Oceania. Regarding the uses for which the satellites have been launched (see Table 6.2), around 44 per cent belong to the private commercial sector, mainly active in the fields of communications and Earth imaging. This includes data relay, direct television and radio broadcasting, navigation and geolocation services, global wireless communications, and medium- to high-resolution images of the Earth’s surface sold to private and public customers for a wide range of applications, from the search for mineral deposits and oil fields to agricultural planning and land management. In the 2020s, thousands of commercial satellites are planned to provide new worldwide broadband Internet services to individual customers, further increasing the share of privately operated commercial spacecraft in orbit. The satellites for civil applications and science directly owned and/or operated by government entities – that is, public rather than private – still represent more than one-quarter of the total. The scientific spacecraft in orbit around the Earth include astrophysical observatories for the study of the sky over a large fraction of the electromagnetic spectrum, particle detectors for the
study of cosmic rays, probes for the study of the magnetosphere and circumterrestrial space, and a quite heterogeneous variety of satellites devoted to the comprehensive study of the Earth and its environment. Most of the spacecraft operated by government entities for civilian purposes are, however, committed to applications such as meteorology, navigation, communications among public agencies, remote sensing, search and rescue, disaster relief, and so on.
Table 6.1  Distribution of active satellites among countries or geographical areas

Satellite Operator                                                               Fraction of Active Satellites (%)
United States of America                                                         45.6
China                                                                            13.3
Europe (excluding Russia)                                                        12.6
Russia                                                                            7.7
Japan                                                                             3.8
India                                                                             2.9
Latin America                                                                     2.0
Canada                                                                            1.8
Rest of the world                                                                10.3
Multinational entities or co-operations (including the European Space Agency)    8.4

Table 6.2  Use of active satellites

Satellite Utilization                            Fraction of Active Satellites (%)
Commercial                                       43.8
Government civil applications and science        27.7
Military                                         21.2
Academic and amateur applications                 7.3
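Incidentally, shares like those in Tables 6.1 and 6.2 can be recomputed from the UCS database itself. The following is a minimal sketch only: it assumes the spreadsheet has been exported to a local CSV file, and the file name and column names (‘Country of Operator/Owner’, ‘Users’) are assumptions based on typical releases of the database, which have varied over time.

```python
# Minimal sketch: recomputing operator and user shares from a local CSV
# export of the UCS Satellite Database. File and column names are
# assumptions and may need adjusting for a given release of the database.
import pandas as pd

df = pd.read_csv("ucs_satellite_database.csv")  # hypothetical local export

# Share of active satellites by operator country (cf. Table 6.1).
operator_share = (
    df["Country of Operator/Owner"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(operator_share.head(10))

# Share by declared use (cf. Table 6.2). A satellite may list several
# users (e.g., 'Government/Military'), so these shares need not sum to 100.
user_share = (
    df["Users"]
    .str.split("/")
    .explode()
    .str.strip()
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(user_share)
```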
During the first 30 years of the Space Age, military satellites accounted for most of the launches. From the mid-1960s to the second half of the 1980s, at the peak of the Cold War, approximately three-quarters of the satellites put into space had military applications. Today they account for about one-fifth of the total. Nevertheless, it would be completely wrong to conclude that the role and relevance of these systems are declining: the opposite is true. In fact,
the satellites mainly devoted to intelligence, national security and military operations are often very expensive, reliable and capable systems, with long useful operational lifetimes. Their replacement rates are therefore much lower than in the past, greatly reducing the number of new launches needed to maintain full operations. Moreover, during the Cold War, these systems were the almost exclusive domain of the Soviet Union and the United States, while today many countries operate military satellites, including China and several emerging nations. In addition, several spacecraft used for communications, data relay, global navigation and Earth surface imaging are indeed ‘dual use’ systems, sharing both civilian and military customers. Therefore, for all these reasons, never before have so many military satellites been operated at the same time, and never before have so many countries been so critically dependent on satellite intelligence data.

Military satellites are essential in many fields (Anselmo, 2012; Villain, 2009): photo (in visible light and in the infrared) and radar reconnaissance of the ground (that is, Earth imaging) at medium and high resolution (below a few tens of centimetres), early warning of ballistic missile launches (both tactical and strategic) and ballistic missile defence, space surveillance and covert close inspection of foreign satellites in orbit, detection and localization of nuclear explosions, surveillance of the oceans for ships and submarines, surveillance of electronic signals (for instance, radar beams), interception of ground and satellite communications, sensitive data relay, encoded communications between command centres and with operational theatres, real-time teleoperation of unmanned aerial vehicles (long-range strategic drones) on the other side of the world, accurate global navigation (three-dimensional position and velocity) for people and vehicles on the ground, at sea, in the air and in space, precise missile and bomb guidance, high-fidelity digital mapping of the ground, meteorological predictions, solar and geomagnetic activity forecasts, and strategic and tactical situational awareness.

Concerning anti-satellite (ASAT) weapons deployed in space, their early development, carried out by the Soviet Union and the United States, was halted in the 1980s, due to the time (at least several hours) an intercepting satellite would need to manoeuvre toward its intended target, even in low Earth orbit, making the procedure unsuitable for surprise attacks and vulnerable to simple countermeasures. Since then, the efforts of the United States, Russia, China and India – the only countries that have successfully carried out tests of this type – have concentrated on missiles launched from the ground, the sea or the air, intercepting the target satellite on a fast direct-ascent trajectory. An alternative is the use of very powerful laser beams, or other directed-energy weapons, to damage or disturb critical satellite components.

Finally, a not insignificant fraction (around 7 per cent) of the active satellites in orbit, as of 1 May 2018, was operated by academic institutions and
amateur organizations. This is the result of the growing success of microsatellites and CubeSats. The latter, with a volume of one cubic decimetre (that is, 1 litre) and a mass of around 1 kilogram (kg), are opening up the space frontier to players that would have been unthinkable just a few years ago. University departments, start-ups and private organizations can now design, build and fly their own devices in space at an affordable cost, launched piggy-back with large satellites, or together with several tens or a few hundred similar nanosatellites, to minimize the launch price. This trend will probably continue, aiming at still smaller but even more capable spacecraft, with a mass of around 100 grams and a size comparable to a smartphone.

The outer space around the Earth is therefore a precious resource of growing importance to humankind. The global services used every day by governments, international and national agencies, companies and citizens for communications, navigation, meteorological forecasts, land and disaster management, environmental monitoring, transportation safety and national security have become so pervasive over the years that their sudden interruption would create serious problems, with potentially catastrophic outcomes (Dawson, 2018). Peaceful access to space must therefore remain open to every country and be preserved for future generations. Any action, voluntary or accidental, leading to the disruption of space services or to a long-term degradation of the space environment would have significant adverse, and potentially destabilizing, social, economic and political effects, in particular for emerging and developing countries lacking the ground infrastructure and capital to fill the gap.

Regarding utilization of the space resource around our planet, satellite orbits are commonly grouped into a few categories. According to the definitions adopted by the Union of Concerned Scientists, low Earth orbits (LEOs) extend as far as 1700 kilometres (km) from the Earth's surface and account for 63 per cent of operational satellites. Geosynchronous orbits (GEOs) have altitudes of approximately 35 700 km, which guarantees an orbital period identical to a complete rotation of our planet, allowing these satellites to appear near-stationary when viewed from the ground; they account for 29 per cent of operational satellites. Between LEOs and GEOs lie the medium Earth orbits (MEOs), with orbital periods from approximately two to 24 hours, accounting for 6 per cent of operational satellites. The most important MEO altitude band goes from 19 000 km to 24 000 km, where most spacecraft of the main global navigation satellite systems are deployed – the American Global Positioning System (GPS), the Russian Global Navigation Satellite System (GLONASS), the Chinese BeiDou and Europe's Galileo. Finally, 2 per cent of operational satellites travel along highly elliptical orbits (HEOs) that intersect at least two of the previously defined regions.
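The altitude–period relationship behind these categories follows directly from Kepler's third law. A minimal Python sketch, using standard values for the Earth's gravitational parameter and radius (the numerical choices are illustrative, not taken from the chapter):

import math

MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.137      # Earth's equatorial radius, km
SIDEREAL_DAY = 86_164.1  # one full rotation of the Earth, s

def period_s(altitude_km: float) -> float:
    # Orbital period of a circular orbit at the given altitude (Kepler's third law)
    a = R_EARTH + altitude_km  # semi-major axis, km
    return 2 * math.pi * math.sqrt(a**3 / MU)

def altitude_for_period_km(t_s: float) -> float:
    # Altitude of the circular orbit whose period equals t_s
    a = (MU * (t_s / (2 * math.pi)) ** 2) ** (1 / 3)
    return a - R_EARTH

print(f"Period at 800 km (typical LEO): {period_s(800) / 60:.1f} min")            # ~100.9 min
print(f"Geosynchronous altitude: {altitude_for_period_km(SIDEREAL_DAY):.0f} km")  # ~35 786 km

The second result reproduces the roughly 35 700 km GEO altitude quoted above; the first shows why LEO satellites circle the Earth in well under two hours.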
The peaceful and fruitful exploitation of circumterrestrial space requires a high degree of responsible international cooperation, deeply involving all the players. In fact, the irresponsible behaviour of even one country or entity with the appropriate technical skills could jeopardize space activities in certain orbital regimes for periods ranging from a few hours to several years. The main threats to the peaceful exploitation of space can be categorized as follows: militarization and warfare, orbital debris, and solar storms. The remainder of this chapter is devoted to these issues, concluding with an overview of the still small risks that space operations pose to the inhabitants of the Earth.
3 MILITARIZATION AND WARFARE
Even though a significant fraction of the active satellites in orbit play important roles in military intelligence, surveillance and operations, space itself has not yet been a theatre of warfare, at least not in any evident way (Anselmo, 2012; Dawson, 2018). At the beginning of the Space Age, however, this was considered a viable option – for instance, planning the stationing and use of nuclear weapons in orbit. In 1958, as part of Project Argus, the United States detonated three small 1.5 kiloton (kt) devices at a height of 480 km, to verify the ability to pump electrons into the Van Allen radiation belts, but the experiment did not produce the desired results. In the Starfish experiment of 1962, a 1.5 megaton (Mt) nuclear bomb was detonated at a height of 400 km over the Pacific Ocean above Johnston Island; it not only knocked out several satellites, but also caused an electric and telephone blackout in the Hawaiian islands, 1300 km away. The Soviet Union likewise carried out four nuclear tests in space: two in 1961, of 1.2 kt, and two in 1962, of 300 kt. These plans, however, were soon abandoned due to their questionable military advantages and unpredictable adverse effects, leading to the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water of 1963 (Union of Soviet Socialist Republics et al., 1963) and the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies of 1967 (United Nations, 2002), which have, at least so far, effectively banned the deployment and use of weapons of mass destruction in space and on celestial bodies.

Nevertheless, the intention of transforming space into a battlefield or a receptacle of offensive weapons has never completely died. Since the 1980s, proposals in this direction have periodically re-emerged but, despite the heated discussions often surrounding them, nearly all were abandoned long before even approaching a simple practical demonstration. The only operational space weapons devised so far are the ASATs (Dawson, 2018). The current versions consist of missiles launched from the ground, from the sea or from aircraft, intercepting their satellite targets with
a direct ascent trajectory (Pardini and Anselmo, 2009a, 2009b). The development of new space-based systems by China, Russia and the United States cannot be ruled out, however, since it is difficult to tell whether a satellite that sneaks up on another intends only to spy on it or could also destroy it. Other systems under active development consist of powerful laser and radio-frequency directed-energy devices, installed on aircraft and on the ground, potentially able to 'blind' or otherwise disable enemy satellites.

In the coming decades, the use of such weapons during military confrontations cannot be excluded (Dawson, 2018), in particular during a limited conflict involving a superpower such as the United States, Russia or China and a country with much lower military capabilities. In such a case the superpower could easily impair the adversary's limited orbital assets without risking significant collateral damage, or retaliation against its own satellites or the global services provided by the spacecraft of third parties. However, in the event of a conflict between two nuclear superpowers, transforming the space around the Earth into a battlefield would have quite serious, unpredictable and destabilizing consequences, and the availability of operational ASAT weapons might simply play a deterrent role against their use by the opponent. Apart from the possible global disruption of important satellite services, such as communications and navigation, the effective impairment of worldwide intelligence and surveillance capabilities would have destabilizing effects, raising the level of the clash and dangerously narrowing the distance to a strategic and nuclear confrontation. It is therefore possible that, during a local or theatre-limited conflict involving two superpowers on opposite sides, space warfare would be subject to the same self-restraint applied to the first use of nuclear weapons. Moreover, while many of the military functions performed by orbital systems could be replaced on the battlefield using other means and technologies, such as aerial drones, inertial navigation and over-the-horizon communications, the maintenance of efficient global intelligence and surveillance capabilities throughout the conflict would be of paramount importance for reaching a truce, verifying an agreement and re-establishing peaceful relations. For these reasons too, avoiding hostile actions in space could be beneficial for all contenders and for the international community at large. Unfortunately, this does not exclude the possibility that a rogue state, equipped with nuclear weapons and missiles, might in certain circumstances launch a more or less indiscriminate attack in space, with the aim of causing as much damage as possible on a global scale.
4 ORBITAL DEBRIS
As of 19 September 2018, the United States, using observations acquired by the US Space Surveillance Network – about 20 ground-based radars and optical telescopes, plus some dedicated spacecraft – maintained a catalogue of more than 19 200 trackable artificial bodies in orbit around the Earth with sizes larger than approximately 10 centimetres (cm) (Space Track Organization, 2018). They were the result of more than 5400 launches over 61 years of space operations, which left more than 4850 satellites in orbit, including nearly 3000 defunct spacecraft, more than 2170 propulsion stages of the rockets used for launches, and nearly 12 200 pieces of debris produced by the discarding of components that had accomplished their function, by accidental and intentional collisions, by accidental and intentional explosions, and by low-energy fragmentations due to natural causes (Bonnal, 2016; Bonnal and McKnight, 2017; Klinkrad, 2006). In addition to these known objects, it has been estimated that several hundred thousand pieces of debris larger than 1 cm and a few hundred million larger than 1 millimetre (mm) – all of artificial origin and capable of seriously damaging an operational satellite in case of impact – are present in orbit as well.

The total mass present in space, about 8000 metric tons (t) – slightly more than the Eiffel Tower – is not in itself very large, especially considering the enormous volume of space in which it is distributed. Even in the most populated region of space, at a height of about 800 km, a cube 250 km on each side contains, on average, only one object larger than 10 cm. What makes the difference, however, is the extremely high relative speed. In LEOs, where more than 80 per cent of the objects and 40 per cent of the mass are concentrated, the average relative speed among debris is about 10 km/s – that is, 36 000 km/h – so a separation of 250 km in space can correspond to a separation of just 25 seconds in time, and small perturbations of the trajectories, due to natural causes, are sufficient to lead to breathtakingly close approaches. Every day, hundreds of close approaches at less than 1 km occur, and dozens of collision warnings are issued around the world to satellite operators, often requiring evasive manoeuvres when the threatened spacecraft are able to perform them. Even when nothing noteworthy happens, there is a considerable cost in terms of space surveillance, conjunction analysis and operational disruption for the satellites requiring an avoidance manoeuvre.

Accidental collisions, however, represent a greater problem: due to their high relative speeds, the objects orbiting the Earth store a huge amount of kinetic energy, equivalent to that released by three nuclear bombs of the type dropped on Hiroshima. Even a millimetre-sized particle
of debris hitting a satellite can damage sensitive sensors or pierce a bundle of unprotected electrical cables. A centimetre-sized particle can release the energy of a hand grenade. Finally, if an intact object (satellite or rocket stage) is struck by an object of 10 cm or more, the energy released by the impact can generate hundreds or thousands of new decimetre-sized fragments, potentially lethal for other rocket stages and spacecraft. This mechanism, if unchecked, may in the long run cause a kind of destructive chain reaction, particularly in LEOs (Kessler and Cour-Palais, 1978). The cost of operating satellites in the altitude ranges most affected might then become too high, precluding the use of many particularly useful orbits for a long time.

To prevent this outcome, specific mitigation measures have been gradually implemented since the 1980s to limit the production of new orbital debris and, when possible, to reduce its number and concentration through a clever exploitation of orbital perturbations – for example, residual atmospheric drag at low altitudes, luni-solar gravitational attraction, or solar radiation pressure at higher altitudes (Chobotov, 2002). Since the 1990s, the Inter-Agency Space Debris Coordination Committee (IADC), within which the main space agencies of the world are represented (IADC, 2018), has promoted coordinated studies and issued agreed recommendations (IADC, 2007), also adopted by the United Nations (United Nations, 2007, 2010), for: preventing explosions and collisions in orbit, either intentional or accidental; reducing the release of debris in the course of normal operations; removing spacecraft and rocket stages at the end of their operational life from the most crowded and precious space regions, such as the geostationary orbit; limiting to less than 25 years the permanence in LEOs of abandoned or non-manoeuvrable spacecraft and rocket stages; and protecting active satellites with impact shields against debris up to 1 mm (in certain cases, such as the International Space Station, up to 1 cm) and with avoidance manoeuvres against trackable debris, typically larger than about 10 cm. These mitigation guidelines, increasingly included in national and international standards (see, for example, International Organization for Standardization [ISO], 2011) and in the laws of a few countries, such as France and Russia, have obtained some encouraging results, in particular in GEOs, but might not be sufficient in the long term to guarantee the desired preservation of the LEO environment. In other words, actions might be needed not only to avoid the creation of new debris from now on (so-called 'mitigation'), but also to actively remove what has been launched in the past (so-called 'remediation') – for instance, progressively and selectively cleaning up the most critical regions of LEO space of the largest objects, which represent the major sources of potential new fragmentation debris (Klinkrad and Johnson, 2013).
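The impact energies just mentioned are easy to verify. A minimal Python sketch, assuming aluminium spheres and the 10 km/s relative speed quoted above (the density and TNT-equivalence figures are standard reference values, not taken from the chapter):

import math

RHO_AL = 2700.0        # density of aluminium, kg/m^3
V_REL = 10_000.0       # typical LEO relative impact speed, m/s
TNT_J_PER_G = 4184.0   # energy released by 1 gram of TNT, J

def impact_energy_j(diameter_m: float) -> float:
    # Kinetic energy of an aluminium sphere hitting at V_REL
    mass = RHO_AL * math.pi / 6 * diameter_m**3
    return 0.5 * mass * V_REL**2

for d_mm in (1, 10):
    e = impact_energy_j(d_mm / 1000)
    print(f"{d_mm} mm sphere: {e:,.0f} J (~{e / TNT_J_PER_G:.1f} g of TNT)")
# 1 mm:  ~71 J, enough to damage exposed sensors
# 10 mm: ~70 700 J, the order of a hand grenade, as noted in the text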
However, a comprehensive and effective debris remediation strategy aimed at the active removal of abandoned spacecraft and rocket stages would be extremely expensive with current technologies, and would require deep and long-lasting international cooperation involving all the main space players to address several critical problems: how to share the costs among nations, and between public and private entities; the assignment of priorities to thousands of potential targets (Pardini and Anselmo, 2018); object ownership, confidentiality and liability issues; and so on. Many studies, several development activities and a few demonstration missions are currently underway, conducted by single nations or by the European Space Agency (ESA), but an effective debris remediation roadmap would be impossible without worldwide coordination, planning and action.

To further complicate an already fragile situation, there are plans to deploy in LEOs, during the 2020s, various so-called 'mega-constellations', each consisting of several hundred or several thousand satellites, mostly operated at the same altitude, to provide wideband Internet services with continuous global ground coverage. Satellite deployment for the first two, OneWeb and Starlink, started in 2019. Even without resorting to complicated calculations and detailed analyses, it is quite evident that switching from a few hundred new objects launched each year to a few thousand might have a profound and long-lasting environmental impact in LEOs. Moreover, these mega-constellations are being developed and will be operated by private companies, albeit subject to the regulatory regimes of the countries in which such space systems are registered or licensed. All the players involved – the companies and the national regulatory agencies – have so far shown a high degree of environmental awareness and a willingness to apply every possible measure to mitigate the consequences of these systems in LEOs, but the reliability goals to be met are very demanding and much higher than those demonstrated so far in real space operations (Anselmo and Pardini, 2019). The challenge is therefore extremely ambitious, and every failure might have a profoundly negative and long-lasting impact on the debris environment in LEOs. The lack of an internationally agreed and binding set of regulations addressing the deployment, operation and end-of-life disposal of mega-constellations certainly represents an important open issue in the field of international relations.
5 SOLAR AND GEOMAGNETIC STORMS
Military operations, space warfare and orbital debris do not represent the only threat to satellites and the services provided by them; nature, in its most violent manifestations, can be just as threatening. Obviously, spacecraft are designed and built to work in an extremely hostile environment, characterized
by near-perfect vacuum conditions, extremes of temperature and illumination, electromagnetic and corpuscular radiation of various origins (solar, galactic and intergalactic), impacts of micrometeoroids and small orbital debris, and so on (Tascione, 1988). Occasionally, however, special environmental conditions can occur, causing onboard malfunctions, transient problems or permanent failures.

The atmosphere of the Earth does not stop suddenly at the lowest satellite altitudes, but gradually vanishes, with an approximately exponential decrease, until it merges with the prevailing conditions of interplanetary space several thousand kilometres up. This rarefied gas envelope is far from static: its temperature, density and composition change as a function of solar activity. In fact, the high-energy electromagnetic radiation, mostly extreme ultraviolet (EUV) and X-rays, and the particles of solar origin, mainly positively charged protons and negatively charged electrons, impinge on the atoms and molecules of the upper atmosphere, with significant energy transfer and consequent effects. The magnetic field of the Earth, interacting with the charged particles and magnetic fields of solar origin – that is, with the so-called solar wind – creates around our planet a magnetic bubble several terrestrial radii wide, stretched into a very long tail pointing away from the sun. The magnetosphere, as this very complex region of space is named, shields our planet and low satellite orbits from solar and cosmic charged particles, and prevents the erosion of the atmosphere by the solar wind over the long term. It is a highly dynamic structure that responds dramatically to solar variations and their interaction with the magnetic field of the Earth, either deflecting or capturing protons and electrons in the so-called Van Allen radiation belts. The variation of the environmental conditions of the space around the Earth, mainly due to solar activity and its interaction with the upper atmosphere and the magnetosphere, is commonly referred to as 'space weather'.

The sun is a quiet star, producing and releasing an enormous amount of energy at an extremely constant rate, at least over the time scale of human civilizations. For instance, over the past 1000 years the average energy output has changed by less than 0.015 per cent (Crowley, 2000). However, its activity varies according to a cycle of about 11 years, which modulates only a small fraction of the total energy emission, around 0.1 per cent (Frohlich, 2000; Wilson and Mordvinov, 2003), but one concentrated in very energetic events (solar flares) and, in the case of electromagnetic radiation, at very high frequencies (EUV and X-rays). Solar flares are often followed by colossal expulsions into space of charged particles and the accompanying magnetic fields (coronal mass ejections). If expelled in the right direction, this matter can reach the Earth within one, two or three days, interacting with the magnetosphere and triggering
a geomagnetic storm. The X-ray and extreme ultraviolet radiation emitted by solar flares, travelling at the speed of light, reaches our planet in little more than eight minutes, disturbing the ionosphere – that is, the ionized layers of the upper atmosphere – and long-distance high-frequency (HF) radio communications. Extreme ultraviolet radiation, X-rays and even geomagnetic storms also cause heating and expansion of the upper atmosphere, changing the environmental conditions at satellite altitudes.

The high-energy electromagnetic radiation emitted by solar flares can significantly affect the sunlit side of the Earth's outer atmosphere (National Oceanic and Atmospheric Administration [NOAA], 2021). Severe events, occurring on average eight times per solar activity cycle, may cause, for one to two hours, increased positioning errors due to low-frequency navigation signal disturbance, and HF radio communication blackouts on most of the sunlit side of our planet. Much rarer extreme events (fewer than one per solar activity cycle) extend their effects over several hours, with loss of HF radio communications with mariners and aviators, outages of the low-frequency (LF) navigation signals used by maritime and general aviation systems, and increased satellite navigation errors that may also affect users on the dark side of the Earth.

Solar radiation storms occur when a large-scale coronal mass ejection (CME), caused by a magnetic eruption associated with a solar flare, accelerates charged particles in the sun's corona to very high velocities. The most abundant massive particles, protons, can be accelerated to one-third of the speed of light, reaching our planet in just 30 minutes. Due to their high speed, they are able to penetrate the magnetosphere, hitting the upper atmosphere in the polar regions, guided down by the lines of the Earth's magnetic field. Severe radiation storms, occurring on average three times per solar cycle, represent a hazard to the health of astronauts during unprotected extra-vehicular activities (EVA), and to passengers and crew in high-flying aircraft at high latitudes. Satellite operations may be affected by memory device problems, noise in imaging systems and degradation of solar panels. The temporary loss of sensitive components controlling fundamental spacecraft functions may cause serious problems, such as a loss of attitude control due to the outage of star-tracker sensors. Moreover, for several days, a blackout of HF radio communications through the polar regions and increased navigation errors are unavoidable. Less frequent extreme storms, occurring at a rate of less than one per cycle, may significantly increase the radiation hazard for astronauts in deep space or during EVA, and for people flying in high-altitude aircraft at high latitudes. Satellite operations may be heavily jeopardized, with permanent damage to solar panels and critical electronic components, serious degradation of images, temporary loss of control and possibly unrecoverable spacecraft failure.
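The arrival times quoted above are simple to check. A minimal Python sketch, taking 1 astronomical unit as the Sun–Earth distance; the representative CME speed of 1000 km/s is an assumption for illustration, not a figure from the chapter:

AU_KM = 149_597_870.7   # Sun-Earth distance, km
C_KM_S = 299_792.458    # speed of light, km/s

photon_min = AU_KM / C_KM_S / 60         # flare X-rays and EUV
proton_min = AU_KM / (C_KM_S / 3) / 60   # protons at one-third of c
cme_days = AU_KM / 1000 / 86_400         # a fast CME at ~1000 km/s

print(f"Flare radiation: {photon_min:.1f} min")    # ~8.3 min
print(f"Energetic protons: {proton_min:.0f} min")  # ~25 min (the chapter rounds to 30)
print(f"Fast CME: {cme_days:.1f} days")            # ~1.7 days, within the 1-3 day range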
Navigation operations may become extremely difficult and HF communications through the polar regions practically impossible (NOAA, 2021).

Geomagnetic storms occur when there is a very efficient transfer of energy from the solar wind to the magnetosphere, producing major changes in the electric currents, charged particle distribution and magnetic field lines. Significant storms are relatively frequent: on average, 100 severe and four extreme ones per solar activity cycle (ibid.). During extreme storms, spacecraft may experience extensive surface charging and electrostatic discharge, mainly in GEOs, problems in maintaining the correct orientation in space, and disruption of telemetry and communications uplink and downlink. In the upper atmosphere, the circulation pattern is suddenly reversed, winds increase, and the constituent species are redistributed because of the enormous growth in energy deposited at high latitudes. At those times, due to the abrupt and sizable change in atmospheric drag, the orbits of spacecraft and catalogued debris are perturbed, in particular below 1000 km, and the tracking of many objects is temporarily lost, requiring from several hours to a few days for a full recovery (Pardini, Moe and Anselmo, 2012). HF radio propagation may become impossible in many areas for one to two days, satellite navigation may be degraded for days, and LF radio navigation can be suppressed for hours. Extreme geomagnetic storms also have a major impact on important ground infrastructure on which modern societies depend, such as power systems and pipelines. Transformers may be damaged, and some distribution grids, in particular at high boreal latitudes, may experience complete collapse or blackouts caused by widespread voltage control problems or the activation of protective systems. The storms may also induce electric currents of hundreds of amperes in pipelines (NOAA, 2021).

Our civilization has been vulnerable to extreme space weather events since the development of the electrical telegraph in the 1840s. From then on, the technological progress leading to large and integrated ground infrastructure for electrical power generation and distribution, communications, navigation and transportation, together with the conquest and exploitation of outer space, has made human-built systems, and humans themselves, very susceptible to the effects of the space environment. The events described above, occurring every few years or decades, already cause considerable damage, interruptions of sensitive services, costs and indirect victims. Even military units operating in remote areas of the globe have been adversely affected, sustaining human losses, when unfavourable space weather conditions disrupted navigation signals and search and rescue missions. However, it is well known that much more severe storms of solar origin could take place, with truly catastrophic effects on a global scale.

The largest storm recorded so far occurred in 1859 and is known as the Carrington Event (Clark, 2009). Fortunately, at that time the world was still
little dependent on electricity and electronics and, apart from the spectacular Northern Lights, visible even from Cuba, Italy and Hawaii, the main consequences were erratic compasses and the interruption of telegraph services throughout Europe and North America. If an event like this occurred today, it would be devastating, affecting almost every aspect of modern life that relies on electronic devices, long-range communications, satellite navigation systems, power distribution, pipelines and several other vital services. Many if not most of the active satellites in orbit would fail, power blackouts might hit tens or hundreds of millions of people, fundamental services in transport, sanitation and medicine would be damaged, and the global economic cost would range from a few hundred billion to a few trillion euros (Lloyd's and Atmospheric and Environment Research, 2013; National Research Council, 2008; Oughton et al., 2017). A full recovery of the pre-superstorm situation could take several years or more. The prospect of a new Carrington Event is far from negligible – the probability might be around 10 per cent over a decade – and evidence of even more extreme solar storms is being discovered in the relatively recent geological record (O'Hare et al., 2019). The world therefore needs a high level of preparedness and cooperation to improve space weather prediction and alert capabilities, to mitigate the adverse effects as much as possible, to render spacecraft in orbit and vital infrastructure on the ground much less vulnerable, to prepare emergency plans and back-up systems, and to avoid heavy economic losses and human suffering (National Research Council, 2008; National Science and Technology Council, 2015a, 2015b).
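To put the quoted 10 per cent per decade in perspective, a minimal sketch of the cumulative probability over longer horizons, assuming – purely for illustration – that the per-decade probability is constant and independent across decades:

P_DECADE = 0.10  # assumed probability of a Carrington-class storm per decade

for decades in (1, 3, 5, 10):
    p = 1 - (1 - P_DECADE) ** decades
    print(f"{decades * 10:>3} years: {p:.0%}")
# 10 years: 10%; 30 years: 27%; 50 years: 41%; 100 years: 65%

Under this simple model, a Carrington-class storm within the working lifetime of infrastructure built today is closer to an even bet than to a remote possibility.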
6 GROUND RISKS OF SPACE OPERATIONS
On 24 January 1978, a Soviet military satellite for oceanic surveillance, Kosmos 954, equipped with a nuclear reactor, crashed after an uncontrolled re-entry over a sparsely inhabited area of Canada's Northwest Territories. A huge search and recovery campaign was immediately undertaken, with the essential participation of military means and personnel from the United States, leading to the survey of 124 000 km2, the identification of sparse debris over a 600-km-long strip and the recovery of several large, highly radioactive fragments (Angelo and Buden, 1985). Since then, the uncontrolled re-entries of potentially hazardous spacecraft and orbital stages have periodically attracted worldwide attention from the media and the public, as in the more recent case of the first Chinese space station, Tiangong-1, in 2018 (Pardini and Anselmo, 2019). Although launches of Earth satellites equipped with nuclear reactors were suspended in 1988, after a couple of further incidents (Kosmos 1402 and Kosmos 1900), and the deployment and use of nuclear power systems in
circumterrestrial space is now regulated by the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (United Nations, 1992), uncontrolled re-entries pose a danger to people on the ground and to air traffic, because not all the mass of returning objects is vaporized in the upper layers of the atmosphere, and several fragments, in some cases of a few kilograms or more, may reach the Earth's surface at a speed of hundreds of kilometres per hour (Pardini and Anselmo, 2019). Currently, more than 70 per cent of all satellites and orbital stages re-entering the atmosphere are not controlled, which corresponds to about 50 large intact objects and about 100 t per year (Pardini and Anselmo, 2013b). Fortunately, so far there have been no casualties attributable to events of this kind, and the relative risk will likely remain very small compared with many everyday hazards: the individual probability of being hit by a fragment of a re-entering satellite is tens of thousands of times lower than that of being struck by lightning, and a million times lower than that of being killed in a domestic accident. But as the human population and air traffic increase, and ever larger satellites are placed into orbit, the risk is nevertheless likely to grow. For this reason, and even though the Convention on International Liability for Damage Caused by Space Objects of 1971 (United Nations, 2002) already regulates the liability issues among states, several national and international guidelines and standards recommend the controlled de-orbiting of any space vehicle whose re-entry could harm someone, anywhere in the world, with a probability of 1 in 10 000 or more. If this is not possible, the orbital decay of those objects must be carefully monitored, alerting all the areas of the planet potentially affected a few hours before re-entry (Pardini and Anselmo, 2013a). Since the 2010s, this has been done several times by the IADC, which has promoted coordinated re-entry campaigns for some safety-sensitive objects, effectively fostering the exchange of observational data and re-entry predictions among the main space-faring countries. At the same time, innovative materials and construction techniques are being studied to make new satellites and orbital stages capable of disintegrating as much as possible in the upper layers of the atmosphere, thus limiting the production of fragments capable of reaching the ground (Heinrich, Martin and Pouzin, 2018).

In addition to the ground risks of uncontrolled re-entries, special attention has been paid in the past to the possible adverse effects on the atmosphere of re-entry smoke particles and, especially, of plume exhausts from rocket launches. The production of greenhouse gases, such as carbon dioxide and water vapour, is and will remain negligible compared with industrial sources. Nevertheless, tiny particles of soot and alumina expelled by solid rocket boosters may reach the stratosphere, contributing to the depletion of the ozone layer that is essential for protecting life on our planet from dangerous ultraviolet radiation of solar origin. All the studies carried out so far suggest that any
significant depletion of the ozone layer would require launch traffic much higher than at present (by one or two orders of magnitude), but further refined studies might be needed to settle the matter conclusively, and the issue was included in the United Nations 2018 Quadrennial Global Ozone Assessment (World Meteorological Organization [WMO], 2018). The international success achieved in banning the worldwide use of chlorofluorocarbons (CFCs), known for their role in the depletion of the stratospheric ozone layer, and the progressive transition by all the major space agencies to more environmentally friendly, so-called 'green', rocket propellants augur well for an effective and responsible management of the problem.

Finally, a few concluding remarks concern the safety of rocket launches. The safety of launch bases and operations is obviously subject to the national laws, standards and regulations of the launching state, but the fall of discarded stages, or of failed rockets and payloads, on other countries has significant repercussions for international relations. The matter is generally regulated by the Convention on International Liability for Damage Caused by Space Objects of 1971 and by the Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space of 1967 (United Nations, 2002). A paradigmatic case is the Russian lease (until 2050) and use of the Baikonur Cosmodrome, which found itself in Kazakhstan after the break-up of the Soviet Union. Due to the ascent trajectory, at each launch some spent rocket stage necessarily falls on Kazakh territory, with considerable environmental impact, particularly when rockets with highly toxic propellants, such as the Proton, are used. The situation may become especially critical in the case of early launch failures, when the entire rocket falls with almost all its propellants still on board. So far the matter has been settled by a fixed payment from Russia to Kazakhstan of $115 million per year and by periodic clean-ups of the territories involved, but the disputes between the two countries, never completely resolved, periodically resurface.
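Returning to the 1-in-10 000 criterion mentioned above: in practice it is checked by computing a 'casualty expectation' for each re-entry – the expected number of people struck, given the total casualty area of the surviving fragments and the average population density under the re-entry track. A minimal Python sketch with purely illustrative numbers (the specific density and area values are assumptions, not figures from the chapter):

# Casualty expectation E = population density x total casualty area
POP_DENSITY = 15e-6      # average density under the track, people per m^2 (~15 per km^2)
CASUALTY_AREA_M2 = 8.0   # summed casualty area of surviving fragments, m^2
THRESHOLD = 1e-4         # the 1-in-10 000 criterion

e_casualty = POP_DENSITY * CASUALTY_AREA_M2
print(f"Casualty expectation: {e_casualty:.1e}")  # 1.2e-04
print("controlled de-orbit recommended" if e_casualty > THRESHOLD
      else "uncontrolled re-entry acceptable")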
7 CONCLUSIONS

The critical role played by space activities in a growing number of applications, deeply interconnected with modern life, cannot be overstated. Any disruption of satellite operations might in fact lead to considerable outages of vital services, affecting the economy, security, safety and human lives. Orbital debris – that is, the multitude of trackable and untrackable artificial objects produced by space activities and no longer functional – could, if left unchecked, pose a medium- and long-term threat to space operations, especially in low Earth orbit, due to a possibly unsustainable collision
probability. However, broad international cooperation, involving all the relevant actors, is in place at various levels. Since the 1990s, widespread awareness of the problem has been created among public and private entities in several national and supranational bodies; mitigation measures have been recommended, sometimes enforced and partially implemented worldwide; and ambitious new mitigation and remediation initiatives are under study, discussion and development. Taking into account this good and wide level of international cooperation, and the encouraging results already obtained, there is room for cautious optimism. Moreover, even in worst-case scenarios, with the current and envisaged launch traffic, runaway collisional cascading (Kessler and Cour-Palais, 1978) would take at least a few decades to develop (Anselmo et al., 2001; Pardini and Anselmo, 2014) – not just a few hours, as suggested in the popular movie Gravity, directed by Alfonso Cuarón in 2013 – and in any case only specific altitude bands in LEOs would be affected. The possible outage of satellite services would therefore build up gradually over a relatively long time, many satellites providing fundamental services in navigation and communications – for instance, in MEOs and in GEOs – would remain safe, and the world would have enough time to react and recover, albeit at a significant, but not catastrophic, economic cost.

A limited space combat linked to a ground conflict between a superpower and a country of lesser military rank would probably have minor consequences on a global scale and no significant adverse effects on important satellite services. In the case of overt confrontation between superpowers, however, the outcome might be quite different, with extended damage to military and dual-use satellite systems and likely collateral damage to purely civilian satellites. With current and envisaged ASAT weapons, whether ground based or deployed in space, knocking out all the satellites of the enemy without resorting to nuclear detonations would require at least several days and the maintenance of very efficient command, control, coordination and operational capabilities. However, an all-out confrontation among superpowers, even one not resorting to nuclear warheads, would probably impair, in the earliest phase of the hostilities, the complex capabilities needed for an effective ASAT strategy. In a low-intensity confrontation, on the other hand, an attack in space would be interpreted as a serious destabilizing escalation, just short of a nuclear attack, so it would probably be avoided. An unrestrained space war is therefore considered neither advantageous nor probable within the strategic environment and the technology developments foreseen for the coming decades. Nevertheless, a nuclear confrontation on the ground, or a nuclear attack carried out in space by a rogue nation or entity, would necessarily reverse this optimistic prediction – even though, in the case of nuclear war, the sudden loss of most operational satellite systems would represent one of the lesser problems for the inhabitants of the
Earth. The wide and growing dependence of our civilization on vital satellite systems, and on complex and deeply interconnected technological infrastructure, renders modern societies particularly vulnerable to extreme space weather events, such as solar and geomagnetic superstorms. Even relatively common space weather perturbations may have non-negligible effects, and for this reason a number of satellites and interplanetary spacecraft, many ground-based sensors, substantial research efforts, and several collection and analysis centres are devoted around the clock to monitoring the sun and the magnetosphere, in order to provide timely short-term forecasts. Unfortunately, relatively reliable predictions can span at most a few days, and little could be done to mitigate the catastrophic global outcome of a Carrington Event without a worldwide preparedness programme spanning many years.

In conclusion, complex satellite systems and the important applications depending on them have become increasingly vulnerable to irresponsible human behaviour, whether through conflict (space wars) or environmental disregard (orbital debris). However, from a probabilistic point of view and according to current knowledge, the highest risk comes from nature (solar and geomagnetic superstorms), and not only for satellite systems but also for critical ground infrastructure. The paradox is that it was precisely the high degree of technological progress and system integration attained in recent decades that rendered humankind so vulnerable to extreme space weather events.
NOTE

1. In the 1950s, the development of ballistic missiles and thermonuclear weapons proceeded in parallel, and the missiles had to be designed 'before' the warheads they were intended to carry were available. The missiles were therefore designed on the basis of preliminary estimates of the bomb masses. In the Soviet Union, this estimate was quite high (several tons), which resulted in a carrier rocket design much heavier and more capable than its American counterparts.
REFERENCES

Angelo, J.A. and D. Buden (1985), Space Nuclear Power, Malabar, FL: Orbit Book Company.
Anselmo, L. (2012), 'La militarizzazione dello spazio', in Giampiero Giacomello and Alessandro Pascolini (eds), L'ABC del Terrore: Le Armi di Distruzione di Massa nel Terzo Millennio, Milan: Vita e Pensiero, pp. 113−43.
Anselmo, L. and C. Pardini (2019), 'Dimensional and scale analysis applied to the preliminary assessment of the environment criticality of large constellations in LEO', Acta Astronautica, 158, 121–8.
Anselmo, L., A. Rossi and C. Pardini et al. (2001), 'Effect of mitigation measures on the long-term evolution of the debris population', Advances in Space Research, 28, 1427−36.
Baland, P. (2007), De Spoutnik à la Lune: L'Histoire Secrète du Programme Spatial Soviétique, Nîmes: Editions Jacqueline Chambon.
Bonnal, C. (2016), Pollution Spatiale: L'Etat d'Urgence, Paris: Editions Belin.
Bonnal, C. and D.S. McKnight (eds) (2017), IAA Situation Report on Space Debris – 2016, Paris: International Academy of Astronautics (IAA).
Burrows, W.E. (1998), This New Ocean: The Story of the First Space Age, New York: Random House.
Chobotov, V.A. (ed.) (2002), Orbital Mechanics, 3rd edition, Reston, VA: American Institute of Aeronautics and Astronautics.
Clark, S. (2009), The Sun Kings: The Unexpected Tragedy of Richard Carrington and the Tale of How Modern Astronomy Began, Princeton, NJ: Princeton University Press.
Crowley, T. (2000), 'Causes of climate change over the past 1000 years', Science, 289, 270–77.
Dawson, L. (2018), War in Space: The Science and Technology Behind Our Next Theater of Conflict, Cham, Switzerland: Springer-Praxis.
Erickson, A.S. (2018), 'Revisiting the U.S.–Soviet space race: comparing two systems in their competition to land a man on the moon', Acta Astronautica, 148, 376−84.
Frohlich, C. (2000), 'Observations of irradiance variations', Space Science Review, 24, 15–24.
Heinrich, S., J. Martin and J. Pouzin (2018), 'Satellite design for demise thermal characterisation in early re-entry for dismantlement mechanisms', Acta Astronautica, 158, 161–71.
Heppenheimer, T.A. (1997), Countdown: A History of Spaceflight, New York: John Wiley & Sons.
Inter-Agency Space Debris Coordination Committee (IADC) (2007), IADC Space Debris Mitigation Guidelines, Document IADC-02-01, Revision 1.
Inter-Agency Space Debris Coordination Committee (IADC) (2018), 'What's IADC', accessed 30 December 2020 at https://www.iadc-home.org/what_iadc.
International Organization for Standardization (ISO) (2011), International Standard ISO 24113: Space Systems: Space Debris Mitigation Requirements, 2nd edition, Technical Committee ISO/TC 20, Subcommittee SC 14.
Kessler, D.J. and B.G. Cour-Palais (1978), 'Collision frequency of artificial satellites: the creation of a debris belt', Journal of Geophysical Research, 83, 2637–46.
Klinkrad, H. (ed.) (2006), Space Debris: Models and Risk Analysis, Berlin: Springer-Praxis.
Klinkrad, H. and N.L. Johnson (eds) (2013), Space Debris Environment Remediation, Paris: International Academy of Astronautics (IAA).
Lloyd's and Atmospheric and Environment Research (AER) (2013), Solar Storm Risk to the North American Electric Grid, London: Lloyd's.
McDowell, J.C. (2018), 'The edge of space: revisiting the Karman Line', Acta Astronautica, 151, 668−77.
National Oceanic and Atmospheric Administration (NOAA) (2021), NOAA Space Weather Scales, accessed 13 January 2021 at https://www.swpc.noaa.gov/noaa-scales-explanation.
National Research Council (2008), Severe Space Weather Events: Understanding Societal and Economic Impacts, Washington, DC: The National Academies Press.
National Science and Technology Council (2015a), National Space Weather Strategy, Washington, DC: Office of Science and Technology Policy (OSTP).
National Science and Technology Council (2015b), National Space Weather Action Plan, Washington, DC: Office of Science and Technology Policy (OSTP).
O'Hare, P., F. Mekhaldi and F. Adolphi et al. (2019), 'Multiradionuclide evidence for an extreme solar proton event around 2,610 B.P. (~660 BC)', Proceedings of the National Academy of Sciences of the United States of America (PNAS), 116(13), 5961–6.
Oughton, E.J., A. Skelton and R.B. Horne et al. (2017), 'Quantifying the daily economic impact of extreme space weather due to failure in electricity transmission infrastructure', Space Weather, 15, 65–83.
Pardini, C. and L. Anselmo (2009a), 'Assessment of the consequences of the Fengyun-1C breakup in low Earth orbit', Advances in Space Research, 44, 545−57.
Pardini, C. and L. Anselmo (2009b), 'USA-193 decay predictions with public domain trajectory data and assessment of the post-intercept orbital debris cloud', Acta Astronautica, 64, 787−95.
Pardini, C. and L. Anselmo (2013a), 'Satellite reentry predictions for the Italian civil protection authorities', Acta Astronautica, 87, 163−81.
Pardini, C. and L. Anselmo (2013b), 'Re-entry predictions for uncontrolled satellites: results and challenges', in L. Ouwehand (ed.), Proceedings of the 6th IAASS International Space Safety Conference 'Safety is Not an Option', ESA SP-715, Noordwijk: European Space Agency Communications.
Pardini, C. and L. Anselmo (2014), 'Review of past on-orbit collisions among cataloged objects and examination of the catastrophic fragmentation concept', Acta Astronautica, 100, 30−39.
Pardini, C. and L. Anselmo (2018), 'Evaluating the environmental criticality of massive objects in LEO for debris mitigation and remediation', Acta Astronautica, 145, 51−75.
Pardini, C. and L. Anselmo (2019), 'Uncontrolled re-entries of spacecraft and rocket bodies: a statistical overview over the last decade', The Journal of Space Safety Engineering, 6(1), 30–47.
Pardini, C., K. Moe and L. Anselmo (2012), 'Thermospheric density model biases at the 23rd sunspot maximum', Planetary and Space Science, 67, 130−46.
Space Track Organization (2018), Box Score, accessed 19 September 2018 at www.space-track.org.
Tascione, T.F. (1988), Introduction to the Space Environment, Malabar, FL: Orbit Book Company.
Taubman, P. (2003), Secret Empire: Eisenhower, the CIA, and the Hidden Story of America's Space Espionage, New York: Simon & Schuster.
Union of Concerned Scientists (UCS) (2018), 'UCS Satellite Database', accessed 7 September 2018 at www.ucsusa.org/nuclear-weapons/space-weapons/satellite-database.
Union of Soviet Socialist Republics, United Kingdom and United States of America (1963), Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water (Moscow, 5 August 1963), Recueil des Traités 1963, No. 6964, New York: United Nations, pp. 44−92.
United Nations (1992), Principles Relevant to the Use of Nuclear Power Sources in Outer Space, Resolution A/RES/47/68, adopted by the General Assembly, 85th Plenary Meeting, on 14 December 1992, New York.
United Nations (2002), United Nations Treaties and Principles on Outer Space: Text of Treaties and Principles Governing the Activities of States in the Exploration and Use of Outer Space, Adopted by the United Nations General Assembly, ST/SPACE/11, New York.
United Nations (2007), International Cooperation in the Peaceful Uses of Outer Space, Resolution A/RES/62/217, adopted by the General Assembly, 62nd Session, Agenda Item 31, 79th Plenary Meeting, on 22 December 2007, New York.
United Nations (2010), Space Debris Mitigation Guidelines of the Committee on the Peaceful Uses of Outer Space, Office for Outer Space Affairs, Committee on the Peaceful Uses of Outer Space (COPUOS), Vienna.
Villain, J. (2009), Satellites Espions: Histoire de l'Espace Militaire Mondial, Paris: Vuibert.
Wilson, R. and A. Mordvinov (2003), 'Secular total solar irradiance trend during solar cycles 21–23', Geophysical Research Letters, 30, 1199–202.
World Meteorological Organization (WMO) (2018), Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project – Report No. 58, Geneva.
7. Cyber attacks and defenses: current capabilities and future trends

Michele Colajanni and Mirco Marchetti

1 INTRODUCTION

The digital revolution has permeated everywhere, and its effects and evolution will continue for decades. Nowadays, all organizations rely on complex interconnected information systems that gather, process, store and transmit all kinds of information, ranging from recreational and social activities to the control of complex and safety-critical industrial plants, economic systems including business-to-business and business-to-consumer services, logistics and stock markets. In spite of the tremendous increases in processing capabilities and data transmission speeds since the early days of the 1960s and 1970s, the basic principles underpinning information systems have not changed much. Any computer architecture consists of a hardware platform, an operating system, various software applications and network connections. The infinite possibilities of attack derive from an intrinsic flexibility that is also a weakness: the computer is the only 'machine' specifically designed to behave differently depending on the software it executes. Malicious software that an attacker manages to upload onto the 'machine' is executed because the computer does not distinguish between licit and illicit applications.

Focusing on the flexibility of software and the speed of hardware without taking the related risks into account represents the original sin of our community. Industry and academia knew the truth, but they were too busy spreading the digital word everywhere and seizing business opportunities. What mattered was always having novel software and digital services, not good software and robust services. In the 1990s, a few enlightened people tried to disclose the dangers lurking in the digital future, but they were considered technophobic or paranoid. The effects of our original sin are now visible to society as a whole, although few people are aware that 90 percent of cyber vulnerabilities are due to badly implemented software. Today it is impossible to retrofit security onto the digital stuff produced in the last 40 years that still supports private and public
organizations, because nobody knows how to go back and fix all the software vulnerabilities that cyber criminals leverage for their business. On the other hand, though difficult, we think that it is not impossible to change direction. Our society is affected by three problems that society itself should solve:

1. Aggressive time to market in the software industry causes products to be delivered incompletely tested, so that a frequent defect–patch cycle after sale is accepted by customers. This is not normal at all: no other industry is allowed by society to produce and sell defective products and, if it does, it pays in court for its errors. As software has pervaded the world, all organizations and society in its entirety should force the software industry to make robust products or pay for its mistakes. The big tech companies know how to implement robust and secure services when they act as providers (e.g., Amazon Web Services, Microsoft Azure, Google Cloud Platform), hence this is not a technological issue but a market issue: if a customer does not complain, a supplier will not provide.
2. Mainstream users and organizations consider software-based operations more important than secure operations. They perceive security controls as annoying constraints and are not inclined to pay more for better quality. This attitude too should change, hopefully in the near future.
3. The business models prevailing in modern digital society are based on 'free' services that are paid for through data collection. This model, which induces users to expose everything personal on the Net, has created a large white and gray market of data analytics, profiling and targeted ads, but it also facilitates bad guys who use intelligence approaches as a first step in their cyber attacks.

While the basic ideas underpinning early attacks have not changed much over time, attacker profiles and strategies have evolved and adapted to the ever-changing cyber landscape. We may conveniently distinguish three epochs: (1) the early days, before the advent of the World Wide Web; (2) the present era, lasting from 1995 to 2019; and (3) the foreseeable future, from 2020 on. In Section 2, we consider the attacks and defenses of the first period and the attacks of the second period. In Section 3, we focus on the defense capabilities of the second period. In Section 4, we outline future trends and capabilities.
2 CYBER ATTACKS
2.1 Early Days
The earliest cyber attacks on the first multiuser computing systems date back to the 1960s. In two separate security incidents, users of the IBM CTSS mainframe at MIT were able to access other users' passwords (McMillan, 2012; Morris and Thompson, 1979). At that time, there were no technical or legislative defenses in place. A paradigm shift in computer attacks followed the introduction of Internet connectivity among systems: the possibility of remote access to computers paved the way for remote attacks. The first software to replicate itself among networked computers was Creeper in 1971, but the first computer worm with a truly devastating impact was the Morris worm in November 1988. It gained attention in the major media, motivated the foundation of the first Computer Emergency Response Team (CERT) at Carnegie Mellon University and launched the cyber security business on a large scale. Before that event, the security industry was limited to PC antivirus companies such as G Data (Germany, 1985), McAfee (US, 1987), Avira (Germany, 1988), S&S International (UK, 1988), F-PROT (Iceland, 1989) and Symantec (US, founded in 1985, with its first antivirus product released in 1990). Countermeasures were not limited to the technical domain: the United States adopted the first law on computer fraud within the Comprehensive Crime Control Act of 1984 (Jarrett and Bailie, 2010), which was then revised by the Computer Fraud and Abuse Act (18 U.S. Code §1030) in 1986.

2.2 Modern Attacks
In the mid-1990s, the diffusion of the World Wide Web opened the wonderful world of Internet-based services, online news and e-commerce to the entire world population (Amazon was founded in 1995). The combination of a fully interconnected world of computers with the appearance of billions of non-technical users on the Internet provided fertile soil for the modern attacker. Several variants of computer worms and viruses targeted all hardware and software platforms, causing rampant infections, especially among big industries, research institutions and universities. The penetration of Internet technologies throughout our modern economy and society also prompted a rapid evolution of cyber attackers: from the historical hackers motivated by curiosity to modern cyber criminals and early governmental actors. The explosion of malicious activities caused a lexical evolution from 'virus' to 'malware' – that is, any kind of software produced with malicious intent. Before releasing new malware, modern producers test their 'artifacts' against the major anti-virus
Cyber attacks and defenses: current capabilities and future trends
135
software to make sure they are not detected. It is a cat-and-mouse game where there is no doubt that the mice are winning, especially because it is difficult to enforce state-based laws in a native international scenario such as the Internet. The main profiles of cyber attackers, together with their motivations and preferred targets are represented in Figure 7.1. Different attackers may achieve their different purposes by using similar technical attack vectors leveraging system and human vulnerabilities. The main difference lies in the resources available, which are typically limited for hackers, hacktivists and crackers, but can be high for financially supported hacktivists (e.g., terrorists), and extremely high for cyber criminals and state-related attackers. The cyber landscape of the 1995–2019 epoch is characterized by some fundamental trends, including the rapid evolution of Internet-based services and improvements to cyber attacker capabilities. Table 7.1 outlines the main features and the novel cyber security problems characterizing the recent past and present period.
Figure 7.1  The main cyber attacker profiles, motivations and targets, respectively

3 CYBER CAPABILITIES AND ACTORS
This section outlines the main cyber capabilities needed to implement effective defense strategies in the current cyber landscape and identifies its main actors.
Table 7.1  Cyber security trends and issues of the 1995–2019 epoch

Trend: Rise of social networks (Internet technologies)
Problems: Virtually anyone willingly contributes personal and professional information about themselves and their close contacts, thus allowing for fine-grained surveillance of individuals and organizations. Employees often represent the weakest links for penetrating industries and organizational networks through social engineering attacks (that is, attacks leveraging human vulnerabilities – for example, phishing and spear phishing).

Trend: Portable and wearable devices (Internet technologies)
Problems: Portable devices are increasingly used for both work and personal purposes. The concept of a perimeter around enterprise and organizational networks starts to vanish, making it increasingly difficult to design proper defenses. Infections of mobile devices open up new opportunities for personal surveillance and data theft.

Trend: Industrialization of cyber attacks (cyber attackers' capability)
Problems: As the cyber criminal ecosystem becomes a mature economy, industrialization processes drive the development of tools for automatic malware generation, causing an explosion in the number of unique malware samples. While the whole of 2015 saw about 5 million unique malware samples, in 2019 we witnessed a staggering rate of half a million unique samples per day. The effectiveness of traditional antivirus software is decreasing; many vendors integrate signature-based approaches with modern techniques based on machine learning and behavioral detection.

Trend: Advanced persistent threats (cyber attackers' capability)
Problems: Valuable targets and high economic gains justify high investments in the development of sophisticated, targeted and long-lasting attacks: so-called advanced persistent threats (APTs). Traditional defensive approaches are ineffective, and APTs may remain undetected for months or years.

Trend: Appearance of state-related actors (cyber attackers' capability)
Problems: State-related actors have virtually unlimited resources, as well as privileged access to information sources, critical infrastructures, Internet service providers and restricted information.

Trend: Dark web (cyber attackers' capability)
Problems: There is a huge market for information (passwords, leaked data, software vulnerabilities), cyber attack tools and service platforms on sale on the dark web. Nowadays few highly competent attackers create their own tools or find their own vulnerabilities; other competent attackers find it more convenient to buy tools and adapt them to their needs. Most modern cyber criminals simply buy them and use them.
3.1 Main Defensive Capabilities
Cyber security is a continuous process involving three main phases: prevention, detection and reaction. The well-known National Institute of Standards and Technology (NIST) Cybersecurity Framework uses five phases (identify, protect, detect, respond, recover), but cyber security governance schemes tend to be similar. Each phase aims at a different objective, leveraging different management capabilities and technical solutions. The defensive capabilities are well established. There are hundreds of documents deriving from laws, standards and best practices for any type of organization and industry, such as the ISO 27000 family, NATO, PCI DSS, HIPAA, ITIL, COBIT and many other less structured but no less valid recommendations. Heavy reliance on IT infrastructures and the lack of prevention, contingency and mitigation policies are the major drawbacks of this epoch. Frustration comes from the fact that security engineers know what to do, but companies shrink from the recommendations or implement only partial solutions because of costs, the impact on company operations, and the lack of awareness or of a prevention culture. Most C-level executives still think that cyber security is a technological problem and tend to spend on expensive remediation and recovery actions rather than invest in prevention. Users and employees do not follow basic cyber hygiene rules because they underestimate the risks and are highly resistant to change ('We have always done things this way, why should we change?'). By way of example, let us outline the large set of defensive capabilities that pertain to the prevention phase and aim to reduce the cyber vulnerability of an information system:

1. Hardening: the process of improving the security of a system (an application, a computer, a network or a complete information system) by reducing the attack surface exposed to potential attackers. Hardening is commonplace in all security-conscious organizations, and several guidelines and checklists (NIST, 2020) exist that can guide a system administrator in developing a proper hardening process.
2. Software patches: new vulnerabilities are continuously being discovered in any software. This problem can be mitigated (not solved!) by swiftly applying all the security patches released by software makers.
3. Continuous updating of anti-malware software: there are hundreds of thousands of novel malware versions per day. If defense systems are not constantly updated against the latest threats, they quickly become useless.
4. Pen testing and vulnerability assessment: these are fundamental to identifying the soft spots of cyber defenses. They are human-intensive periodic activities requiring expert personnel. The typical once- or twice-a-year approach is being replaced by a more efficacious process based on continuous, semi-automatic vulnerability assessment.
5. Authentication and authorization systems: most systems expose public interfaces to the Internet and comprise a plethora of different users, each characterized by different access privileges. Robust authentication mechanisms are necessary to prevent unauthorized access to these systems and to grant proper access rights to each user (see the sketch following this list).
6. Network defenses: the internal networks that connect hosts belonging to an enterprise or an organization have to be separated from the public Internet. This segregation is achieved through the deployment of firewalls and network appliances that enforce access control rules. Other relevant capabilities for network defense are virtual private networks (VPNs), which establish remote connections from anywhere to the internal network of an organization without exposing the organization to the risk of eavesdropping and tampering.
7. Mobile device management: several companies now provide employees with company-owned mobile devices, while others let employees use their own mobile devices to allow them to work more effectively – a trend referred to as BYOD (bring your own device). Mobile devices introduce a completely new category of security issues. They can easily be lost or stolen, thus leaking corporate data. They allow access to corporate infrastructure, hence can be used by attackers to gain a first foothold inside the corporate network. Moreover, users who are not security conscious perform dangerous actions, such as installing third-party mobile apps without any verification, thus compromising the security of the mobile device. All these risks can be mitigated by mobile device management platforms that remotely control the configuration of mobile devices, geolocate them, impose restrictions on applications and visited websites, and lock or wipe lost or stolen devices.
8. Cryptography: the correct application of state-of-the-art cryptographic techniques is the basis for all secure communication protocols; moreover, it guarantees the confidentiality of sensitive information even when its storage is outsourced to providers that are not fully trusted. Proper deployment of technical solutions based on cryptography requires a deep understanding of its working principles and precise management of the entire cryptographic ecosystem, including passwords, keys and digital certificates. The smallest mistake in cryptographic management can compromise a whole system, allowing attackers to decrypt, access and/or modify sensitive information at rest or in transit.
9. Segmentation at any level: the principle of least privilege has underpinned all secure design since the origins of physical security. It mandates that each user and software component of a complex system should only be granted access to the minimum subset of information and resources needed to perform its duties. In large information systems this principle can be enacted by segmenting and isolating all aspects of the system: physical resources, network architectures, operating systems, applications, information, users and duties. Integrating a complete and valid authentication and authorization system is difficult because it introduces overheads to the management and procedures of an information system in a large organization.
10. Physical security: if attackers are able to gain physical access to a server or a computing device, they will also be able to subvert any logical protection mechanism, given enough time and resources. Hence, physical security and physical access control are two fundamental characteristics of any modern datacenter. Achieving sufficient levels of physical security is more challenging in geographically distributed scenarios characterized by a high number of connected devices that cannot easily be protected from tampering.
11. Security governance: the definition and enforcement of clear policies, procedures and responsibilities is a management duty, and begins from the previously cited authentication and authorization decisions.
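To give a concrete flavour of one of these capabilities, the sketch below illustrates a building block of item 5. It is our own toy example in Python, not drawn from any cited standard: passwords should never be stored in clear text, but as salted, deliberately slow hashes, so that a leaked database does not immediately expose every credential.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted digest suitable for password storage."""
    salt = salt or os.urandom(16)         # unique random salt per user
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt,
        600_000,                          # high iteration count slows brute force
    )
    return salt, digest

def verify_password(password, salt, stored):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

The unique salt defeats precomputed-hash tables, while the iteration count and the constant-time comparison raise the cost of brute-force and timing attacks, respectively.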
This list also identifies the main skills required to build and maintain high levels of security within any large organization. The perfect candidates have a rare mixture of technical skills, teamwork ability and the typical hacker mindset, which includes a tough attitude toward problem solving and the ability to identify original solutions by thinking out of the box. Increasing awareness of cyber threats is putting a lot of pressure on the human resources (HR) departments of companies and organizations, which often lack the specific competencies needed to recruit qualified cyber security experts. According to a poll from KPMG (Temperton, 2014), about 57 percent of senior IT and HR managers in UK companies struggle to retain skilled cyber security staff due to 'aggressive headhunting'. Moreover, about 53 percent of UK companies are willing to recruit 'convicted' cyber attackers. This attitude is unique to the cyber domain: physical security companies would not hire thieves as security guards, or killers as bodyguards. It is important to warn against this practice, since giving a skilled person with a proven propensity to commit crimes full access privileges to many security-relevant systems can lead to serious damage. Remember the motto: 'a leopard cannot change its spots'. Companies and organizations should instead be encouraged in their efforts to build internal competencies by hiring skilled hackers with clean criminal records. Defining cyber criminals as hackers was a journalistic mistake: ethical hackers, commonly referred to as 'white hats', are competent and ethical persons. Such employees are hard to recruit, and even harder to retain, because the hacker mindset ill accords with the hierarchical structure and 'yes boss' culture of large companies. The main characteristics appreciated by white hats but seldom allowed in structured workplaces are:

• flexible working hours, with the possibility of accessing organizations' assets remotely at any time;
• high levels of autonomy within a result-oriented work organization;
• work within small teams of people with similar skills and mutual respect;
• minimization of bureaucracy and control structures;
• challenging and interesting projects that do not fall within the daily routine typical of many IT jobs.

It can be very difficult for a large company to offer such working conditions, but doing so is a key factor in attracting and retaining highly skilled IT security experts. Many white hats may even prefer a lower salary in such a working environment to a higher salary in a more traditional, hierarchical and structured workplace. That these factors are not fully understood, and even less accepted, by HR departments is a significant part of the problem of organizations' lack of defense know-how. We should be aware that although modern organizations use some or all of the defensive technologies listed, the number of cyber crimes and data leaks continues to grow. This apparent paradox is explained by the following, often unsaid, truth: the combination of software vulnerabilities (often due to unpatched software) with the weaknesses of users (employees and suppliers' employees) causes more than 95 percent of successful cyber attacks. Just 5 percent is due to a lack of adequate defensive technologies or to configuration errors by some system administrator. Hence, the cyber game should move from the purely technological arena to the governance arena proper to the executive level. Without clear rules about business priorities, duties, privileges, procedures and controls, no modern cyber defense is feasible, whatever fancy technology is adopted.

3.2 Private Actors
Just as the economic damage caused by cyber attacks is continuously growing, so are investments in cyber security. The global cyber security market has been estimated at US$75 billion and is projected to reach US$243 billion by 2023, at an annual growth rate of about 18 percent. The current landscape shows that many big IT companies are actively engaged in large acquisition campaigns, aiming to incorporate smaller companies and start-ups with vertical solutions and specific capabilities. The common goal of the big IT companies is to offer a complete cyber security portfolio that can satisfy all customer needs, from technical hardware and software solutions to management and governance consultancy. Figure 7.2 shows the top ten IT companies ranked according to the number of cyber security acquisitions performed since 2001 (Suiche, 2016). This ranking is dominated by the largest IT companies, which did not use to have a specific focus on cyber security. The traditional cyber security companies, such as Symantec, Check Point Software Technologies, McAfee and Trustwave Holdings, rank in second, sixth, seventh and eighth place, respectively. These are clear signs of a fast-evolving market in which acquisitions and mergers are not yet over. Cisco is the IT giant that has acquired the most security companies, thus inheriting their know-how. Cisco is a US-based multinational mostly known for manufacturing and selling networking hardware and telecommunications equipment. Its 29 acquisitions denote a clear strategy of gaining ground in the cyber security market. To give an idea of the size of the security market, Table 7.2 summarizes the five biggest cyber security acquisitions made by Cisco for which the terms and price have been publicly disclosed (Cisco, 2020). Microsoft, IBM and EMC are following the same strategy (we should also note that EMC has recently been acquired by Dell for US$67 billion) (Womack and Bass, 2015). One interesting case is Universal Protection Service. Founded in 1965 as a janitorial company, it offered physical security services but has now started to develop cyber security capabilities through six acquisitions, ranking 13th in the list of top cyber security acquirers. This is an example of the trend toward the unification of physical and cyber security services. Other examples are several Fortune 100 companies, such as Lockheed Martin, GE Digital and Siemens.
Figure 7.2  IT companies ranked by the number of acquisitions of cyber security companies
Table 7.2  Latest five security acquisitions by Cisco for which the price is publicly available

Acquired Security Company    Date of Acquisition    Price of Acquisition
Duo                          October 2018           $2.35 billion
CloudLock                    August 2016            $293 million
Lancope                      January 2016           $452.5 million
OpenDNS                      August 2015            $635 million
SourceFire                   October 2013           $2.7 billion
3.3 National Actors
Many countries claim to be active in developing cyber capabilities, both defensive and offensive. Since the offensive arena is based on rumors and claims with no publicly available data, a more scientific approach is to consider defensive capabilities. One possible yardstick is the number of security start-ups that have been acquired by companies belonging to a given nation. This index gives an idea of the cyber capabilities acquired by companies in each nation, as well as of their commitment to investing, researching and developing expertise in the cyber security field. Figure 7.3 shows the five countries whose companies performed the highest number of cyber security acquisitions. The dominance of the USA is evident, with a number of acquisitions more than 20 times higher than that of Great Britain (second). Israel ranks third with 20 acquisitions. Canada and Japan rank fourth and fifth, with 13 and ten acquisitions, respectively. Note that all these data are subject to continuous change.
Figure 7.3  Top five countries ranked by cyber security acquisitions
Other interesting data can be derived from the Global Cybersecurity Index (ITU, 2018), a report released by the ITU that ranks all the world's countries according to their legal frameworks, technical capabilities, organizational frameworks, ability to build new capacities and ability to cooperate with other countries. The United States, the United Kingdom and France occupy the top positions. All these data confirm that the US is incontestably the most powerful nation in the cyber security economy, although this conclusion may be affected by the lack of public figures on state-related actors in other large countries, such as Russia and China. In recent years, we have observed an increase in cyber preparedness by several Pacific Asian countries, and we can confirm the positive anomaly of Israel, which, relative to its small population, is the most active nation in researching, developing and selling cyber security solutions.
4 FUTURE TRENDS AND CAPABILITIES
Two developments will shape the coming epoch: an emerging trend in cyber security – the integration of big data, artificial intelligence and cyber security – and a new vulnerability dimension – the advent of so-called cyber-physical systems. Each topic is analyzed in the following subsections.

4.1 Big Data, Artificial Intelligence and Cyber Security
In physical security, a crime is immediately noticed. In the cyber world, the detection of attacks is a problem in its own right. If we exclude ransomware and sabotage, most activities carried out by motivated and resourceful cyber attackers, such as intrusion, data exfiltration and malware deployment, are specifically designed to evade traditional defenses based on the identification of known attack patterns and techniques. The availability of huge amounts of data from networks, devices, social sites, the dark web and millions of incidents, together with the possibility of analyzing them through artificial intelligence (AI) tools (e.g., machine learning and deep learning), is creating new opportunities for both defenders and attackers. Cyber attackers continuously tweak their malware code so that detection tools based on malware signatures are no longer able to recognize it as malicious. There are relatively few families of brand-new malware but billions of variants in the wild, which can be created semi-autonomously through some AI system. Detecting every variation of malware that is deliberately disguised is hard, but modern defensive systems can themselves use machine-learning models to amplify the domain of the original signatures and to spot attacks that are close enough to their genetic family of threats. This evolving threat landscape is pushing the cyber security field towards the adoption of two novel approaches, for detection and for prevention, respectively: (1) anomaly detection, which aims to detect previously unknown attacks; and (2) threat intelligence, which gathers information about attackers and attack strategies. The combination of AI and cyber security also points to a futuristic scenario of automatic tools for attacks and defenses, analyzed in a separate section below. The main idea behind anomaly detection is to baseline the normal behavior of a system, so that any deviation caused by attackers' activities can be identified (a toy illustration follows below). This approach relies on the ability to build and maintain a behavioral model of complex information systems, comprising heterogeneous hardware and software components, mobile devices, and humans with erratic and unpredictable behavior. Such a goal requires the adoption of advanced security analytics, in which analysis methods borrowed from several different fields (multivariate statistics, data mining, graph analysis, machine and deep learning) are adapted and applied to the security domain. Besides data analysis, another enabling factor is the ability to manage huge amounts of information. Relational databases and centralized file systems cannot cope with the amount of data produced by a modern information system. Hence, any efficacious security analytics solution must rely on some scalable big data management platform. As a further challenge, data from networks and systems are gathered continuously, so it is necessary to adopt computing platforms that are able to collect, manage and process data streams within given temporal bounds.
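The baselining idea can be conveyed with a toy sketch of our own; real systems model many correlated features with the machine-learning methods listed above rather than a single statistic:

```python
import statistics

# Hypothetical baseline: logins per hour observed during normal operation.
baseline = [42, 39, 45, 41, 44, 38, 40, 43, 41, 39]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Flag observations too many standard deviations away from the baseline."""
    return abs(observed - mean) / stdev > threshold

print(is_anomalous(41))   # False: consistent with normal behavior
print(is_anomalous(400))  # True: e.g., a credential-stuffing burst
```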
Finally, a novel set of attacks targeting AI-based detectors and classifiers is emerging: adversarial examples. Through careful tweaking of minor malware features, attackers are now able to evade detection (Apruzzese et al., 2019). Effective defenses against these kinds of attacks are still an open research challenge (Apruzzese, Colajanni and Marchetti, 2020). Threat intelligence is the most recent approach to preventive defense. It aims at gathering serviceable information that can lead to better protection of organizations and critical infrastructures. Threat intelligence mimics the mindset of adversaries, who prepare their cyber attacks by collecting and analyzing several classes of data. Some sources are listed below:

1. Information related to the protected company, gathered from both open and closed (dark) sources that may be accessible to attackers. Collecting such data as a prevention method allows a defender to understand which kinds of information are available to attackers and to identify (technical and human) weak spots that an attacker may try to exploit.
2. Information related to companies in the same market sector, which can be used to compare the cyber preparedness of the protected company against that of its main competitors. Moreover, knowledge about attacks that have involved similar industries, and that are likely to target the protected company in the near future, is useful for prioritizing cyber security investments.
3. Information relating to new and emerging attacks and tools, such as new vulnerabilities and malware classes (so-called indicators of compromise, or IoCs), but also information about social engineering campaigns. Some of this information is publicly available; some can be bought from specialized companies. Other data require access to underground markets and dark sites that are used by cyber criminals to buy, rent and sell new attack tools.

To be really effective, threat intelligence should be performed continuously, and may require covert operations by top specialists working for the organization.
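As a minimal example of how the IoCs mentioned in item 3 are consumed in practice, the sketch below (our own illustration; the feed content is a placeholder) checks file digests against a list of known-bad hashes. Real feeds contain millions of indicators and are refreshed continuously.

```python
import hashlib

# Placeholder threat-intelligence feed: SHA-256 digests of known malware.
ioc_feed = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def file_sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_ioc(path):
    """True if the file's digest appears in the indicator feed."""
    return file_sha256(path) in ioc_feed
```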
4.2 Automation of Attacks and Defenses
The automation of attacks dates back to the first network scanners and botnets (networks of robot computers that stay silent until their master node awakens them). Bots drive modern cyber attack automation, covering every kind of activity: from automatic malware generation and delivery to spamming, and from denial-of-service to information gathering.
Moreover, botnets are useful for diverting the security team's attention while some subtler attack is being perpetrated. Over 90 percent of the 300 companies surveyed by Radware (2016) had suffered a cyber attack, and half of those attacked experienced short, intensive automated attacks – as against 27 percent in 2014. If attacks are becoming less about people and more about machines, defenses should follow a similar path. The common thread linking automated defense capabilities is the ability to execute them with levels of speed, precision and continuity that cannot be reached by human security experts. Some automated defense capabilities that are starting to appear as features in modern cyber defense software include:

• automatic attack blocking or deflection;
• automatic malware analysis;
• automatic retaliation.

The most common reaction capability is the ability to automatically block an attack as soon as it is detected, thus shifting from intrusion detection systems to intrusion prevention systems. Other, more sophisticated (and less common) automatic reaction strategies involve the deflection of identified attacks to false targets, specifically designed to make attackers believe that their attack succeeded while carefully recording all their activities. Data recorded by these false targets (called honeypots) allow security experts to analyze the latest attack strategies and constitute a precious information source for threat intelligence platforms (a toy illustration follows below).
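The toy honeypot below – our own minimal sketch, not a production design – conveys the principle: listen on a port no legitimate service uses, present a plausible banner, and log whoever knocks. Real honeypots emulate full services and are carefully isolated from the rest of the network.

```python
import datetime
import socket

def run_honeypot(host="0.0.0.0", port=2222):
    """Log every connection attempt to a port with no legitimate users."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                stamp = datetime.datetime.now().isoformat()
                print(f"{stamp} probe from {addr[0]}:{addr[1]}")
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # decoy banner
```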
Another form of reaction is the ability to automatically analyze software that is received by an information system and labeled as malware by antivirus and antimalware solutions. Human analysis of malware samples is a complex task, often requiring weeks of work by experienced analysts – something no defense company can afford any longer in a scenario of hundreds of thousands of new malware samples per day. Modern sandboxing systems instead execute malware samples automatically in a confined and simulated environment, instrumented to record all activities performed by the malware. This information may be used to verify whether the same activities have already been performed on other systems, thus identifying infections that protection software was unable to detect. Two recent episodes evidence the trend towards the automation of cyber activities. The first is Google's proof-of-concept result on autonomous cryptography (Abadi and Andersen, 2016). The Google Brain team, working on deep learning, developed neural networks that teach themselves how to encrypt and decrypt messages. The team created three neural networks: Alice, Bob and Eve. After 15 000 simulations, the two neural networks sharing the cryptographic key (Alice and Bob) were able to communicate securely, while Eve was unable to decrypt the intercepted information. The second is the DARPA Cyber Grand Challenge (DARPA, 2016), organized in summer 2016, which represents a milestone in automated cyber defense and cyber attack. That challenge funded several research efforts to develop autonomous software agents capable of automatically identifying novel vulnerabilities in known software samples, automatically designing and applying software patches to the identified vulnerabilities, and automatically interacting with other, unknown software components to identify possible vulnerabilities and attack them. The most futuristic form of reaction is automatic retaliation, in which reactive countermeasures designed to execute reconnaissance activities on the apparent source of a detected attack are automatically triggered. Current proposals are limited to information gathering from open sources – for example, geolocating the attack source and retrieving information about the network to which the apparent source belongs (a minimal sketch follows at the end of this subsection). More advanced options include the possibility of gathering information on the source by automatically interacting with it. Through this process, it is possible to detect vulnerabilities in the attack source that may be exploited for retaliation. However, autonomous counterattacks are hardly applicable, since the nature of the Internet prevents any 'certifiable' attack attribution. Nevertheless, the initial results obtained by the research groups involved in the DARPA Cyber Grand Challenge demonstrate that it is time to extend to the cyber domain the debates about attribution, autonomous weapons, unsupervised learning and their legal and ethical implications. These issues are beyond the scope of this chapter, but we are aware that networks and information systems are becoming too big and too complex.
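To make the open-source reconnaissance step mentioned above concrete, here is a minimal sketch of our own that gathers passive facts about an apparent attack source; as stressed above, nothing in it certifies attribution, since the true origin may be spoofed or relayed.

```python
import ipaddress
import socket

def profile_source(ip):
    """Collect passive, open-source facts about an apparent attack source."""
    info = {"ip": ip, "is_private": ipaddress.ip_address(ip).is_private}
    try:
        info["reverse_dns"] = socket.gethostbyaddr(ip)[0]  # PTR lookup
    except OSError:
        info["reverse_dns"] = None
    return info

print(profile_source("203.0.113.7"))  # documentation-range address, for illustration
```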
4.3 Cyber-physical Systems
There is an explosion in the number of so-called 'smart devices' connected to the Internet. Each device is both a potential target and a potential cyber weapon. Attackers can use these devices to build botnets – huge attack infrastructures of compromised connected devices. The denial-of-service attack that stopped Internet services in October 2016 was caused by a botnet comprising hundreds of thousands of connected devices (Ackerman, 2016). Today we are creating an entire 'smart' world of cyber-physical systems (CPSs) in industry, medicine, agriculture, cities, roads and logistics. A CPS integrates physical, computational and communication components that interact with human users.
These systems require interdisciplinary approaches and expertise in diverse contexts that evolved independently. At a minimum, a CPS comprises:

• heterogeneous sensors and actuators that measure the characteristics and modify the behavior of physical components (valves, engines, relays, switches, human–machine interfaces…), possibly distributed over geographical scales and interacting with human beings;
• computing devices that collect measurements from sensors, make them available to software components, and translate the output of the software into input for actuators;
• network channels and communication protocols (both wired and wireless) used to connect the elements of the CPS among themselves and to the Internet;
• 'smart' software components that maintain a model of the physical systems connected to the CPS and adapt their states based on the objective of the CPS and on the parameters sensed. The ability of these software elements to adapt quickly to changes and to embed some predictive capabilities distinguishes CPSs from more traditional industrial control systems.

Often the control loop of a CPS includes humans, but the current trend is toward complete automation through the application of advanced analytic approaches, including complex event processing, machine learning, computer vision and semantic algorithms (a toy control loop is sketched below). This is a broad definition that includes the Internet of Things, smart objects and systems, and Industry 4.0 (Lee, Bagheri and Kao, 2015).
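The toy control loop below is our own sketch of the minimal architecture just described: a sensor reading feeds a software model that issues actuator commands. Real CPSs add industrial protocols, strict timing constraints and the predictive capabilities noted above.

```python
import random
import time

SETPOINT = 75.0  # desired temperature of the physical process

def read_sensor():
    # Stand-in for a field sensor reading (valve, relay, engine...).
    return SETPOINT + random.uniform(-8, 8)

def actuate(command):
    # Stand-in for a command sent to a physical actuator.
    print(f"actuator <- {command}")

def control_loop(cycles=5):
    for _ in range(cycles):
        temperature = read_sensor()
        # The 'smart' component: compare the sensed state with the objective.
        if temperature > SETPOINT + 2:
            actuate("open cooling valve")
        elif temperature < SETPOINT - 2:
            actuate("close cooling valve")
        time.sleep(0.1)  # real loops run under strict timing bounds

control_loop()
```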
CPSs represent the visible future, holding as yet unexplored opportunities for benign participants such as consumers and companies, but also a frontier for new potential attacks. The emerging smart world provides adversaries with powerful tools for sabotage involving not only security but also safety. The first physical effects of cyber attacks have already appeared. Damage to industrial platforms has been documented in several cases, such as Stuxnet (Kushner, 2013), BlackEnergy in Ukraine (Lee, Assante and Conway, 2016) and the attack on a German steel mill (Zetter, 2015). Researchers have demonstrated the possible consequences of attacks on connected vehicles (Greenberg, 2015), and the Internet of Things trend opens up new possibilities for attackers interested in spying on users (Gibbs, 2015), as well as for state-sponsored mass surveillance programs (Ackerman and Thielman, 2016). It is certain that future CPSs, more complex and interconnected than ever, will be misused in unpredictable and unprecedented ways. Present security approaches are either inapplicable, not viable, insufficiently scalable, incompatible or simply inadequate for addressing the challenges posed by the complexity of a CPS. To address these challenges, we hope for and expect a paradigm shift in software design and development. The entire software development lifecycle should integrate cyber and system resilience from the first inception of a new system. This integration lies at the heart of any cyber-physical security that aims to protect a whole system using widespread sensing, communication and control to operate safely and reliably. Discussions on how to achieve such goals have already started within academia and industry (Peisert et al., 2014), because this is no longer a market game but a crucial interest of society as a whole. Products and services with a digital reputation for safety and security will remain on the market; others will disappear. It is just a matter of social awareness and time.
5 CONCLUSION

We have analyzed the past, present and future of the cyber security landscape, distinguishing the pre-Web epoch, the period 1995–2019 and the visible future characterizing the 2020s. We have considered the main vulnerabilities affecting modern information systems, the main defense techniques and the most important actors. We can conclude that the main vulnerabilities of computing systems have remained similar since the early 1990s: the large majority of them (95 percent) are due to software errors and human behavior. Modern cyber systems are challenged by the rise of several types of powerful attackers. Cyber criminals, terrorists and state-related actors have the competence and resources to build big attack infrastructures and to conduct targeted, complex and long-lasting attack campaigns leveraging software errors and human behaviors. As it is almost impossible to modify the business model of the software production industry, because of aggressive time-to-market pressures and costs, we should work more on awareness, prevention, detection and reaction capabilities. The most important defensive approaches are well known, because they are established by norms, standards and best practices, but they are not implemented, or are only partially implemented – not to mention the historical gap between compliance with norms and real security. Nonetheless, cyber security is a huge market opportunity that continues to grow. All big IT companies are striving to develop cyber defense capabilities through the acquisition of specialized companies. Their clear strategy is to bring a complete cyber security offering to every customer. As defenders continue to lose their battles and cyber threats involve the whole of society, we argue that it is time for a disruptive change in cyber security approaches. Cyber defense is no longer a topic just for technology specialists but a complex scenario involving technologies, policies, procedures, controls, and entire business processes and goals. The advent of CPSs, through which we are building the so-called 'smart world', represents the new battleground for cyber defenders and attackers. The dangerous combination of security and safety risks requires products and services that are produced with a high level of embedded security. Some industrial and critical sectors are already imposing important changes in this direction, requiring a holistic view of cyber security issues, the application of well-known practices of engineering design and assessment, and a strong commitment at the political and industrial management level, even when remedial actions may be unpopular. An important issue for cyber security is the lack of experts, who are difficult to recruit and retain. In any case, humans alone can neither detect the subtle indicators of a competent stealthy attacker nor face the large-scale automation of cyber attacks carried out by bots. In the future we can expect to see machines fighting machines and semi-automatic defensive systems, while autonomous retaliation responses remain limited by ethical and legal problems, and by the limits of offender attribution schemes. More likely, we will see security teams armed with AI-based cyber tools, which will be crucial.
REFERENCES

Abadi, Martin and David G. Andersen (2016), 'Learning to protect communications with adversarial neural cryptography', accessed 10 May 2020 at https://arxiv.org/pdf/1610.06918v1.pdf.
Ackerman, Spencer (2016), 'Major cyber attack disrupts internet service across Europe and US', The Guardian, 21 October, accessed 10 May 2020 at https://www.theguardian.com/technology/2016/oct/21/ddos-attack-dyn-internet-denial-service.
Ackerman, Spencer and Sam Thielman (2016), 'US intelligence chief: we might use the internet of things to spy on you', The Guardian, accessed 10 May 2020 at https://www.theguardian.com/technology/2016/feb/09/internet-of-things-smart-home-devices-government-surveillance-james-clapper.
Apruzzese, Giovanni, Mauro Andreolini, Michele Colajanni and Mirco Marchetti (2020), 'Hardening random forest cyber detectors against adversarial attacks', IEEE Transactions on Emerging Topics in Computational Intelligence, 4(4), 427–39.
Apruzzese, Giovanni, Michele Colajanni and Mirco Marchetti (2019), 'Evaluating the effectiveness of adversarial attacks against botnet detectors', paper presented at the 18th IEEE International Symposium on Network Computing and Applications (IEEE NCA19), Cambridge, MA, USA, September.
Cisco (2020), 'Acquisitions by year', accessed 10 May 2020 at http://www.cisco.com/c/en/us/about/corporate-strategy-office/acquisitions/acquisitions-list-years.html.
DARPA (2016), 'DARPA Cyber Grand Challenge Competitor Portal', accessed 10 May 2020 at https://archive.darpa.mil/CyberGrandChallenge_CompetitorSite/.
Gibbs, Samuel (2015), 'Hackers can hijack Wi-Fi Hello Barbie to spy on your children', The Guardian, 26 November, accessed 10 May 2020 at https://www.theguardian.com/technology/2015/nov/26/hackers-can-hijack-wi-fi-hello-barbie-to-spy-on-your-children.
Greenberg, Andy (2015), 'Hackers remotely kill a jeep on the highway—with me in it', Wired, 21 July, accessed 10 May 2020 at https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/.
ITU (2018), Global Cybersecurity Index and Cyberwellness Profiles, 2018, accessed 10 May 2020 at http://www.itu.int/.
Jarrett, Marshall H. and Michael W. Bailie (2010), Prosecuting Computer Crimes, accessed 10 May 2020 at http://www.justice.gov/criminal/cybercrime/docs/ccmanual.pdf.
Kushner, David (2013), 'The real story of Stuxnet', IEEE Spectrum, 26 February, accessed 10 May 2020 at http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet.
Lee, Jay, Behrad Bagheri and Hung-An Kao (2015), 'A cyber physical systems architecture for Industry 4.0-based manufacturing systems', Manufacturing Letters, 3, 18–23.
Lee, Robert M., Michael J. Assante and Tim Conway (2016), Analysis of the Cyber Attack on the Ukrainian Power Grid, Electricity Information Sharing and Analysis Center, accessed 10 May 2020 at https://ics.sans.org/media/E-ISAC_SANS_Ukraine_DUC_5.pdf.
McMillan, Robert (2012), 'The world's first computer password? It was useless too', Wired, 27 January, accessed 10 May 2020 at https://www.wired.com/2012/01/computer-password/.
Morris, Robert and Ken Thompson (1979), 'Password security: a case history', Communications of the ACM, 22(11), https://doi.org/10.1145/359168.359172.
National Institute of Standards and Technology (NIST) (2020), 'National Checklist Program Repository', accessed 10 May 2020 at https://web.nvd.nist.gov/view/ncp/repository.
Peisert, Sean, Jonathan Margulies, David M. Nicol et al. (2014), 'Designed-in security for cyber physical systems', IEEE Security & Privacy, 12(5), 9–12.
Radware (2016), Global Application & Network Security Report 2015–2016, accessed 10 May 2020 at https://www.radware.com/WorkArea/DownloadAsset.aspx?ID=6442457234.
Suiche, Matt (2016), 'InfoSec — top acquirers & start-ups (Update)', Medium.com, 30 July, accessed 10 May 2020 at https://medium.com/@msuiche/infosec-top-acquirers-start-ups-update-8300bac5d763.
Temperton, James (2014), 'Half of UK businesses will consider hiring hackers', Wired.co.uk, 17 November, accessed 10 May 2020 at http://www.wired.co.uk/article/kpmg-big-companies-should-hire-hackers.
U.S. Code, Title 18, Part I, Chapter 47, §1030 (1986), Fraud and Related Activity in Connection with Computers, accessed 10 May 2020 at https://www.law.cornell.edu/uscode/text/18/1030.
Womack, Brian and Dina Bass (2015), 'Dell to buy EMC in deal worth about $67 billion', Bloomberg, 12 October, accessed 10 May 2020 at http://www.bloomberg.com/news/articles/2015-10-12/dell-to-acquire-emc-for-67-billion-to-add-data-storage-devices.
Zetter, Kim (2015), 'A cyberattack has caused confirmed physical damage for the second time ever', Wired, 13 January, accessed 10 May 2020 at https://www.wired.com/2015/01/german-steel-mill-hack-destruction/.
8. Critical infrastructure protection
Andrea Locatelli

1 INTRODUCTION

Since the end of the Cold War, both international relations and strategic studies have experienced a thorough revision of previously dominant paradigms. Theoretical approaches have broadened, as have the empirical referents included in the respective research agendas. In particular, as one author put it (Walt, 1991), the 1990s witnessed a period of renaissance for strategic studies. The almost exclusive focus of the bipolar period on military issues rapidly proved inadequate for understanding the complex security scenario emerging from the ashes of the delicate balance of terror: the rise of intra-state wars, pandemics and transnational terrorism – to name just a few – called for a new research agenda. Ironically, in the face of early expectations of a pacified new world order, traditional forms of war became less frequent, but new threats became much more common. For this reason, the very definition of security came back to the fore as a key issue. A variety of contributions have been developed over the years with a view to addressing four basic questions (Hyde-Price, 2001, pp. 32–4): (1) Who is to be secured? (2) Who is the securitizing agent? (3) What kind of threats should be tackled? (4) What means should be used? To answer these questions, many authors argued that the traditional state-centric view of security – that is, the state as a securitizing agent using military means to defend itself from the military threat posed by other states – was obsolete. In turn, many suggested a number of alternatives, focusing as a main security referent on either societies (societal security) or individuals (human security and emancipation). Not surprisingly, all these approaches have been deeply criticized. Yet appraising their virtues and limits is beyond the ambitions of this chapter. What matters for our purposes is that the discipline has broadly accepted the idea that security nowadays transcends the territorial integrity of a state. Accordingly, the starting point of our argument is that functional security is equally, if not more, important than territorial security. Functional security means 'the ability of the government and civil society to function, the necessity to maintain critical infrastructures, for democratic governance to manifest certain basic values' (Sundelius, 2005, p. 70). In other words, in both academia and policy-making circles, it has become widely accepted that over and above a state's physical integrity, national security should include society's ability to deliver a number of goods and services. These goods and services depend in turn on critical infrastructures (CIs), so their protection has become a priority on any state's security agenda. Needless to say, this view was confirmed most evidently – and glaringly – when the COVID-19 pandemic struck virtually every corner of the world. In fact, at the time of writing, just a handful of countries have proved capable of managing the spread and impact of the pandemic: by the end of June 2020, almost 9.5 million people had become infected and 482 000 had died.1 What is relevant for our purposes is the huge variation of the fatality rate (i.e., the number of deaths over the number of infected cases) across, and sometimes within, countries. Admittedly, the data are hardly reliable due to the risk of significant underreporting, and more research needs to be done on possible mutations of the virus itself. However, one may easily find evidence of such variation: in the case of Italy, for instance, on 21 June, Lombardy displayed a record fatality rate of 20.94 percent, while Veneto, a neighbouring region with early cases of infection, featured a value of 10.3 percent.2 The experience of these Italian regions suggests that a big role in containing the effects of COVID-19 was played by the different responses implemented by the regions but, most importantly, by the resilience of Veneto's healthcare system. In the following pages we provide the reader with a concise overview of the theoretical literature on CIs, as well as a short description of the main policies implemented for critical infrastructure protection (CIP) in the US and Europe. The next section aims to show the complexities of CIP and is divided into three subsections. The first lays out the key terms and definitions of CIs and the features that make them problematic. The second subsection shifts attention from CIs to the concept of protection: in particular, it discusses the challenges and risks that endanger CIs and the promise of a resilience-inspired approach. Finally, since most CIs are owned or operated by private companies, the third subsection discusses at greater length the complexities of the public–private partnership (PPP). Section 3 deals with the policies implemented by the US and the European Union (EU). Admittedly, two case studies cannot fully cover the variety of policies and approaches currently being developed by most states to protect their CIs. However, the American and European experiences are worth investigating, as they are illustrative of exceptional cases: the US first committed to CIP in the late 1990s and has deeply expanded this policy thereafter; the EU represents a sui generis case because of its supranational nature, which implies the demanding task of harmonizing the policies of as many as 27 states.
Finally, in the concluding section, we will draw tentative conclusions and raise further questions that should orient future research on CIP.
2 THE COMPLEXITIES OF CRITICAL INFRASTRUCTURE PROTECTION
Defining CIs may look uncontroversial at first glance. As the term infrastructure implies, it refers to 'facilities, services, and installations needed for the functioning of a community or society' (Bennett, 2018, p. 37). Adding the qualification critical may sound redundant, since all infrastructures serve a purpose and their eventual failure impairs that function. So, when we refer to CIs, we hold that some of them are more necessary than others. In Pesch-Cronin and Marion's (2017, p. 4) words, we assume that some systems are indispensable for 'the smooth functioning of government at all levels'. While these definitions may sound sufficient to cover all the empirical referents underlying the term, operationalizing the concept is more complicated. In fact, different countries may come up with different definitions of what they hold to be critical; or the list of CIs may evolve over time, as has happened in the US and Europe.

2.1 Defining Critical Infrastructures
Following the policy documents developed in the US and Europe (Department of Homeland Security, 2013; EU Commission, 2005, p. 24), 16 sectors can broadly be recognized as CIs.3 They are: • Banking and finance. Considering the huge amounts of money that are traded on a daily basis all over the world, it should not come as a surprise that this sector is the core foundation of any thriving economy. For this reason, all the infrastructures that support financial activities – providing credit, investing funds, making payments and so on – need protection from disruption. Since most of these activities are based on information technologies, the main risks are related to cyber attacks. • Chemical sector. The chemical industry is a perfect example of integrated infrastructure, it being based on the production of goods that are needed in a variety of sectors, from agriculture to pharmaceuticals. Unlike the previous sector, the dangerous nature of its components makes chemical facilities prone not only to cyber attack, but also to accidents and physical attack. • Commercial facilities. This sector includes all the business sites where people tend to gather en masse. Examples of commercial facilities include malls, hotels, stadiums, amusement parks, museums and parades. All these
Critical infrastructure protection
•
•
•
•
•
•
155
facilities share an intrinsic vulnerability – that is, they need to operate on the basis of an open access principle, which makes protection quite difficult. Moreover, most of them are small private companies, with limited resources available to enhance their security. Communications. Since the spread of the Transmission Control Protocol/ Internet Protocol (TCP/IP) in the early 1990s, the communications sector has become a vital enabler of several other critical infrastructures (Giacomello, 2014, pp. 241–2) like the above-mentioned financial sector, but also transportation systems, government facilities and healthcare. The intrinsic vulnerability of the Internet makes the communications sector susceptible to cyber attacks. Critical manufacturing. Although not all manufacturing industries can be considered as critical, some subsectors may well qualify as such. This is particularly the case with raw material processing (primarily iron, steel and aluminium), machinery and transportation equipment (construction machines, cars, ships), high-tech appliances (electrical components and so forth). Here too, the main actors operating in this sector are private firms, with most of the market dominated by big companies. Dams. Dams are recognized as critical infrastructures for very good reasons: they are a source of electricity, irrigation, as well as drinking and industrial water supply. Their integrity is essential not just for these sectors, but also for the very safety of those people whose life would be at risk in case of flooding. Unlike other sectors, CIP in this area relates to both human-made and natural threats. Defence industrial base. The military industrial complex includes both domestic and foreign weapons producers, as well as military facilities. The role of the state here is more evident than in other sectors, since military facilities are directly owned by the Armed Forces (although they might sometimes be operated by private military companies) (Abrahamsen and Williams, 2010); moreover, in some countries even defence companies remain under some kind of government control (Caruso and Locatelli, 2013). Emergency services. This sector consists of public functions aimed at guaranteeing personal safety, like the police, fire brigade and rescue operations. While most of the actors involved are governmental agencies, private companies may also be included, such as private security and medical services companies. This sector covers both day-by-day activities and emergency management operations. Energy. The energy sector serves as the backbone of all other infrastructures, since all business, services and government agencies need energy for their operations. Energy production and distribution encompasses three interconnected sources: electricity, natural gas and oil (nuclear plants are
156
•
•
•
•
•
•
Technology and international relations
also an obvious energy supplier, but their delicate nature calls for a sector of their own). While harm to power grids and facilities is likely to have a disruptive effect on ample swaths of society, gas and oil complexes are also highly dangerous in the case of accident. Food and agriculture. Food production, processing and distribution is obviously the basic concern of any community. In modern societies, big cities are particularly dependent on goods coming from distant farms and imported from abroad. As a result, CIP here calls for the active involvement of private actors (that own this sector almost in its entirety) with assets that are likely to be located abroad. The main threats relate to food contamination, pandemics and natural disasters. Government facilities. Government facilities include both physical structures (like embassies, education facilities, military installations, national laboratories) and virtual components (cyber assets that are now integral to the political process, like communication technology, databases and so on). In the US, in the wake of 9/11, a subsector has been added to include national monuments and icons. Healthcare. Like the dam and emergency sectors, healthcare is geared not only to performing functions, but also to guaranteeing its own survival. CIP in this field consequently relates to emergency responses to acts of terrorism, pandemic outbreaks and natural disasters. As we have seen in the opening pages of this chapter, the COVID-19 pandemic has brought this issue under the spotlight due to the stress test it has caused to intensive care units. Here too, private actors play a considerable role, since many facilities, services and operators are private businesses. Information technology (IT). Like the energy sector, IT is a vital enabler of other sectors. In the past 50 years it has become a pervasive and indispensable infrastructure for any major societal function. The points of vulnerability are multiple and concern virtual, real and human domains (that is, software, hardware and humanware). It hence follows that the main risks relate to human-made attacks and incidents. Nuclear material. Nuclear infrastructures primarily, but not exclusively, concern nuclear reactors. As shown most clearly by the Fukushima accident in March 2011, these facilities are subject to multiple risks, from human-made incidents to natural disasters, with potentially deadly consequences. Or, as in the case of the Stuxnet malware, cyber attacks might aim to disrupt plant functions, with the most likely effect of causing wide-scale power cuts. Transportation. The transportation sector includes a variety of subsectors – most notably aviation, roadways, maritime transport, railways and, in the US, mail and shipping services. These infrastructures are eminently physical, but they also require highly developed IT systems. They are
obviously vital to the economy, but also to everyday life in any fair-sized urban setting. For this reason, these infrastructures need to be secured against both natural and human-made risks.
• Water supply systems. This infrastructure manages both drinking water and wastewater. It is therefore closely interconnected with the health and food sectors. The main damage related to water supply concerns the threat of contamination by toxic agents, which can occur as a result of accidents, terrorist attacks, cyber attacks or natural disasters.

This short list provides a first-glance overview of the sectors that qualify as critical infrastructure. Three points of complexity emerge. The first concerns the geographical scope of critical infrastructures, which can be local or national (or federal, as in the USA, or even supranational, as in the European context). Local critical infrastructures are defined as such by local jurisdictions: we thus refer to systems that are deemed indispensable for a given community, no matter whether it is a scantily populated rural area or a multimillion-inhabitant city. National critical infrastructures, on the other hand, are defined by a state's central authority; they are therefore important for the political, economic and social system of the state as a whole. The main difference between local and national CIs is the authority that claims responsibility for protecting them – ergo, under whose jurisdiction they eventually fall. What makes this distinction problematic is that local authorities may have fewer resources for their own CIs, while national agencies may have little interest in investing in local CIs. The main problems arising from this distinction therefore relate to possible turf wars between state agencies and/or the underfunding of CIP.

The second point relates to how indispensable a given asset is. In the US, for instance, critical infrastructures are divided into Tier 1 and Tier 2 facilities, where the former are those whose harm or disruption would significantly impact society at both regional and national levels. The latter are defined in turn as 'less critical, but still needed for a strong community' (Pesch-Cronin and Marion, 2017, p. 4): the impact of their disruption would disturb the normal functioning of society, but with limited effects at the national level. The point is that, since all CIs have points of vulnerability, it would hardly be possible to protect them all. The Tier 1 and Tier 2 distinction allows one to prioritize those CIs that are deemed vital and allocate more resources in their favour; the flip side of the coin, however, is that interconnectedness among structures and systems may cause cascade effects, thus making the disruption of Tier 2 facilities as critical as if they were Tier 1.

Finally, the third point of complexity depends on the participation of private actors in the ownership and/or management of CIs.4 For this reason, it is essential for government agencies to make sure that these private actors have all the
means and ability to set up adequate defences. The relationship between public and private actors implies high stakes, but – as we will see in Section 2.3 – it also entails a degree of friction that can make CIP problematic. Before we turn to this point, we first need to understand what we mean by protection – or, in other words, what different strategies can be used to achieve this goal.
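Before turning to strategies, the cascade effects mentioned above can be made concrete with a toy model. The sketch below is purely illustrative – the sectors and the dependency links are invented, not drawn from any of the policy documents discussed in this chapter – and simply propagates failures through a hypothetical dependency graph, assuming a sector fails as soon as any sector it depends on fails:

```python
from collections import deque

# Hypothetical dependency map: each sector lists the sectors it depends on.
# Both the sectors and the links are invented for illustration only.
DEPENDS_ON = {
    "energy":         [],
    "communications": ["energy"],
    "IT":             ["energy", "communications"],
    "finance":        ["IT", "communications"],
    "water":          ["energy"],
    "healthcare":     ["energy", "water", "IT"],
    "transportation": ["energy", "IT"],
}

def cascade(initial_failure):
    """Return every sector that goes down once `initial_failure` fails,
    assuming a sector fails as soon as any of its dependencies fails."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        down = queue.popleft()
        for sector, deps in DEPENDS_ON.items():
            if sector not in failed and down in deps:
                failed.add(sector)
                queue.append(sector)
    return failed

for start in ("energy", "communications", "water"):
    print(f"{start} fails -> {sorted(cascade(start))}")
```

Under these invented dependencies, a failure of the energy node brings down every other sector, while a water failure cascades only to healthcare – a crude but telling illustration of why the boundary between Tier 1 and Tier 2 can blur in practice.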
2.2 How to Protect Critical Infrastructures?
After reviewing what features qualify an infrastructure as critical, we are now left with the task of developing strategies to protect CIs. As will be clear to the reader from the previous discussion, differences among sectors do not allow a single catch-all plan to be devised. On the contrary, sound strategies call for tailored measures to meet the risks associated with each infrastructure. For this reason, as a first step one needs to investigate which causes of harm are most likely to affect CIs. Three main sources of risk – or hazards – can be outlined here: natural hazards, technological hazards and human-caused hazards (Pesch-Cronin and Marion, 2017, p. 16).

Natural hazards usually take the form of natural disasters, like earthquakes, hurricanes, tsunamis and so on. Examples of this kind are well known: Hurricane Katrina, which flooded New Orleans and surrounding areas in August 2005; the tsunami that hit most countries facing the Indian Ocean in December 2004; or, finally, the earthquake (and subsequent tsunami) that led to the Fukushima nuclear accident in March 2011. As this last example shows most clearly, natural hazards may severely impair the physical assets underlying CIs, although early-detection and reinforcing measures can be implemented. Other forms of natural hazard are pandemics and other health-related threats. Recent examples of this kind include the Ebola virus epidemic in West Africa (circa 2013–16), SARS in 2002–03 and avian flu in the mid-2000s. While epidemics are potentially less lethal than natural disasters, their economic impact is not confined to healthcare but can spill over to other CI sectors, like water supply systems and food and agriculture (for instance, the spread of avian flu made poultry sales plummet in several countries, with significant economic and social repercussions).

The second type of hazard involves accidents taking place as a result of human error or technological failure. Interestingly, such a hazard may be a single cause of harm, or it may have a catalytic effect on other hazards. The above-mentioned cases of Katrina and Fukushima are examples of the latter: one of the reasons why New Orleans suffered such extensive flooding was a massive failure in surge protection levees after the hurricane hit the area (Government Accountability Office [GAO], 2006); likewise, the Fukushima accident might well have been contained had the plant operator devised better emergency procedures (Eisler, 2013). However, technological hazards
can also be responsible for severe disruptions even in the absence of other factors. The ExxonMobil refinery explosion in Torrance (CA) on 18 February 2015 is a case in point: according to the US Chemical Safety Board (CSB) investigation report, the explosion – which fortunately caused only debris dispersion and minor injuries to two workers – occurred 'due to weaknesses in the ExxonMobil Torrance refinery's process safety management system' (CSB, 2017, p. 6). Other examples of technological hazard include transportation accidents, hazardous material releases and utility disruptions.

Unlike the first type of hazard, human-caused hazards are due to the actions of a single individual or a group of people. This kind of hazard also differs from the second type in that it is intentional. Examples of this kind are terrorist attacks (in the form of armed assaults, bomb attacks or mass killings), cyber attacks (against data or hardware) and physical attacks perpetrated by other states with conventional or non-conventional weapons. It follows that a variety of actors can be identified as sources of risk: from enemy countries to terrorist organizations, as well as lone wolves and hackers. Recent cases of human-caused attacks on CIs are the Stuxnet malware, discovered in 2010, against the Iranian nuclear facilities in Natanz; the 2017 WannaCry ransomware, which infected computers in some 150 countries; and ISIS-related attacks in France, Germany and beyond.

While probably not exhaustive, this categorization allows us to unpack the multiple sources of risk that CIP should take into account. To provide a concise overview of the complexities pertaining to CIP, Table 8.1 matches the main features of the 16 sectors discussed in terms of actors involved and hazards.

So, how to protect these sectors from natural disasters, accidents and attacks? Obviously, the ideal answer would be an 'all-hazards' approach – that is, framing a strategy that might tackle all these threats at once. The appeal of such an approach is that it would handle any kind of threat to the normal functioning of CIs. However, the complexity and interconnectedness of today's infrastructures, as well as the unpredictability of natural and human-made accidents, make it hardly feasible to devise defences that are suitable for all circumstances. As Bologna, Fasani and Martellini (2013, pp. 54–5) maintain with reference to critical information infrastructures, 'to classify all possible cyber threats and physical threats is an endless job that historically has shown all its limits: it works only for a short time'.

So, the idea that has gained momentum over the past decade is that, when it comes to protection, the most suitable approach is one based on the concept of resilience. The term enjoys widespread popularity: initially developed in the 1960s in the field of ecology (Chandler and Coaffee, 2017), it is now quite common in the social sciences and in political documents. Unfortunately, the rise in popularity has come at the expense of scientific rigor, with the concept now being stretched to encompass different meanings.
Table 8.1  Summary of the main features of CIP

Critical Infrastructures | Public/Private Actors                  | Sources of Risk
Banking and finance      | Mostly private, big companies          | Cyber attacks
Chemical sector          | Mostly private, big companies          | Cyber and physical attacks, accidents, natural disasters
Commercial facilities    | Mostly private, small companies        | Cyber and physical attacks, accidents, natural disasters
Communications           | Mostly private, small & big companies  | Cyber attacks
Critical manufacturing   | Mostly private, small & big companies  | Cyber and physical attacks, accidents, natural disasters
Dams                     | Mostly private                         | Cyber and physical attacks, accidents, natural disasters
Defence industrial base  | Mostly public                          | Cyber attacks
Emergency services       | Mostly public                          | Cyber and physical attacks, accidents, natural disasters
Energy                   | Public and private                     | Cyber and physical attacks, accidents, natural disasters
Food and agriculture     | Mostly private, small & big companies  | Physical attacks, accidents, natural disasters
Government facilities    | Mostly public                          | Cyber and physical attacks, natural disasters
Healthcare               | Public and private                     | Cyber and physical attacks, accidents, natural disasters
Information technology   | Mostly private, small & big companies  | Cyber attacks
Nuclear material         | Public and private                     | Cyber and physical attacks, accidents, natural disasters
Transportation           | Mostly private                         | Cyber and physical attacks, accidents, natural disasters
Water supply systems     | Public and private                     | Cyber and physical attacks, accidents, natural disasters
In brief, Elsner, Huck and Marathe (2018, p. 31) boil these meanings down to a simple dichotomy: one version of the term describes the features that allow a system to 'bounce back' to its original state after a shock (thus conflating it with the related concept of recovery); the second implies the capacity of a system to 'bounce forward' – that is, to adapt to the new circumstances. Over time, the latter notion has gained more prominence. For instance, in the US, resilience is defined in policy documents as 'the ability to resist, absorb, recover from, or successfully adapt to adversity or a change in conditions' (quoted in Bennett, 2018, p. 286) and 'the capacity of an organization to recognize threats and hazards and make adjustments that will improve
future protection efforts and risk reduction measures' (quoted in Pesch-Cronin and Marion, 2017, pp. 14–15).

Resilience has gradually replaced protection as a goal in most strategic documents. While, at least initially, it was regarded as a second-best option to structural defences under the principle of acceptable loss, it is now widely accepted that resilience complements protection, since it adds further provisions beyond defences (Bennett, 2018, p. 286). This is borne out by the criteria used to measure the resilience of a CI. Robustness – namely, a system's capacity to absorb shocks and keep on delivering its functions – is usually considered the first requisite. The second criterion is resourcefulness – that is, a system's capacity to marshal the necessary resources during an emergency. Third comes redundancy – namely, equipping a given CI with multiple systems capable of delivering the same service, so that should the main system be impaired, others can replace it. Fourth is rapid recovery, the ability of a CI to restore its services and functions as soon as possible (Tierney and Bruneau, 2007, p. 15; for a more detailed list of resilience principles, see also Ganguly, Bhatia and Flynn, 2018, p. 101). The fifth and final requisite is adaptability, viz. learning the right lessons from past experiences and changing systems and/or procedures accordingly.

To conclude, in this section we have seen the variety of risks facing CIs. This calls for an all-hazards approach – that is, the necessity to develop strategies that might handle all the sources of risk. Since a 'fortress' approach would be unrealistic, resilience – that is, the capability of a system to provide services, recover and improve after an accident – has shaped the doctrinal debate on the issue. In the next section we will consider a complicating factor in CIP.
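Before doing so, it is worth noting that several of the criteria above lend themselves to a rough quantitative reading through the 'resilience triangle' associated with Tierney and Bruneau's (2007) framework: the functionality lost at the moment of a shock, integrated over the time it takes to recover. The sketch below is a minimal illustration with invented numbers – the linear recovery path and the functionality values are assumptions, not measurements from any real system:

```python
# Resilience-triangle sketch: functionality drops by `q_drop` percentage
# points at the shock and is assumed to recover linearly over
# `recovery_days`. The area of the resulting triangle is a rough measure
# of resilience *loss* (smaller is better). All figures are invented.

def resilience_loss(q_drop: float, recovery_days: float) -> float:
    """Area of the resilience triangle under linear recovery:
    0.5 * functionality lost * time to full recovery."""
    return 0.5 * q_drop * recovery_days

# System A is robust (small drop) but slow to recover;
# System B is fragile (large drop) but recovers rapidly.
loss_a = resilience_loss(q_drop=20.0, recovery_days=30.0)  # 300 %-days
loss_b = resilience_loss(q_drop=60.0, recovery_days=8.0)   # 240 %-days

print(f"robust but slow:  {loss_a:.0f} percentage-point-days lost")
print(f"fragile but fast: {loss_b:.0f} percentage-point-days lost")
```

In this toy comparison the fragile-but-fast system ends up losing less functionality overall, which is why rapid recovery sits alongside robustness and redundancy among the criteria listed above.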
2.3 Problems with the Public–Private Partnership (PPP)
The above discussion has focused almost exclusively on the role that government agencies can and should play in protecting their CIs. Consistent with the state's self-attributed role as the ultimate security provider for its citizens, we have assumed a classical perspective: the state performs this function by protecting critical infrastructures and making them resilient to exogenous threats. While this may well be the case for a few sectors – for example, government facilities, national monuments and icons – many systems and structures, as we have seen, are beyond the state's reach. It is in this perspective that PPPs play a vital role (for a broader discussion of the origins of PPPs, see Chapter 9 by Giacomello in this book). It is therefore no wonder that most policy documents call for smooth cooperation between state and business actors. And, one might add, cooperation should be a natural outcome, infrastructure security being a shared interest of both actors. In other words, PPPs qualify as a win–win game, making cooperation the preferred
option for both players. However, contrary to such an optimistic view, public and private agencies need to overcome a number of barriers to make this partnership work, since they have different priorities, procedures and incentives (Carr, 2016). As we will see in this section, the task of protecting CIs implies a degree of conflict between states and corporations. The end result is that the former need to interfere with the business of the latter (Fasani and Locatelli, 2017, pp. 38–40). In a nutshell, the main question that needs to be answered is not whether it is legitimate for a state to disturb the market (a point that even the laissez-faire doctrine has accepted under certain conditions). The point is rather to define the limits of state intervention: up to what point can governmental agencies regulate the activities of such private actors? What costs can they impose on them in the search for higher security standards? What exceptions to the free market can be accepted?

Examples of these controversies abound. To recall a recent case, in the wake of the San Bernardino mass shooting in December 2015, a federal court ordered Apple to design software that would allow the FBI to access the work iPhone of one of the perpetrators. The company refused, objecting that by doing so it would potentially violate its customers' privacy, since the FBI would have gained access to any device. As this case shows, PPP is more easily said than done. The problem stems from three main types of market failure (Assaf, 2007).

The first type of market failure is the public good.5 CIP represents a public good by definition, as its main output is national security: once an infrastructure is secured, the benefits of security spill over to the whole community, no one excluded. As a result, in the absence of private incentives, free riding is the dominant strategy for private actors (to put it bluntly: 'Why should I incur the cost of producing security when I can enjoy it for free?'). This does not mean that CIP is systematically discarded by private actors, but it suggests that the level of security that governmental agencies find appropriate is higher than private companies would concede.

This argument is reinforced by the second type of market failure: negative externalities – that is, when the activity of one actor imposes on others costs whose negative value exceeds the gains accruing to the original actor. A typical example is a polluting firm, which makes a profit from its activity but generates a negative outcome for the whole community due to the costs of pollution. In terms of CIP, the problem stems from the high degree of convergence among CIs: the costs of a vulnerable CI are not limited to what the company that owns or runs it might incur, but would probably spill over to other systems and infrastructures, thus multiplying the societal toll. For this reason, the security investment that the company is willing to make is likely to be suboptimal in the absence of state intervention (Stiglitz and Wallsten, 1999, p. 53).
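The wedge between private and social incentives can be illustrated with a stylized numerical model in the spirit of this argument. All parameters below are invented for illustration: security spending x is assumed to cut the probability of a disruption exponentially, the owner bears a loss L_OWN if disruption occurs, and a further spillover loss L_EXT falls on interconnected systems.

```python
import math

# Stylized under-investment model (all parameter values are invented):
# disruption probability p(x) = P0 * exp(-A * x) for security spending x;
# a decision-maker who bears loss L minimizes  x + L * p(x).
P0 = 0.5       # baseline disruption probability with zero spending
A = 0.05       # effectiveness of each unit of security spending
L_OWN = 100.0  # loss borne by the infrastructure's owner
L_EXT = 400.0  # spillover loss imposed on interconnected systems

def optimal_spending(loss: float) -> float:
    """Spending that minimizes x + loss * P0 * exp(-A * x).
    First-order condition: A * loss * P0 * exp(-A * x) = 1."""
    return max(0.0, math.log(A * loss * P0) / A)

x_private = optimal_spending(L_OWN)          # owner ignores spillovers
x_social = optimal_spending(L_OWN + L_EXT)   # regulator counts all losses

print(f"privately optimal spending: {x_private:.1f}")  # ~18.3
print(f"socially optimal spending:  {x_social:.1f}")   # ~50.5
```

Because the owner internalizes only L_OWN, it stops spending far short of the level a regulator counting the spillovers would choose – the gap that the policy instruments discussed below (standards, tax breaks, low-interest loans) are meant to close.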
Finally, the third type of market failure concerns information asymmetry – that is, when some of the actors involved in the market lack the information required to choose a proper course of action. Companies in charge of protecting CIs have a strong incentive to hide relevant information that might be useful to companies committed to defending other infrastructures: their known vulnerabilities, the kind of defences in place and data on the kind and effect of attacks and intrusions. Sharing information of this sort might prove vital to securing other systems and containing the effects of an attack (this is particularly true in relation to zero-day vulnerabilities). As mentioned, however, the private costs of sharing such information might be intolerable: being transparent about the defences in place might give potential adversaries a strategic advantage; exposing vulnerabilities might damage the reputation of the company; and, not least, sharing data on intrusions might expose the company to legal action (for example, in the case of data breaches). In a nutshell, without some kind of government intervention, private actors risk making the social cost of an eventual disruption higher than it would be under public supervision.

To remedy these market failures, governments can implement a number of solutions. The imposition of standards and regulations aimed at ensuring a minimum level of investment in CIP is a first option; obviously, the policy-making process leading to these regulations should include private actors, to ensure that applying the standards will not make CI management unprofitable (Grant, 2018, p. 47). Alternatively, solutions might include positive incentives like tax breaks and low-interest loans for companies that invest in security. Other rules that states can enforce on their private counterparts (but also on intelligence agencies) concern the sharing of information and best practices (Pesch-Cronin and Marion, 2017, pp. 7–8). In conclusion, as Vaillancourt Rosenau (1999) argued more than 20 years ago, an effective PPP policy cannot escape some sort of regulation: policy makers must decide on a case-by-case basis how to allocate risks and responsibilities with their private counterparts. It is a delicate balance between competing incentives, but a bargaining solution is hardly ever impossible.
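The information-sharing impasse described above has the structure of a familiar cooperation dilemma, which a short sketch can make explicit. The payoffs are invented: each of two operators gains from the other's disclosures but pays a private cost (reputational and legal exposure) for its own.

```python
from itertools import product

# Invented payoffs: disclosing vulnerability data costs the discloser 3
# (reputation, legal exposure) but grants the *other* operator a benefit
# of 5 (better defences). Payoff = benefit received - cost paid.
COST_OF_SHARING = 3
BENEFIT_FROM_OTHERS_DATA = 5

def payoff(i_share: bool, other_shares: bool) -> int:
    benefit = BENEFIT_FROM_OTHERS_DATA if other_shares else 0
    cost = COST_OF_SHARING if i_share else 0
    return benefit - cost

for a, b in product([True, False], repeat=2):
    print(f"A shares={a!s:5}  B shares={b!s:5}  ->  "
          f"A gets {payoff(a, b):>2}, B gets {payoff(b, a):>2}")
```

Whatever the other operator does, withholding pays 3 more than disclosing, so the equilibrium is mutual silence, even though mutual disclosure (payoff 2 each) leaves both better off than mutual silence (payoff 0 each) – which is exactly why mandatory information-sharing rules keep resurfacing in CIP policy.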
3 CASE STUDIES
So far we have outlined the main features of CIP: threats, strategies and complexities. In the following pages we describe the main policies developed in the US and Europe to address this issue. Admittedly, a thorough overview of the initiatives developed on the opposite shores of the Atlantic is beyond the scope and purpose of this chapter: both actors have built institutional and technological apparatuses embracing a variety of agencies and sectors (approximately those outlined in Section 2.1). However, the US and EU have followed
different trajectories, mostly due to their different institutional settings. For this reason, a cursory look at their main efforts, achievements and missteps will help to shed further light on the issues at stake.
3.1 The United States
The US was quick to develop CIP programmes, as early as the Cold War years. At that time, however, the definition of CI was much narrower than it is now, and plans were based mostly on military defence (Cordesman and Cordesman, 2002, pp. 54–5). The first serious effort to develop a comprehensive strategy for CIP dates back to the late 1990s: following the report released in 1997 by the President's Commission on Critical Infrastructure Protection (PCCIP), in May 1998 then President Bill Clinton issued Presidential Decision Directive 63 (PDD-63) (The White House, 1998). PDD-63 laid out five domains and a number of sectors that required protection. What is most relevant for our purposes are the institutional innovations that the directive brought about: a lead agency (that is, a federal executive department, like defence, commerce, energy and so on) was assigned to each sector. Mindful of the role played by private stakeholders, PDD-63 required each agency to appoint a sector liaison official in charge of managing communications with private actors. These, in turn, were asked to designate a sector coordinator. Together, the sector liaison official and sector coordinator (plus all other relevant parties) were tasked with framing security plans for the protection of their sector, to be integrated into a national infrastructure assurance plan. The national plan was to be developed by a coordination staff hosted within the broader Critical Infrastructure Assurance Office (CIAO), which also supported the single sectors in developing their plans. Finally, PDD-63 created a number of high-ranking positions and bodies within the executive branch, like the National Coordinator for Security, Infrastructure Protection, and Counterterrorism and the National Infrastructure Protection Center (NIPC). In conclusion, PDD-63 represents a landmark document, as it first laid out an institutional setting and a procedural blueprint for CIP (Moteff, 2015, p. 4).

The next big step forward dates to the aftermath of 9/11. The terrorist attacks in Washington and New York left a mark on the Bush administration's approach to CIP, as they made it clear that the policy of the day had failed to ensure the physical protection of such vital infrastructures as the Twin Towers and the Pentagon. Overall, two main lessons were learnt. The first concerned the sources of risk: contrary to previous expectations, the soft underbelly of US CIs was not the cyber element, but the physical one. The second related to flaws in the federal security agencies' bureaucratic processes (9/11 Commission Report,
2004, pp. 407–18). Driven by these considerations, the Bush administration undertook four main initiatives.6

The first two initiatives were Executive Orders 13228 and 13231, signed by the president on 8 and 16 October 2001. The former established the brand-new Office of Homeland Security (OHS) – which was de facto replaced one year later by the Department of Homeland Security (DHS) – while the latter set up the President's Critical Infrastructure Protection Board (PCIPB) and the National Infrastructure Advisory Council (NIAC). The statutory mission of the OHS explicitly referred to the terrorist threat, so its operational tasks went beyond CIP, but its primary goal 'was to coordinate efforts to protect the US and its critical infrastructure from another attack, and maintain efforts at recovery' (Pesch-Cronin and Marion, 2017, p. 30). Led by the Assistant to the President for Homeland Security, the OHS then became the main federal agency in charge of developing a comprehensive CIP policy. The bodies established by Executive Order 13231 were not top-ranking, but still made for an appreciable amendment of the previous governance structure. The PCIPB had an advisory role and was charged with devising a national plan for CIP. The NIAC targeted information systems, and its main assignment was to encourage private operators in this sector to perform periodic assessments of their vulnerabilities (Moteff, 2015, p. 9).

The third initiative was the National Strategy for Homeland Security (DHS, 2002b). Within the broader framework of the Bush administration's counterterrorism policy, the 90-page document added new sectors to the list of CIs and reshuffled some of the lead agencies previously defined by PDD-63. It did not lead to the creation of new agencies, but expanded and clarified the federal prerogatives (as well as priorities) in terms of CIP. In the same vein, on 17 December 2003, the White House released Homeland Security Presidential Directive 7 (HSPD-7) – our fourth and final initiative. HSPD-7 outlined the role and responsibilities of each lead agency (now renamed sector-specific agency, SSA) and assigned a coordinating role to the DHS, whose top official, the Secretary of Homeland Security, enjoyed the rank of cabinet member (Pesch-Cronin and Marion, 2017, pp. 31–4).

In conclusion, under the Bush administration, CIP policies expanded and gained higher prominence among the Executive's functions. Apart from the partial adjustments already mentioned, the reforms discussed above did not substantially depart from the path marked out by the Clinton administration. The same holds true for the final step that shaped the current form of CIP in the US: the efforts of the Obama administration. Too many initiatives were launched between 2009 and 2016 to be covered here (for a detailed review, see Pesch-Cronin and Marion, 2017, pp. 37–58), but to name a few: Executive Orders 13563, 13636 and 13691; Presidential Policy Directives 8 (PPD-8) and 21 (PPD-21); and the 2013 National Infrastructure Protection Plan
(DHS, 2013). Taken together, these initiatives led to a further expansion of the policies, roles and prerogatives of the federal agencies in charge of CIP: with PPD-21, President Obama mandated an assessment of the existing PPP models with a view to better engaging private actors in CIP; in connection with this, he also tried to improve information sharing, as well as research and development (Setola, Luiijf and Theocharidou, 2016, p. 8); he confirmed all-hazards and resilience as the privileged approaches; finally, he stressed the need to improve cybersecurity policies (Pesch-Cronin and Marion, 2017, p. 58). As should be evident from our previous discussion, none of these efforts represents a significant departure from the early initiatives of the Clinton era.
3.2 The European Union
The European experience is very different from the American one. Obviously, the main point of distinction for the EU is its supranational nature: with a membership of 27 states, each with its own authorities, laws and strategic culture, its main goal – at least until now – has consisted less in developing a top-down policy than in attempting to keep member states' CIP standards and procedures aligned. In other words, unlike the US, EU institutions have been severely constrained in their action by their limited enforcement capacity over member states. So, one may conclude, the European experience in CIP (as in other issue areas) has been driven by a clear overarching goal: policy integration.

Moreover, the early attempts to protect CIs across the continent are more recent than in the US. This is quite surprising, since the level of convergence and interconnectedness among CIs across the territory of the Union is a decades-old reality.7 Nonetheless, it took the tragic experience of the March 2004 Madrid bombings to urge the Commission to take action on CIP. Thus, we may take the December 2004 European Council as the point of departure for our review of EU CIP policy: on that occasion, the Council endorsed the Commission's proposal to set up a European Programme for Critical Infrastructure Protection (EPCIP). EPCIP represents 'the overall framework for activities aimed at improving the protection of CI in Europe—across all EU states and in all relevant sectors of economic activity' (Setola et al., 2016, p. 9).

Within this framework, the first EU effort worth considering is the publication, in November 2005, of a green paper on CIP (EU Commission, 2005). The main tenets of the document were strikingly similar to the early US policies: to engage as many stakeholders as possible in CIP, define which sectors should be included as CIs, promote best practices and information sharing, and so on (Lazari, 2014, p. 46). The paper also highlighted the underlying problem of European CIP: who should be in charge of drawing up the list of European CIs – community institutions or member states? The final draft leaned toward
the second option and embraced a bottom-up approach, thereby frustrating the Commission's ambition to play a major role in promoting integration. Although, admittedly, the green paper did not achieve direct implementation, it received a good deal of attention from member states and other stakeholders. Eventually, it served as the basis for the first major EU initiative: Directive 114/08/EC, adopted by the Council on 8 December 2008. In an attempt to solve the identification problem, Directive 114/08/EC established a common procedure to designate European CIs and assess their vulnerabilities. It also compelled private stakeholders to devise Operator Security Plans (OSPs – that is, the security assets and procedures protecting the CI) and to nominate Security Liaison Officers (SLOs – that is, officers within the CI in charge of communicating with the relevant national critical infrastructure protection authorities). The provision of a review process, to be held in 2012, highlights the attitude that drove the European institutions: the Directive per se was just one step in a long-term process, whose trajectory might be subject to changes, delays, or even speed-ups. And, indeed, compared to US policy, the EU effort was still in its early stages, since Directive 114/08/EC only applied to the energy and transport sectors.

The third and final step dates to 2016, with the so-called Network and Information Systems (NIS) Directive. Although not self-executing, this is the first piece of EU legislation aimed at strengthening the cyber resilience of EU infrastructures: in other words, it compels member states to transpose its provisions into their national legislation, but leaves them a margin of discretion as to their application. The NIS Directive displays some similarity with the Obama policies, as it is inspired by an all-hazards, resilience-oriented approach. Furthermore, just like Directive 114/08/EC, it pays particular attention to the PPP: private operators of CIs and states alike must comply with a number of security requirements, in the form of both safeguarding and information obligations (Michels and Walden, 2018, pp. 16–22). Finally, it requires governments to develop a national security strategy for cyberspace, to appoint a national authority for cybersecurity, and to set up one or more Computer Security Incident Response Teams (CSIRTs). Since, at the time of writing, the Directive has still to be fully implemented by a number of states, it is not possible to fully assess its impact on CIP. However, the scope and binding nature of its provisions suggest it will be seen in the future as a turning point in EU CIP policy.
4 CONCLUSIONS

The introductory remarks of this chapter stressed how states' security agendas have expanded over the past thirty years. As we have seen in the previous pages, the US and Europe are no exception: they have devoted increasing
attention to CIs, and their involvement is destined to grow further in the future. If any confirmation were needed, the COVID-19 pandemic has dispelled all doubts on this score.

The reason CIP is likely to remain a priority for years to come rests on two main factors: the inherent weakness underlying contemporary societies and the steady evolution of information and communication technologies. With regard to the former, one may easily find early and evocative illustrations in Ulrich Beck's (1992) and Anthony Giddens' (1990) definitions of the risk society: in these authors' conceptualizations, technology-driven modernization had a manifold impact on the social order, creating new – mostly human-made – sources of risk. In this respect, CIs represent an added layer of complexity and risk, since automation has led to the 'shift from man to machine' (Lazari, 2014, p. 8). So, while Beck was concerned in particular with the environmental issue, today's dimensions of risk are increasingly multiple. It is therefore one of the main challenges of the day for any political order to manage risk – that is, to keep it under an acceptable threshold and, equally relevant, to allocate its costs across the social fabric (the economic crisis resulting from the pandemic is another case in point). As for the evolution of information and communication technologies (as shown in other chapters in this book), powerful new technologies whose effects are impossible to predict are being developed on an almost daily basis. These new technologies – emerging in the chemical, engineering, cyber and artificial intelligence domains – can be used to strengthen CIs and enhance their resilience, but may also result in formidable new threats.

In conclusion, our discussion highlights a number of open questions that require further research. From an empirical perspective, we are left with no satisfactory solution to the problem of how to credibly mitigate risk: indeed, technological remedies are a necessary but not sufficient condition to guarantee protection, human skills being equally, if not more, important (Van Schewick, 2010); with regard to deliberate attacks (especially in cyberspace), system vulnerabilities make offence easier than defence (for a critique see Slayton, 2017); finally, ethical and legal regimes are of very limited utility when it comes to deterring and responding to attacks (Owens, Dam and Lin, 2009, pp. 36–7). As a result, from a theoretical viewpoint, further analysis is needed to better understand the implications that current technological trends are having, and will have, on our lives. The unpleasant truth we should accept is that current efforts to implement CIP are better seen as a process than as an outcome.
NOTES

1. Data gathered from the COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE), Johns Hopkins University (JHU); last accessed 25 June 2020 at https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6.
2. Author's elaboration on data gathered by the Dipartimento della Protezione Civile; last accessed 25 June 2020 at http://opendatadpc.maps.arcgis.com/apps/opsdashboard/index.html?fbclid=IwAR1MVKZcZPOQ3TG7z1F2FLw3gEsAEc0RoLw24CnFOcS3ZObAimHaiJDbHH8#/b0c68bce2cce478eaac82fe38d4138b1. One may argue that such a difference is due to the limited number of tests performed in Lombardy (956 959 out of a roughly 10 million population) compared to Veneto (876 915 out of a 5 million population). But other evidence suggests otherwise: data gathered by the Istituto Nazionale di Previdenza Sociale (INPS) on mortality rates before and during the pandemic show that in the virus-torn provinces of Brescia, Lodi and Cremona the death rate rose by 300–400 per cent compared to the 2015–19 baseline, whereas in the neighbouring Veneto province of Verona the increase was in the order of 50–100 per cent: see https://www.inps.it/docallegatiNP/Mig/Dati_analisi_bilanci/Nota_CGSA_mortal_Covid19_def.pdf; last accessed 25 June 2020.
3. We will not discuss here the related concepts of 'key resources' and 'key assets'. These terms, originally introduced into the policy jargon by the Homeland Security Act of 2002 (DHS, 2002a) and the National Strategy for Homeland Security (DHS, 2002b), were designed to point out individual platforms or systems that are still important but less vital than a CI. Examples of key resources and key assets are, respectively, a cruise ship and a national monument. Over time, most instances of key resources and key assets have been included as components of the 16 sectors discussed here, so for our purposes they do not deserve separate consideration. For a first-blush comparison of the US and EU visions, see Table 9.1 in Chapter 9 by Giacomello in this book.
4. It is probably impossible to assess the actual proportion of private systems and structures over the total CIs in a given country. However, current estimates suggest that in the US this value is about 85–90 per cent (Bennett, 2018, p. 43).
5. The classical definition of a public good is based on two features: non-rivalry and non-excludability. In other words, the fact that some users benefit from the good does not reduce the amount of consumption available to others (non-rivalry) and, once the good is produced, it is not possible to exclude anyone from it (non-excludability). These properties cause free riding and underproduction.
6. Space limitations prevent us from discussing the single-sector reforms undertaken by the Bush administration. However, it is worth recalling that the Post-Katrina Emergency Management Reform Act of 2006 improved the capacities and procedures of the Federal Emergency Management Agency (FEMA).
7. An early example of the interconnectedness and vulnerability of European CIs is the blackout that struck most of Italy on 28 September 2003: on that occasion, a tree flashover caused a temporary disruption of the Swiss Mettlen–Lavorgo power line. The event sparked a major power outage in Italy, with as many as 50 million people affected by the blackout for several hours.
REFERENCES

Abrahamsen, Rita and Michael C. Williams (2010), Security Beyond the State: Private Security in International Politics, Cambridge, UK: Cambridge University Press.
Assaf, Dan (2007), 'Government intervention in information infrastructure protection', in Eric Goetz and Sujeet Shenoi (eds), Critical Infrastructure Protection, New York: Springer, pp. 29–39.
Beck, Ulrich (1992), Risk Society: Towards a New Modernity, London: SAGE.
Bennett, Brian T. (2018), Understanding, Assessing, and Responding to Terrorism: Protecting Critical Infrastructure and Personnel, 2nd edition, Hoboken, NJ: John Wiley & Sons.
Bologna, Sandro, Alessandro Fasani and Maurizio Martellini (2013), 'From fortress to resilience', in Maurizio Martellini (ed.), Cyber Security: Deterrence and IT Protection for Critical Infrastructures, New York: Springer, pp. 53–6.
Carr, Madeline (2016), 'Public–private partnerships in national cyber-security strategies', International Affairs, 92(1), 43–62.
Caruso, Raul and Andrea Locatelli (2013), 'Company survey series II: Finmeccanica amid international market and state control: a survey of Italian military industry', Defence and Peace Economics, 24(1), 89–104.
Chandler, David and Jon Coaffee (eds) (2017), Routledge Handbook of International Resilience, Abingdon: Routledge.
Chemical Safety Board (CSB) (2017), ExxonMobil Torrance Refinery Electrostatic Precipitator Explosion, Torrance, California, Investigation Report No. 2015-02-I-CA, accessed 25 January 2019 at https://www.csb.gov/exxonmobil-refinery-explosion-/.
Cordesman, Anthony H. and Justin G. Cordesman (2002), Cyber-Threats, Information Warfare, and Critical Infrastructure Protection: Defending the U.S. Homeland, Westport, CT: Praeger.
Department of Homeland Security (2002a), Homeland Security Act of 2002: Public Law 107-296, 25 November 2002, accessed 25 January 2019 at https://www.dhs.gov/sites/default/files/publications/hr_5005_enr.pdf.
Department of Homeland Security (2002b), National Strategy for Homeland Security, Washington, DC, 16 July 2002, accessed 25 January 2019 at https://www.dhs.gov/sites/default/files/publications/nat-strat-hls-2002.pdf.
Department of Homeland Security (2013), NIPP 2013: Partnering for Critical Infrastructure Security and Resilience, accessed 25 January 2019 at https://www.cisa.gov/sites/default/files/publications/national-infrastructure-protection-plan-2013-508.pdf.
Eisler, Ronald (2013), The Fukushima 2011 Disaster, Boca Raton, FL: CRC Press.
Elsner, Ivonne, Andreas Huck and Manas Marathe (2018), 'Resilience', in Jens Ivo Engels (ed.), Key Concepts for Critical Infrastructure Research, Wiesbaden: Springer Fachmedien.
EU Commission (2005), Green Paper on a European Programme for Critical Infrastructure Protection, COM(2005) 576 final, 17 November 2005, accessed 25 January 2019 at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0576&from=EN.
Fasani, Alessandro and Andrea Locatelli (2017), 'Minacce cibernetiche: quali difese?', Vita e Pensiero, 101(4), 35–42.
Ganguly, Auroop Ratan, Udit Bhatia and Stephen E. Flynn (2018), Critical Infrastructures Resilience: Policy and Engineering Principles, Abingdon: Routledge.
Giacomello, Giampiero (2014), 'Rischi e minacce nel cyberspazio', in Paolo Foradori and Giampiero Giacomello (eds), Sicurezza globale: le nuove minacce, Bologna: il Mulino, pp. 237–51.
Giddens, Anthony (1990), The Consequences of Modernity, Cambridge, UK: Polity Press.
Government Accountability Office (GAO) (2006), Hurricane Katrina: Strategic Planning Needed to Guide Future Enhancements Beyond Interim Levee Repairs, GAO Report to Congressional Committees, GAO-06-934, accessed 25 January 2019 at http://www.gao.gov/cgi-bin/getrpt?GAO-06-934.
Grant, Vaughan (2018), 'Critical infrastructure public–private partnerships: when is the responsibility for leadership exchanged?', Security Challenges, 14(1), 40–52.
Hyde-Price, Adrian (2001), 'Beware the Jabberwock! Security studies in the twenty-first century', in Heinz Gaertner, Adrian Hyde-Price and Erich Reiter (eds), Europe's New Security Challenges, London: Lynne Rienner, pp. 27–54.
Lazari, Alessandro (2014), European Critical Infrastructure Protection, Heidelberg: Springer International.
Michels, Johan D. and Ian Walden (2018), 'How safe is enough? Improving cybersecurity in Europe's critical infrastructure under the NIS Directive', Queen Mary University of London, School of Law Legal Studies Research Paper 291/2018, accessed 25 January 2019 at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3297470.
Moteff, John D. (2015), Critical Infrastructures: Background, Policy, and Implementation, Washington, DC: Congressional Research Service, 10 June 2015, accessed 25 January 2019 at https://fas.org/sgp/crs/homesec/RL30153.pdf.
Owens, William A., Kenneth W. Dam and Herbert S. Lin (2009), Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities, Washington, DC: The National Academies Press.
Pesch-Cronin, Kelley A. and Nancy E. Marion (2017), Critical Infrastructure Protection, Risk Management, and Resilience: A Policy Perspective, Boca Raton, FL: CRC Press.
Setola, Roberto, Eric Luiijf and Marianthi Theocharidou (2016), 'Critical infrastructures, protection and resilience', in Roberto Setola, Vittorio Rosato, Elias Kyriakides and Erich Rome (eds), Managing the Complexity of Critical Infrastructures: A Modelling and Simulation Approach, Cham: Springer, pp. 1–18.
Slayton, Rebecca (2017), 'What is the cyber offense–defense balance? Conceptions, causes, and assessment', International Security, 41(3), 72–109.
Stiglitz, Joseph E. and Scott J. Wallsten (1999), 'Public–private technology partnerships: promises and pitfalls', American Behavioral Scientist, 43(1), 52–73.
Sundelius, Bengt (2005), 'Disruptions – functional security for the EU', in Antonio Missiroli (ed.), EUISS Chaillot Paper No. 83: Disasters, Diseases, Disruptions: A New D-Drive for the EU, pp. 67–84.
The White House (1998), The Clinton Administration's Policy on Critical Infrastructure Protection: Presidential Decision Directive 63, 22 May 1998, accessed 25 January 2019 at https://fas.org/irp/offdocs/paper598.htm.
The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States (2004), New York: W.W. Norton & Co.
Tierney, Kathleen and Michel Bruneau (2007), 'Conceptualizing and measuring resilience: a key to disaster loss reduction', TR News 250, accessed 25 January 2019 at https://onlinepubs.trb.org/onlinepubs/trnews/trnews250_p14-17.pdf.
Vaillancourt Rosenau, Pauline (1999), 'The strengths and weaknesses of public–private policy partnerships', American Behavioral Scientist, 43(1), 10–34.
Van Schewick, Barbara (2010), Internet Architecture and Innovation, Cambridge, MA: MIT Press.
Walt, Stephen (1991), 'The renaissance of security studies', International Studies Quarterly, 35(2), 211–39.
9. A perfect storm: privatization, public–private partnership and the security of critical infrastructure

Giampiero Giacomello

1 INTRODUCTION

Any country (and its government), whether modern and advanced, relatively developed or even rather backward, inevitably has to rely to some degree on a more or less intricate system architecture of computer networks in order to function and to produce and distribute services and goods to all its citizens. The latter are called 'critical infrastructures', or the 'arteries and veins' of societies and economies (this is why they are defined as critical; see Cohen, 2010): financial services, energy, transportation and transit services, information and communication, water supply, police, health and medical services, public administration, government and others. For example, think of what the COVID-19 lockdown would have been like without computer networks: no smart working, no online classes, no remote meetings, and so on. Adding a further layer of complexity, these systems and their information flow assets have been increasingly digitized and computerized to become critical information infrastructures (CIIs) – in other words, adding 'nerves' to the 'arteries and veins'. As the reader can easily guess, the larger and more complex such systems are, the more catastrophic the effects may be if they break down (De Bruijne and Van Eeten, 2007; Metzger, 2004; Perrow, 2011).

In the past, destroying or even disrupting physical infrastructures required substantial resources and energy. Strategic and theater infrastructure systems have always been subject to (military) attacks. The destruction of critical infrastructures, however, was not always easy to accomplish, as demonstrated by the bombing campaign against Germany in World War II (Newland and Chun, 2011, especially pp. 112–30). Today, major disruptions of the CII, via computer network attacks, would indeed have serious consequences for the well-being and wealth of the people affected (like 'cutting the nerves' of an adversary). A typical power outage or flight delays are moderate
manifestations of such an outcome, which could be aggravated by several orders of magnitude. Moreover, a failure in any of the CIIs would most likely send negative 'ripples' through other systems, thus creating a cascading disaster. Last but not least, although CIIs are mostly developed and built at the 'national' level, they are also linked to the infrastructures of other countries; thus, the collapse of a portion of a country's CIIs would have a serious impact on its neighbors as well.1

Energy, money, information and other goods and services are accurately distributed thanks to the CIIs; hence it is hardly surprising that cyberattacks, of various types and with different objectives, are always on the rise.2 Indeed, the experience of the COVID-19 pandemic has shown a remarkable surge in cybercrime (especially scams and fake news).

An inherent and notable problem of computer networks in general, and thus of CIIs, is that security was never a top concern for early software developers and engineers. And once computer networks 'merged' with public utilities, it became clear that CIIs could not be treated as simple 'profit maximizers', because security concerns had to be taken into consideration. Admittedly, 'security' (in and by itself) is a rather problematic concept (Rothschild, 2007), albeit today a much-studied topic. Ranging 'wide and deep' (Buzan and Hansen, 2009), security today may include climate change and migration. Remarkably, economists have always been wary of tackling security because, among other reasons (Goodwin, 1991), it is a 'large state sector' and a public service for which economic models display an irritating unfitness. Cybersecurity is only its latest addendum (Giacomello, 2013), and 'cybersecurity is a public good, which implies that without government intervention, it will not be produced' (Van Eeten and Bauer, 2009, p. 230).3

The public–private partnership (PPP) has been welcomed as a possible solution for CII security and protection (a 'revolution' according to Grimsey and Lewis, 2007), and national governments in Europe and North America have embraced it. The results, however, have been, at best, mixed (Forrer et al., 2010; Linder, 1999; Wang, 2009). Many, like Dixon and Kouzmin (2003), have cast doubts on the logic of 'less state and less taxes', which raises serious policy questions about social resilience and governance capacities across diverse jurisdictions. As observed by Almklov and Antonsen (2010, p. 133), '[w]hile the operation, maintenance and protection of critical infrastructures were traditionally seen as a public responsibility, the trend is now that this responsibility is transferred to the private sector or at least influenced by private-like modes of organizing'. Unsurprisingly, more doubts have emerged (Abele-Wigert, 2006; Andersson and Malm, 2006), and alternatives to privatizing public utilities have been hard to find.

Above all, acute observers have noted that, while in general PPP may be a viable solution in certain instances even for infrastructures, the application
of such a solution should be highly selective, and governments should become 'smarter' about when and where to cede their authority to the private sector (Dixon and Kouzmin, 2003). The area in which care should be most exercised is that of 'security', and the protection and defense of CIIs are pivotal components of security.

This chapter contends that the protection of CIIs has today deteriorated because of a combination of events that created a 'perfect storm', hard to forecast but with dire consequences. Three key events led to the present condition of vulnerability of the CIIs, each worsening the situation: (1) the 'Internetization'4 of the management of critical services; (2) the privatization of public utilities and services in the 1990s; and (3) adherence to the PPP doctrine even in the area of security. Thus far, the linking of the three events and their long-term consequences has not been appreciated and has largely remained outside even the specialized literature (e.g., Dunn Cavelty and Suter, 2009). In the next section the chapter summarizes what CIIs are and then proceeds to examine the three events. This historical scrutiny is crucial to understanding the process that has led to the present conditions.

Methodologically, the events considered here and their consequences are a good example of path dependence. Path dependence is a method of research often applied in economics and originating in the work of David (1985) on the QWERTY keyboard. The idea is that small initial advantages or a few minor random shocks along the way can alter the course of history. Or, as Bebchuk and Roe (1999, p. 37) put it, 'an economy's initial ownership structures directly influence subsequent choices of ownership structure'. There is little doubt that history can be quite important for explaining strategic choices and organizational failures. Sydow, Schreyögg and Koch (2009, p. 690) identify three developmental phases of path dependence: (1) singular historical events, (2) which may, under certain conditions, transform themselves into self-reinforcing dynamics, and (3) possibly end up in an organizational lock-in. This is precisely what happened with the vulnerability of CIIs: Internetization and privatization (PPP being an evolution of privatization for infrastructures) merged in unanticipated ways to generate self-reinforcing dynamics, which ended up locking in the vulnerability of CIIs. It is now taking a considerable amount of political will to restore security in this field.
2 CRITICAL INFORMATION INFRASTRUCTURE TODAY
For a long time governments were the sole 'owners' of critical infrastructures: they paid for them and protected them. Today, most of those infrastructures are not only owned and/or operated by private businesses, but they are also tied
together via computer networks, which create opportunities for an attacker to provoke cascade effects by hitting the control and management nodes. The US National Institute of Standards and Technology (NIST) defines critical infrastructures as those 'systems and assets, whether physical or virtual, so vital to the U.S. that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters' (Kissel, 2013). Likewise, the European Commission describes critical infrastructures as 'physical and information technology facilities, networks, services and assets that, if disrupted or destroyed, would have a serious impact on the health, safety, security or economic well-being of citizens or the effective functioning of governments in EU States' (European Commission, DG Home Affairs, n.d.; see also European Council, 2008).

Control mechanisms of various types connect infrastructures at multiple points, creating a bidirectional relationship between each given pair of infrastructures. Not only are infrastructures united by such close relationships, but they are also tied to one another across countries. This condition is even more visible in a highly integrated geographical area such as Europe.5 While there are some differences among the United States, the EU and other economically advanced countries such as Japan, Australia, Canada and South Korea on what precisely constitutes the CIIs (the EU has a common definition) (Abele-Wigert and Dunn, 2006), these differences tend to be more terminological than substantial (see Table 9.1 for example). Indeed, almost any government would agree that critical infrastructures crisscross many sectors of the economy, including banking and finance, transport and distribution, energy, utilities, health, food supply and communications, as well as key government services, since all countries must rely on the same information systems and processes to survive. Table 9.1 shows how indispensable CIIs are for so many segments of modern life.

Before proceeding further, however, it is important to clarify three essential terms that represent fundamental qualities of CIIs in this context:6 reliability, resilience and redundancy. Reliability is the ability of an apparatus, machine or system to consistently perform its intended or required function or mission, on demand and without degradation or failure; it can also be expressed as the probability of failure-free performance – the higher the better, of course. The ability of CIIs to withstand attack and keep functioning, or to recover quickly ('bouncing back'), is called resilience. Resilience, according to the Oxford Dictionary of Current English, is the 'quality or property of quickly recovering the original shape or condition, after being pulled, pressed, crushed etc.' (or attacked, for that matter). Resilience can be fostered through various means, technical and organizational (and, for humans, psychological). One such (technical) means is to
replicate control systems one or more times, so that if the main one fails, there is a second or even a third back-up. This is called redundancy. Redundancy is a type of resilience, although the two are distinct: the latter is broader and more comprehensive than the former. Clearly, the more critical a system – think of the computers aboard a passenger airplane – the greater the desirable redundancy.

Table 9.1 Critical infrastructures in the US and the EU

| United States | European Union |
| --- | --- |
| Chemical | (See 'Production') |
| Commercial facilities | – |
| Communications | Communications and information technology (e.g., telecommunications, broadcasting systems, software, hardware and networks including the Internet) |
| Critical manufacturing | – |
| Dams | (See 'Water') |
| Defense industrial base | – |
| Emergency services | (See 'Healthcare') |
| Energy | Energy installations and networks (e.g., electrical power, oil and gas production, storage facilities and refineries, transmission and distribution system) |
| Financial services | Finance (e.g., banking, securities and investment) |
| Food and agriculture | Food (e.g., safety, production means, wholesale distribution and food industry) |
| Government facilities | Government (e.g., critical services, facilities, information networks, assets and key national sites and monuments) |
| Healthcare and public health | Healthcare (e.g., hospitals, healthcare and blood supply facilities, laboratories and pharmaceuticals, search and rescue, emergency services) |
| Information technology | (See 'Communications and information technology') |
| Nuclear reactors, materials and waste | Production, storage and transport of dangerous goods (e.g., chemical, biological, radiological and nuclear materials) |
| Transportation systems | Transport (e.g., airports, ports, intermodal facilities, railway and mass transit networks, traffic control systems) |
| Water and wastewater systems | Water (e.g., dams, storage, treatment and networks) |

Sources: Cybersecurity & Infrastructure Security Agency (n.d.); European Commission, DG Home Affairs (n.d.).
Redundancy, however, like any duplication, can also be viewed as wastage and/or inefficiency, so long as the main system works fine and never fails. This is an unresolved problem when it comes to CIIs, and one that presents an interesting puzzle in the three events reviewed in this chapter.

Guaranteeing reliability and resilience, via redundancy, amounts to protecting the CIIs; in other words, it is the 'security' (from the Latin se curare, taking care of) of those infrastructures. Security, however, is the archetypical externality. As with pollution, externalities produce collective/societal problems without motivating sufficient private action, and it is not clear who should bear the costs. If the cost of failure is not borne directly by the owners/managers of the infrastructure, there are no incentives to improve conditions. At present, if a CII utility were attacked, its management would argue that this was an exceptional situation and thus outside its obligations. It could even contend that the attacker might have been a foreign power, hence a responsibility of the government, not of the private sector. The same could be said of cybercrime, since law enforcement is (mostly) an activity of the state.

All three of these qualities of CIIs have been affected, generally negatively, by privatization. 'Internetization' and the consequent 'PPP doctrine' have not improved the situation much, as we will see in the last section of the chapter.

Clearly, there is a sort of hierarchy of infrastructures. Unsurprisingly, electrical power and telecoms are 'the most critical infrastructures, on which virtually all the others depend' (Lukasik, Goodman and Longhurst, 2003, p. 7). Power generation and distribution is, effectively, one of the most essential infrastructures, whose failure would affect all other CIIs, as demonstrated in literature reviews (Antonsen et al., 2010) as well as in empirical research (Almklov and Antonsen, 2010; Miller, Antonio and Bonanno, 2011).
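To see why redundancy is so effective, and why its cost is so visible while its benefit is not, a one-line calculation suffices. Assuming, purely for illustration (the figures below are not from this chapter), that each independent replica of a control system runs failure-free over a given year with probability r, an n-way redundant system fails only when every replica fails:

```latex
% Reliability of an n-way redundant system of independent replicas,
% each with (assumed, illustrative) reliability r:
R_{\text{system}} = 1 - (1 - r)^{n}
% With r = 0.99 over one year:
%   n = 1:  R = 0.99        (roughly 3.7 days of outage per year)
%   n = 2:  R = 0.9999      (roughly 53 minutes per year)
%   n = 3:  R = 0.999999    (roughly 32 seconds per year)
```

Each added replica roughly doubles or triples the capital cost while cutting expected downtime a hundredfold; in 'normal' times, however, that hundredfold gain is invisible, which is the puzzle running through the three events examined next.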
3 CRUCIAL EVENTS
Two of the decisive events whose unintentional convergence has resulted in today's higher degree of CII vulnerability took place in the 1990s, while the third became increasingly relevant in the 2000s. We will examine them starting with the more 'technological' one – namely, the 'Internetization' of business communication – and then proceed to the two economic/organizational ones – that is, the privatization 'wave' of the 1980s–90s and the advent of the 'PPP doctrine'. As Griffin (1993, p. 1100) notes, narratives must be 'unpacked' and analytically reconstituted to build a replicable causal interpretation of a historical event. Thus, here we investigate why and how the (structural) lack of security of computer networks ended up being incorporated into critical information infrastructures. Owing to the negative synergy of mutually reinforcing effects generated by 'Internetization',
privatization and the 'PPP doctrine', the whole structure of CIIs (highly interdependent to start with) has thus become more vulnerable.

A Revolutionary Change

Back in the early 2000s, Metzger (2004) warned of the risk that infrastructures might emerge with built-in instabilities, critical points of failure and extensive interdependencies. Nonetheless, he attributed this risk to the 'pressure to reduce the time from designing a product or service to marketing it' (p. 199). In reality, the absence of security or safety features has always been one of the distinctive traits of the Internet (Portnoy and Goodman, 2009, p. 1), simply because it was never intended to carry sensitive data or information that should be protected.

Critical infrastructures became CIIs when the Internet went public. It is well known that the Internet was born in the 1960s, but its use remained exclusive to the US federal government, research centers and universities until the early 1990s. Then, in 1993–94, the US government generously opened it to the general public. Business took notice, and soon the CIIs ended up incorporating the 'birth defect', the 'original sin' of the Internet – namely, its (almost) total lack of security – into their own inherent, existing vulnerabilities.

This is how it happened. Up to the mid-1990s, the management and distribution of critical services and goods were achieved through 'proprietary' channels that hardly overlapped and rarely needed similar policies or sets of instructions to function. Security was achieved 'through obscurity': since few people knew the details and secrets of 'obscure' operating systems, the risk of someone exploiting vulnerabilities was fairly low. Above all, maintaining those systems required personnel to physically check nodes, controllers and pipes, and to monitor the performance of the systems themselves.

With the diffusion of telecommunications in the 1960s and 1970s, and in particular the coming of age of computer networks and the Internet, a 'window of opportunity' opened up to make such control more efficient and, above all, to save money. At some point, it was discovered that it would be much more efficient, and hence less expensive, to control the nodes and switches remotely (Medida, Sreekumar and Prasad, 1998). Maintenance personnel and engineers would no longer be obliged to travel from point to point to gather data, tweak the nodes or upgrade systems. All these operations could be performed from afar, at a fraction of the cost and with a fraction of the personnel previously required. The Internet was already available at virtually no cost, provided that the Internet protocols (and the other protocols that make computer networks work) were used. It was a 'merge' that greatly benefitted companies and their customers, but 'security' was not a top priority.
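What 'remote control over open protocols' means in practice can be sketched with Modbus/TCP, one of the protocol families commonly carried over TCP/IP in industrial settings of the kind discussed in the next paragraphs. The sketch below is illustrative only – the device address and register numbers are hypothetical – but the design point is visible in the packet layout itself: the protocol, as originally specified, carries no authentication and no encryption.

```python
import socket
import struct

# Minimal Modbus/TCP 'read holding registers' request (function code 0x03).
# Note what is absent from the packet: no credentials, no session, no crypto.

HOST = "192.0.2.10"   # hypothetical controller address (TEST-NET, illustration only)
PORT = 502            # standard Modbus/TCP port

def read_holding_registers(start: int, count: int) -> bytes:
    # MBAP header: transaction id = 1, protocol id = 0,
    # length of what follows = 6 bytes, unit id = 1
    mbap = struct.pack(">HHHB", 1, 0, 6, 1)
    # PDU: function 0x03, first register address, number of registers
    pdu = struct.pack(">BHH", 0x03, start, count)
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.sendall(mbap + pdu)
        return conn.recv(256)   # raw reply: echoed header plus register values

if __name__ == "__main__":
    print(read_holding_registers(0, 2).hex())
```

Any host that can reach such a device may issue such requests (write requests are equally unauthenticated), and the replies travel in cleartext, which is why note 4's remark about simple 'sniffer' tools is no exaggeration: the vulnerability is a property of the protocol, not a bug in any one product.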
Supervisory control and data acquisition (SCADA) and industrial control systems (ICS) thus became ubiquitous. As more and more companies flocked to this money-saving solution, no business could afford to disregard one of the fundamental tenets of the market economy: if all competitors cut costs (here, by switching to ICS/SCADA), one must follow suit or risk being punished by consumers.

The downside of this essential change in the business model was that TCP/IP (the Internet protocol suite) 'transferred' its own structural vulnerabilities to SCADA. Few, if any, within the business community or governments realized that it would be necessary to pay higher prices for a 'more secure' Internet, and thus more resilient CIIs. Many professionals working with ICS/SCADA, as researchers at the SANS Institute demonstrated in 2013, are now well aware of the intrinsic vulnerability of these systems. Nevertheless, ameliorating this state of affairs, even now, remains a hard sell (SANS Institute, 2013).

Another example can further illustrate the escalation in CII vulnerability that this change entailed. If there is no pressure, either from the government or from consumers, to make redundancy the rule for CIIs, then there is no need – or benefit – in doing so (Schneier, 2003, p. 41). Redundancy is the duplication, or triplication, of control systems and procedures, so that if one safety check or data monitor fails or is compromised from the outside, there is another, and perhaps yet another, back-up, and the infrastructure continues to perform smoothly. Redundancy is the rule in airplanes, for example, so that if something malfunctions, the aircraft keeps flying. No one would complain about the high cost of this 'necessity', because the consequences would be catastrophic and apparent to all. The safety features and control mechanisms in a nuclear power plant are also redundant, for the same reasons, and few would object to them either. However, the switches and process controls that allow power to go from the plant to the electric grid are only rarely redundant, following the same business logic as above. If those controls were compromised, a country would experience outages, just as if someone had stopped the power plant itself.7

Attacks on ICS/SCADA tend to be quite different from the events that grab media attention. In 2007, the 'shutting down' of Estonia was achieved through distributed denial-of-service (DDoS) attacks. DDoSs are quite common, have been repeated in many instances and circumstances, and the most dramatic of them have been widely reported as 'cyberwar'. But such events, even when organized on a large scale, are little more than mischief and nuisance (albeit sometimes presented as acts of cyberwar). Some of the tools used by script-kiddies and real cyber-warriors are often the same or very similar: 'trojans', for instance, are employed by both. Nonetheless, while DDoSs transform taken-over computers into 'robots' (bots) in order to saturate Web servers (mostly with ad hoc 'worms'), trojans, once inside the system,
try hard not to be conspicuous. 'Cyber-prankers' want their exploits to be recognized by the public, while criminals aiming at profit want the victims to stay in the dark and feel 'safe' for as long as possible. Indeed, they would prefer it if no one ever knew (even after they left) that they were inside the system. They too may use worms and bots, but to reconnoiter networks and servers until they find the specified target (Stuxnet is the most notable example of this type of operation). Once the victim has been positively identified (and administrator's privileges acquired), cyber-warriors in the service of a sovereign government or black-hat hackers working for organized crime8 may steal all the information, change every security parameter, place back-doors (many of them, so that they may re-enter the perimeter whenever they want), plant a time-bomb set to 'explode' at a certain point in the future (thus disabling the computer) and so on. It is somewhat paradoxical that, despite their potential level of threat, most ordinary computer users are reasonably safe from them,9 as the (relatively) few individuals who possess such skills work for governments or organized crime gangs that tend to consider them a 'strategic' resource to be employed only for high-value operations.

What we have here is a clear example of 'structure-driven path dependence'. As Bebchuk and Roe (1999, p. 10) note, '[t]he first reason for structure-driven path dependence is grounded in efficiency' (the other being grounded in rent-seeking). In point of fact, 'Internetization' generated greater efficiency in businesses, but at the cost of transmitting the original flaw of the TCP/IP protocol (no security mechanism) to the management of CIIs. What has allowed this 'flaw' to spread to public utilities has been a 'privatization mentality' that has characterized liberal economies over the past 30 years.

Another Change: The Privatization Wave

The situation described above would, per se, have been problematic but manageable, especially if massive public investments had been made available. Y2K, also known as the 'millennium bug', constitutes a good example of a potential problem that was solved by devoting enough personnel and resources to it (Fayad, 2002). What happens, however, if the owners and managers of public utilities are unwilling to provide such funds and personnel? What happens if a company providing public services considers redundancy of control systems wasteful duplication and sees no reason to cut revenues to increase protection? Particularly if threats to such services are regarded as remote or unlikely? Certainly, thoughtful managers and CEOs would pay attention to studies and scenarios portraying the risks of disrupted infrastructures and act accordingly, wouldn't they? Perhaps, but if they lived through the phase of ideological privatization of the 1980s and early 1990s, it is highly
unlikely that they will have the right frame of mind and organizational culture to allocate funds for duplication and redundancy. This is where the second critical event comes into play.

In the mid-1970s, the efficiency of state-led development strategies began to be questioned, because those strategies had failed either to increase national control over the economy or to deliver greater efficiency. Even the Soviet Union, whose ideology was based on government interventionist policies and which had long been considered by many a better alternative to capitalism, started showing early signs of possible failure. Scholars, economists and politicians alike were therefore looking for an alternative that could solve the problems of these somewhat failed strategies (Henry, 2006). The answer was 'privatization' – that is, the ceding of state functions and/or assets, in full or in part, to private actors (Donahue, 1989; Fitzgerald, 1988; Martin, 1993; Teisman and Klijn, 2002; Vickers and Wright, 1988). There have been many organizational variants of privatization, but management through market mechanisms and the commodification of services have been the common denominator (Almklov and Antonsen, 2010; Antonsen et al., 2010). It was, as Sheil (2000) put it, the ultimate 'corporatization' of the public sector.

In the mid-1980s and early 1990s, first in Western societies and then in the rest of the world, a privatization frenzy took place. The path to 'better' government itself was seen to lie through privatization (Savas, 1987). This 'sweeping revolution' in the liberal business model has been severely criticized (see the next section), although many governments, particularly in the English-speaking world, have not really altered their uncritical embrace of neoliberal economics. There are several causes of this durability. First, each country molded the process of privatization to its own entrepreneurial and business culture (Feigenbaum and Henig, 1994; Linder, 1999); where there was no business culture, state control of the economy was replaced with something hardly preferable – rentiers. Second, privatization was presented by its supporters merely as a tool available to governments for achieving greater efficiency and better administration (Pirie, 1988; Savas, 1987).

The privatization wave was not inevitable (Henry, 2006), and might have been a simple means to better ends, but it did have strong political underpinnings (Feigenbaum and Henig, 1994). The elections of the Conservative Margaret Thatcher in Great Britain and the Republican Ronald Reagan in the United States delivered two powerful political leaders convinced that 'too much government', with its higher taxes, was the cause of the economic malaise of the West (Redenius, 1983). The 'market', on the other hand, had a natural economic superiority in terms of allocation of resources and efficiency: supply-side economics was thus born (Donahue, 1989; Fitzgerald, 1988; Redenius, 1983). Supply-side economists insisted that the most significant and efficient way of allocating resources was through the decentralized, individual, private
decision maker, and not through the centralized, collective public-sector decision maker (e.g., Donahue, 1989; Fitzgerald, 1988). In many instances, and in particular where competition is possible, this is probably the case. The Reagan and Thatcher governments also contributed to exporting that ideology to developing countries. Their political direction was later followed by the Democrat Bill Clinton and the Labourite Tony Blair, who favored its adoption in Eastern Europe, Russia and elsewhere. Advocacy of privatization as the most appropriate tool for governance also worked because its competitor – namely, nationalization – had been thoroughly discredited in socialist and developing countries. Thus, it was not the best tool, just the only one.

It is important to point out that privatization may work where competition among diverse players is actually possible, which is why it was successful with airlines and telecom companies.10 With monopolies, privatization created new privileges, or greater inefficiency at greater cost – and many public infrastructures were indeed monopolies. Infrastructures that had been public for a long time became privately operated or even privately owned; business logic was applied, and cutting costs to increase profit became paramount. The Internet and SCADA were thus a remarkable opportunity not to be missed. This state of affairs also applied to public utilities, where safety and the continuity of services should have been the first priorities.

The resilience of network utilities can be achieved in different ways and to different degrees. As noted above, one way is to duplicate (or triplicate) controls, monitoring and safety devices, which are thereby made redundant. Far from being a negative feature, redundancy is the safest guarantee for critical systems: if the first control or safety device fails, there is a second and perhaps even a third. The fact that airplanes have two back-up control systems (in addition to the main one) is a tremendous guarantee that lets us keep taking those planes. The private sector, however, has no intention whatsoever of paying for duplication, which it regards as plainly inconsistent with market rules. Governments, which also benefited from the privatization of public utilities, would equally be reluctant to bear the costs alone now. Nonetheless, in the case of CII failure and cascading disasters, it will be governments that have to 'foot the bill' after societies have suffered the consequences. This is another example of what Palm (2008) calls 'responsibility gaps', which arise between public and private actors when it comes to taking responsibility for long-term planning and emergency management.

The ideology of privatization as a simple tool for economic governance was justified on the grounds of 'private means for public ends' (Donahue, 1989) or 'better alternatives' (Fitzgerald, 1988). Nevertheless, in the new millennium, the myth of privatization as a juxtaposition of 'the state and the market' had to be modernized. The state giving away its assets and reducing its role in
the economy had become too ideologically charged and had to be replaced by new institutional instruments that stressed cooperation and common objectives of greater efficiency and good governance of the economy. This trend led to the emergence of the PPP, the public–private partnership.
4 A PPP SOLUTION?
As shown in the previous section, 'privatization' was the 1980s' answer to the argument that, when it comes to the production and distribution of goods and services, governments perform poorly (Teisman and Klijn, 2002). This predominant line of thinking has, of course, been abundantly criticized, not only by political theorists sharing Karl Polanyi's criticisms of neoliberal economics (e.g., Block and Somers, 2014), but also by scholars of the administrative sciences (e.g., Forrer et al., 2010; Kouzmin, 2009; Wang, 2009) and by empirical researchers (e.g., Almklov and Antonsen, 2010; Miller et al., 2011). Although it was initially perceived as a 'worldwide revolution' (Grimsey and Lewis, 2007), with the waning of ideological fervor in the 1980s and early 1990s and the growing complexity of modern states, 'pure' privatization became inexpedient. The simple equation 'public bad, private good' could not hold true under every circumstance. Moreover, governments wanted more 'inclusiveness'; thus arose the PPP scheme.

The PPP is correctly viewed as a 'derivative of the privatization movement, which captivated conservative leaders on both sides of the Atlantic' in the 1980s (Linder, 1999, p. 37). The privatization movement clarified the separation between public and private, and opposed any division of responsibility between the two (Linder, 1999). Nonetheless, as the politically charged 1980s receded and it was realized that a strict public/private separation would not work in many circumstances, the idea of partnership emerged as a more viable solution. If privately owned companies are to provide public services, the state has to exercise some control, particularly over pricing (Broadbent and Laughlin, 2003).

As important as PPPs might be for the world economy, there is 'no standard, internationally-accepted definition' of them, the World Bank admits.11 Likewise, the European Union accepts that the term public–private partnership ('PPP') is not defined 'at Community level' (European Commission, 2004). Generally speaking, within the EU the term refers to 'forms of cooperation between public authorities and the world of business which aim to ensure the funding, construction, renovation, management or maintenance of an infrastructure or the provision of a service' (ibid.). Grimsey and Lewis (2002, p. 108) define a PPP as a long-term contractual agreement under which private entities construct or manage public infrastructures or provide services through them. Already in 2004, the Commission acknowledged that the PPP was the cornerstone of most of its policies (and so did the European Investment
Bank). PPPs also seemed to provide a new, effective and legitimate tool for transnational governance (Borzel and Risse, 2005). The main sectors for these investments are energy, transportation, and water and sanitation – all critical service sectors.

More recently, however, the popularity of PPPs has become more moderate. For example, Hodge and Greve (2007) claim that evaluations point to contradictory results regarding their effectiveness, and that greater care is needed in strengthening future appraisals and conducting assessments. Hodge and Greve go so far as to use the term 'policy cheerleaders' to underline governments' eagerness for PPPs. Empirical evidence of such problems is now being recorded (Newlove-Eriksson, Giacomello and Eriksson, 2018; Sheil, 2000). The problem of the new stakeholders' willingness to invest enough funds to make the CII more resilient, including through redundancy, is also underlined by those who doubt whether PPP mixes can properly be described as partnerships at all (Wettenhall, 2003). Moreover, even if, via the PPP, governments and the private sector try to find a way out of this ominous situation, it should not be forgotten that organizational theory shows that institutional fragmentation – that is, too many stakeholders – negatively affects the ability to reliably manage critical systems, with potentially quite serious consequences (Perrow [1984], 2011).

What is most worrisome, from the societal viewpoint, is that the PPP doctrine and, before it, the privatization wave are at odds with the field of security (which is 'uneconomic'). Business logic, on the other hand, demands that money and resources not be wasted on something that, in 'normal' conditions, works just fine.12 It would be difficult to disagree with the statement that if something increases the potential vulnerability of a system, that 'something' is a security threat. The privatization push of the 1980s in the United States and Great Britain, and of the 1990s in Europe, was based on the assumption of 'same services at lower prices' – that is, more efficiency and (some) profit for the private investor. More security – for instance, via system redundancy – meant spending on contingencies that in 'normal' times would not occur; hence, it ran contrary to the very culture of privatization, above all as wasteful spending. Updating cables, pipes and tubes is the most that reasonable private managers would do with public utilities and many CIIs; duplicating them, so as to guarantee security through redundancy, would be totally out of the question, as would putting resources into hardening SCADA systems. What all governments and the public have experienced in the first phase of the COVID-19 pandemic, however, has taught a hard lesson about being better prepared and more resilient.
5 CONCLUSIONS

This chapter has examined how the conjuncture of three historical events, all linked to the demand for greater business efficiency, has over the first 20 years of the 2000s led to greater vulnerability of critical information infrastructures (CIIs). Privatization, 'Internetization' and the 'PPP doctrine' have all contributed to aggravating the problem of CII vulnerability, which only recently, and under some pressure from governments, has begun to be addressed. As one practitioner with long experience in the field noted, since '[t]he privatization of the utilities and the adoption of "Just in Time" delivery techniques for food and fuel means there is very little "give" in the system to cater for unexpected events' (Hyslop, 2007, p. 5), it was about time governments and the public took note.

All three synergetic events discussed here have had a multiplying effect on the degree of CII vulnerability. It is probably true that '[r]esearch now supports the proposition that privately owned firms are more efficient and more profitable than otherwise comparable state-owned firms' (Megginson and Netter, 2001, p. 380), but clearly not all industries are the same. State ownership of a taxi or chocolate company is not the same as state ownership of public utilities. If for the former privatization was a viable answer, the same cannot be said of public utilities, especially since the enthusiastic embrace of TCP/IP (the Internet protocol) communication by businesses seeking to reduce costs has also lowered the level of security of CIIs.

Modern societies would demand that a 'balance' of anticipation and resilience policies be applied to solving the problems exposed here and to protecting the CIIs. Effective anticipation, however, requires precise assessment of the risk, which was already difficult (albeit not impossible) when every infrastructure was separate from the others. Today, with networks, webs and grids all interconnected, cascade effects make effective anticipation next to impossible. Resilience through redundancy is unlikely to be implemented, especially to a satisfactory degree, because redundancy is a violation of market rules. When utilities and other critical infrastructures were only public, demands for greater investment to increase security would (somehow) have been met. With multiple stakeholders and the PPP, however, all the parties involved perfectly agree about cutting costs, and equally disagree about who should bear the price tag for reducing the vulnerabilities of CIIs. It may be, however, that the 2020 pandemic convinces all businesses (not only the airline industry, for example) that redundancy is indeed indispensable.

On the one hand, for businesses the public utilities that are part of the CII (e.g., water and power distribution) are public goods, and thus it is the state that should provide the money; at the same time, the private sector is adamant about sharing the profits that may be generated by managing public
utilities. On the other hand, privatizing was, and still is, perceived as a way to save money and increase management efficiency. As a result, governments are reluctant to be the only party providing the necessary funds (and perhaps increasing taxation for the purpose) if the revenues are then shared with private investors. Indeed, taking their cue from disaster recovery, Miller et al. (2011, p. 512) come to a substantial conclusion that can easily be applied to the whole field of infrastructure development and protection: '[i]nfrastructure upgrades that might sharply reduce [hurricane] Ike-type damages are excluded by cost-benefit analyses geared to maximize company and stockholder short-term interests'.

It might be true that a 'private sector supplier becomes a PPP when it provides public services' (Broadbent and Laughlin, 2003, p. 333), but in the upshot things are much more controversial (Hodge and Greve, 2007; Wang, 2009). Moreover, as Adam Smith already theorized (and as demonstrated here for CIIs), safety and security have always been quite at odds with economic efficiency (Bailes and Frommelt, 2004). Societies should hence exercise great care in considering what conditions are necessary for protecting the public interest at a time when market forces exercise great power (Almklov and Antonsen, 2010; Dixon and Kouzmin, 2003; Dunn Cavelty and Suter, 2009; Kouzmin, 2009; Linder, 1999; Miller et al., 2011; Sheil, 2000; Wettenhall, 2003). The experience of France and her medical supplies during the COVID-19 pandemic is but another demonstration of just-in-time economics overruling security concerns (Onishi and Méheut, 2020).

Through the PPP, it seems that governments and the private sector have lately started to agree that the CIIs need considerably better protection. While a more comprehensive approach to CII security is being worked out, what are the possible solutions for a 'transition period'? Informal, real-time contact13 appears to be a sensible solution to the problem of security as a public good for critical infrastructures (De Brujine and Van Eeten, 2007). In theory, it would fit well with the mode of governance endorsed by the PPP doctrine. In practice, it still has to overcome the turf jealousies of public administrations, and the fear – typical of the private sector – of helping the competition or sharing one's own knowledge. Ultimately, it is somehow inevitable that cybersecurity should become everyone's concern. Hence, governments should start doing more to educate their citizens and businesses to cope with disruption and malfunctioning, and should resume investing considerable money in the protection of CIIs. The COVID-19 experience should have made this conclusion clear to everyone.
NOTES

1. The distinction between 'critical infrastructures' and 'critical information infrastructures' is that in the latter the emphasis is on the computers and their networks that permit remote control and management of physical infrastructures.
2. Evidence of this can be seen quite clearly, even in real time, on the websites operated by Deutsche Telekom (www.sicherheitstacho.eu/) and by Akamai (www.akamai.com/html/technology/dataviz1.html). From the latter it is even possible to download an app providing continuous updates.
3. There is an important caveat that should be acknowledged about the issue of CII 'security' – namely, that of 'securitization'. This refers to enshrouding a social or technical concern with the 'sacredness' of national security, thus removing it from public debate and scrutiny. The original idea was developed by Buzan, Wæver and De Wilde (1998) and has been amply considered by other scholars (e.g., Dunn Cavelty, 2007).
4. By this term I refer to the now dominant practice of using the Internet protocol, TCP/IP, in the business sector. However, while several types of communications can be encrypted, information to and from monitoring stations and SCADA systems (see the next section) is never thus protected and can easily be eavesdropped on by anyone with quite simple 'sniffer' technologies.
5. It is the very subsidiarity principle that demands that the Commission be actively involved in monitoring the CIIs, and the European Investment Bank in financing CII development in new member states and neighbors (e.g., Fritzon et al., 2007).
6. See Section 2.2 in Chapter 8.
7. Making the reactor's core overheat is another story, of course. However, as Stuxnet demonstrated in 2010–11, it is possible. What makes the difference, in this case, are the skills and resources required of attackers to develop something at the same level as Stuxnet.
8. Several (though not all) of these hackers are reportedly from Eastern Europe and Russia.
9. Provided such users are smart enough to adopt at least basic protections against script-kiddies and the average hacker.
10. Telecom services in Europe and elsewhere were state monopolies for a long time, but their privatization was successful because, following the example of the United States, the monopolies were soon broken up by governments – in Europe with the eager help of the European Commission. The advent of the Internet in the communication business would have upset the status quo anyway.
11. The World Bank (2016), 'A World Bank Resource for PPPs in Infrastructure', accessed 22 January 2021 at https://ppp.worldbank.org/public-private-partnership/spanhomespanimg-althome-srcpppsitesworldbankorgpppfileshome-icon-redpng/home.
12. See also Chapter 8, Section 2.3.
13. This refers to the well-established habit among ICT staff of informing and consulting each other about new threats. Indeed, this is exactly the informal procedure most used in epidemiology during epidemics.
REFERENCES

Abele-Wigert, Isabelle (2006), 'Challenges governments face in the field of critical information infrastructure protection (CIIP): stakeholders and perspective', in Myriam Dunn and Victor Mauer (eds), International CIIP Handbook 2006, Vol. II: Analyzing Issues, Challenges, and Prospects, Zurich: Swiss Federal Institute of Technology Zurich, pp. 55–68.
Abele-Wigert, Isabelle and Myriam Dunn (eds) (2006), International CIIP Handbook 2006, Vol. I: An Inventory of 20 National and 6 International Critical Information Infrastructure Protection Policies, Zurich: Swiss Federal Institute of Technology Zurich.
Almklov, Petter G. and Stian Antonsen (2010), 'The commoditization of societal safety', Journal of Contingencies and Crisis Management, 18(3), 132–44.
Andersson, Jan and Andreas Malm (2006), 'Public–private partnerships and the challenge of critical infrastructure protection', in Myriam Dunn and Victor Mauer (eds), International CIIP Handbook 2006, Vol. II: Analyzing Issues, Challenges, and Prospects, Zurich: Swiss Federal Institute of Technology Zurich, pp. 139–67.
Antonsen, Stian, Petter G. Almklov, Jørn Fenstad and Agnes Nybø (2010), 'Reliability consequences of liberalization in the electricity sector: existing research and remaining questions', Journal of Contingencies and Crisis Management, 18(4), 208–19.
Bailes, Alyson J.K. and Isabel Frommelt (eds) (2004), Business and Security: Public–Private Sector Relationships in a New Security Environment, New York: Oxford University Press.
Bebchuk, Lucian and Mark Roe (1999), 'A theory of path dependence in corporate ownership and governance', Discussion Paper No. 266, Harvard Law School, Cambridge, MA, accessed 22 January 2021 at http://www.law.harvard.edu/programs/olin_center/papers/pdf/266.pdf.
Block, Fred and Margaret R. Somers (2014), The Power of Market Fundamentalism: Karl Polanyi's Critique, Cambridge, MA: Harvard University Press.
Borzel, Tania and Thomas Risse (2005), 'Public–private partnerships: effective and legitimate tools of transnational governance?', in Edgar Grande and Louis W. Pauly (eds), Complex Sovereignty: Reconstituting Political Authority in the 21st Century, Toronto: University of Toronto Press, pp. 195–216.
Broadbent, Jane and Richard Laughlin (2003), 'Public private partnerships: an introduction', Accounting, Auditing & Accountability Journal, 16(3), 332–41.
Buzan, Barry and Lene Hansen (2009), The Evolution of International Security Studies, Cambridge, UK: Cambridge University Press.
Buzan, Barry, Ole Wæver and Jaap De Wilde (1998), Security: A New Framework for Analysis, Boulder, CO: Lynne Rienner Publishers.
Cohen, Fred (2010), 'What makes critical infrastructures critical', International Journal of Critical Infrastructure Protection, 3(2), 53–4.
Cybersecurity & Infrastructure Security Agency (n.d.), 'Critical infrastructure sectors', accessed 22 January 2021 at https://www.cisa.gov/critical-infrastructure-sectors.
David, Paul A. (1985), 'Clio and the economics of QWERTY', The American Economic Review, 75(2), 332–7.
De Brujine, Mark and Michel van Eeten (2007), 'Systems that should have failed: critical infrastructure protection in an institutionally fragmented environment', Journal of Contingencies and Crisis Management, 15(1), 18–29.
Dixon, John and Alexander Kouzmin (2003), 'Public domains, organizations and neo-liberal economics: from de-regulation and privatization to the necessary "smart state"', in Rainer Koch and Peter Conrad (eds), New Public Service, Wiesbaden: Gabler Verlag-Springer, pp. 263–91.
Donahue, John D. (1989), The Privatization Decision: Public Ends, Private Means, New York: Basic Books.
Dunn Cavelty, Myriam (2007), Cyber-Security and Threat Politics: US Efforts to Secure the Information Age, New York: Routledge.
Dunn Cavelty, Myriam and Manuel Suter (2009), 'Public–private partnerships are no silver bullet: an expanded governance model for critical infrastructure protection', International Journal of Critical Infrastructure Protection, 4(2), 179–87.
European Commission (2004), 'Green paper on public–private partnerships and community law on public contracts and concessions', COM/2004/0327 final, accessed 22 January 2021 at http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52004DC0327&from=EN.
European Commission, DG Home Affairs (n.d.), 'Migration and home affairs: glossary', accessed 22 January 2021 at http://ec.europa.eu/dgs/home-affairs/e-library/glossary/index_c_en.htm.
European Council (2008), 'Council Directive 2008/114/EC of 8 December on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection', Official Journal of the European Union, L 345/75, accessed 22 January 2021 at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2008.345.01.0075.01.ENG.
Fayad, Mohamed E. (2002), 'How to deal with software stability', Communications of the ACM, 45(4), 109–12.
Feigenbaum, Harvey B. and Jeffrey R. Henig (1994), 'The political underpinnings of privatization: a typology', World Politics, 46(2), 185–208.
Fitzgerald, Randall (1988), When Government Goes Private: Successful Alternatives to Public Services, New York: Universe Books.
Forrer, John, James Edwin Kee, Kathryn E. Newcomer and Eric Boyer (2010), 'Public–private partnerships and the public accountability question', Public Administration Review, 70(3), 475–84.
Fritzon, Asa, Kristin Ljungkvist, Arjen Boin and Mark Rhinard (2007), 'Protecting Europe's critical infrastructures: problems and prospects', Journal of Contingencies and Crisis Management, 15(1), 30–41.
Giacomello, Giampiero (ed.) (2013), Security in Cyberspace: Targeting Nations, Infrastructures, Individuals, New York: Lexington Books.
Goodwin, Craufurd L. (1991), Economics and National Security: A History of Their Interaction, Durham, NC: Duke University Press.
Griffin, Larry J. (1993), 'Narrative, event-structure analysis, and causal interpretation in historical sociology', American Journal of Sociology, 98(5), 1094–133.
Grimsey, Darrin and Mervyn Lewis (2002), 'Evaluating the risks of public private partnership for infrastructure projects', International Journal of Project Management, 20(2), 107–18.
Grimsey, Darrin and Mervyn Lewis (2007), Public Private Partnerships: The Worldwide Revolution in Infrastructure Provision and Project Finance, Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing.
Henry, Jody-Ann M. (2006), 'The wave of privatization in the 1980s and 1990s: was it inevitable?', 30 November, http://dx.doi.org/10.2139/ssrn.1293910.
Hodge, Graeme A. and Carsten Greve (2007), 'Public–private partnerships: an international performance review', Public Administration Review, 67(3), 545–58.
Hyslop, Maitland (2007), Critical Information Infrastructures: Resilience and Protection, New York: Springer Science.
Kissel, Richard (ed.) (2013), 'NIST IR 7298 Revision 2: glossary of key information security terms', National Institute of Standards and Technology (NIST), US Department of Commerce, accessed 22 January 2021 at https://csrc.nist.gov/publications/detail/nistir/7298/rev-2/archive/2013-06-05.
Kouzmin, Alexander (2009), 'Market fundamentalism, delusions and epistemic failures in policy and administration', Asia-Pacific Journal of Business Administration, 1(1), 23–39.
Linder, Stephen H. (1999), 'Coming to terms with the public–private partnership: a grammar of multiple meanings', The American Behavioral Scientist, 43(1), 35–51.
Lukasik, Stephen J., Seymour E. Goodman and David W. Longhurst (2003), 'Special issue: protecting critical infrastructures against cyber-attack', Adelphi Paper, No. 359.
Martin, Brendan (1993), In the Public Interest: Privatization and Public Sector Reform, London: Zed Books Ltd.
Medida, Srinivas, N. Sreekumar and Krishna V. Prasad (1998), 'SCADA-EMS on the Internet', in Proceedings of EMPD '98: 1998 International Conference on Energy Management and Power Delivery, IEEE, Vol. 2, pp. 656–60.
Megginson, William L. and Jeffrey M. Netter (2001), 'From state to market: a survey of empirical studies on privatization', Journal of Economic Literature, 39(2), 321–89.
Metzger, Jan (2004), 'The concept of critical infrastructure protection', in Alyson J.K. Bailes and Isabel Frommelt (eds), Business and Security: Public–Private Sector Relationships in a New Security Environment, New York: Oxford University Press, pp. 197–209.
Miller, Lee M., Robert Antonio and Alessandro Bonanno (2011), 'Hazards of neoliberalism: delayed electric power restoration after Hurricane Ike', The British Journal of Sociology, 62(3), 504–22.
Newland, Samuel J. and Clayton K.S. Chun (2011), The European Campaign: Its Origins and Conduct, Carlisle, PA: Strategic Studies Institute, U.S. Army War College.
Newlove-Eriksson, Lindy, Giampiero Giacomello and Johan Eriksson (2018), 'The invisible hand? Critical information infrastructures, commercialisation and national security', The International Spectator, 53(2), 124–40.
Onishi, Norimitsu and Constant Méheut (2020), 'How France lost the weapons to fight a pandemic', The New York Times, 17 May, accessed 22 January 2021 at https://www.nytimes.com/2020/05/17/world/europe/france-coronavirus.html?smid=em-share.
Palm, Jenny (2008), 'Emergency management in the Swedish electricity market: the need to challenge the responsibility gap', Energy Policy, 36(2), 843–9.
Perrow, Charles ([1984] 2011), Normal Accidents: Living with High Risk Technologies, Princeton, NJ: Princeton University Press.
Pirie, Madsen (1988), Privatization: Theory, Practice and Choice, Aldershot: Wildwood House Limited.
Portnoy, Michael and Seymour Goodman (eds) (2009), Global Initiatives to Secure Cyberspace: An Emerging Landscape, New York: Springer.
Redenius, Charles (1983), 'The supply-side alternative: Reagan and Thatcher's economic policies', The Journal of Social, Political, and Economic Studies, 8(2), 189–209.
Rothschild, Emma (2007), 'What is security?', in Barry Buzan and Lene Hansen (eds), International Security. Volume III: Widening Security, London: SAGE, pp. 1–35.
SANS Institute (2013), 'SCADA and process control security survey', 1 February, accessed 22 January 2021 at www.sans.org/reading_room/analysts_program/sans_survey_scada_2013.pdf.
Savas, Emanuel S. (1987), Privatization: The Key to Better Government, Chatham, NJ: Chatham House Publishers.
Schneier, Bruce (2003), Beyond Fear: Thinking Sensibly About Security in an Uncertain World, New York: Copernicus Books.
Sheil, Christopher (2000), Water's Fall: Running the Risks with Economic Rationalism, Annandale, NSW: Pluto Press Australia.
Sydow, Jörg, Georg Schreyögg and Jochen Koch (2009), 'Organizational path dependence: opening the black box', Academy of Management Review, 34(4), 689–709.
Teisman, Geert R. and Erik-Hans Klijn (2002), 'Partnership arrangements: governmental rhetoric or governance scheme?', Public Administration Review, 62(2), 197–205.
Van Eeten, Michel and Johannes M. Bauer (2009), 'Emerging threats to internet security: incentives, externalities and policy implications', Journal of Contingencies and Crisis Management, 17(4), 222–32.
Vickers, John and Vincent Wright (1988), 'The politics of industrial privatization in Western Europe: an overview', West European Politics, 11(4), 1–30.
Wang, Yin (2009), 'A broken fantasy of public–private partnership', Public Administration Review, 69(4), 779–82.
Wettenhall, Roger (2003), 'The rhetoric and reality of public–private partnerships', Public Organization Review, 3(1), 77–107.