Technology and Agency in International Relations
This book responds to a gap in the literature in International Relations (IR) by integrating technology more systematically into analyses of global politics. Technology facilitates, accelerates, and automates, and exercises capabilities that exceed human abilities. And yet, within IR, the role of technology often remains under-studied. Building on insights from science and technology studies (STS), assemblage theory and new materialism, this volume asks how international politics are made possible, knowable, and durable by and through technology. The contributors provide empirically rich and pertinent accounts of a variety of technologies relevant to the discipline, including drones, algorithms, satellite imagery, border management databases, and blockchains. Problematizing various technologically mediated issues, such as secrecy, violence, and questions of how authority and evidence become constituted in international contexts, this book will be of interest to scholars in IR, in particular those who work in the subfields of (critical) security studies, International Political Economy, and Global Governance.

Marijn Hoijtink is an Assistant Professor in International Relations at VU Amsterdam.

Matthias Leese is a Senior Researcher at the Center for Security Studies (CSS), ETH Zurich.
Emerging Technologies, Ethics and International Affairs

Series Editors: Steven Barela, Jai C. Galliott, Avery Plaw, Katina Michael
This series examines the crucial ethical, legal and public policy questions arising from or exacerbated by the design, development and eventual adoption of new technologies across all related fields, from education and engineering to medicine and military affairs. The books revolve around two key themes:

• Moral issues in research, engineering and design
• Ethical, legal and political/policy issues in the use and regulation of technology
This series encourages submission of cutting-edge research monographs and edited collections with a particular focus on forward-looking ideas concerning innovative or as yet undeveloped technologies. Whilst there is an expectation that authors will be well grounded in philosophy, law or political science, consideration will be given to future-orientated works that cross these disciplinary boundaries. The interdisciplinary nature of the series editorial team offers the best possible examination of works that address the ‘ethical, legal and social’ implications of emerging technologies.

For more information about this series, please visit: https://www.routledge.com/Emerging-Technologies-Ethics-and-International-Affairs/book-series/ASHSER1408

Emerging Technologies in Diverse Forensic Sciences
Ronn Johnson

Cyber Attacks and International Law on the Use of Force: The Turn to Information Ethics
Samuli Haataja

Global Environmental Governance in the Information Age: Civil Society Organizations and Digital Media
Jérôme Duberry
Technology and Agency in International Relations

Edited by Marijn Hoijtink and Matthias Leese
First published 2019 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

and by Routledge
52 Vanderbilt Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 selection and editorial matter, Marijn Hoijtink and Matthias Leese; individual chapters, the contributors

The right of Marijn Hoijtink and Matthias Leese to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

With the exception of Chapter 7, no part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Chapter 7 of this book is available for free in PDF format as Open Access from the individual product page at www.routledge.com. It has been made available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-1-138-61539-7 (hbk)
ISBN: 978-0-429-46314-3 (ebk)

Typeset in Times New Roman by Integra Software Services Pvt. Ltd.
Contents

List of tables and figures
List of contributors
Foreword

1 How (not) to talk about technology: International Relations and the question of agency
Matthias Leese & Marijn Hoijtink

2 Co-production: The study of productive processes at the level of materiality and discourse
Katja Lindskov Jacobsen & Linda Monsees

3 Configuring warfare: Automation, control, agency
Matthias Leese

4 Security and technology: Unraveling the politics in satellite imagery of North Korea
Philipp Olbrich

5 Vision, visuality, and agency in the US drone program
Alex Edney-Browne

6 What does technology do? Blockchains, co-production, and extensions of liberal market governance in Anglo-American finance
Malcolm Campbell-Verduyn

7 Who connects the dots? Agents and agency in predictive policing
Mareile Kaufmann

8 Designing digital borders: The Visa Information System (VIS)
Georgios Glouftsios

9 Technology, agency, critique: An interview with Claudia Aradau
Claudia Aradau, Marijn Hoijtink, & Matthias Leese

Index
Tables and Figures

Tables
3.1 Levels of Automation
3.2 Selected Levels of Automation vis-à-vis loop tasks; scenario: UAV flying over operational zone; task categories based on

Figures
6.1 Co-producing authority in global governance
Contributors
The Editors

Marijn Hoijtink is an Assistant Professor in International Relations at VU Amsterdam. Her research interests include emerging security technologies and their relation to the politics of risk, militarism, and weapons research, and the global circulation of security and military technologies. She has recently received a four-year Veni grant from The Netherlands Organisation for Scientific Research (NWO) to study the politics of engineering lethal autonomous weapons systems.

Matthias Leese is a Senior Researcher at the Center for Security Studies (CSS), ETH Zurich. His research focuses primarily on the social effects produced at the intersection between security and technology, and pays specific attention to the normative repercussions of new security technologies across society, in both intended and unintended forms. His work covers various application contexts of security technologies, including airports, borders, policing, and R&D activities.
The Contributors

Claudia Aradau is Professor of International Politics in the Department of War Studies, King’s College London. Her research has developed a critical political analysis of security practices and their transformations. Among her publications are Politics of Catastrophe: Genealogies of the Unknown (with Rens van Munster, 2011) and Critical Security Methods: New Frameworks for Analysis (co-edited with Jef Huysmans, Andrew Neal, and Nadine Voelkner, 2014). Her recent work examines security assemblages in the digital age, with a particular focus on the production of (non)knowledge. She is currently writing a book with Tobias Blanke on algorithmic reason and the new government of self and other. She is on the editorial collective of Radical Philosophy. She is also chair of the Science, Technology and Art in International Relations (STAIR) section of the International Studies Association (2018–2019).
Malcolm Campbell-Verduyn is Assistant Professor in International Political Economy at the University of Groningen. His research explores the roles of emergent technologies, non-state actors, and expert knowledge in contemporary global governance. He is the editor of Bitcoin and Beyond: Cryptocurrencies, Blockchains and Global Governance (Routledge, 2018) and author of Professional Authority After the Global Financial Crisis: Defending Mammon in Anglo-America (Palgrave Macmillan, 2017).

Alex Edney-Browne is a PhD candidate in International Relations at the University of Melbourne. Her thesis examines the emotional and psychosocial effects of drone warfare for people living under drones in Afghanistan and US Air Force drone veterans.

Georgios Glouftsios works as a Postdoctoral Fellow at the School of International Studies, University of Trento, Italy. His research is situated at the intersections of the broader transdisciplinary fields of Critical Security Studies and Science and Technology Studies. More specifically, his current research explores the security dimension of the Copernicus space programme by focusing on how satellite technologies are used in the context of EU CSDP missions. In the past, his research focused on the design, use, and operational management of large-scale IT systems deployed for border security, law enforcement, and migration management purposes in the EU.

Mareile Kaufmann has been studying digital technologies and their dialogue with society for almost a decade. She teaches and researches as a postdoctoral researcher at Oslo University (Criminology) and, in a secondary position, at the Peace Research Institute Oslo. Mareile’s current projects look at understandings of crime in digitized societies, as well as the use of technologies in policing, surveillance, and digital countercultures.

Katja Lindskov Jacobsen is a senior researcher at the University of Copenhagen in the Department of Political Science’s Centre for Military Studies. Her research centres on contemporary interventionism in the global South – with a specific interest in Africa – in part focusing on familiar institutions, like the UNHCR, UNODC, or UN peace operations, and in part looking at the role of new technologies (biometrics) and/or domains (maritime) of interventions. She is the author of The Politics of Humanitarian Technology (2015) and her research has been published in International Affairs, Security Dialogue, Citizenship Studies, and Journal of Intervention and Statebuilding, among others.

Linda Monsees is a Postdoctoral Fellow at the Goethe University Frankfurt Cluster of Excellence “The Formation of Normative Orders.” Prior to that, she worked at the Center for Advanced Internet Studies and the Bremen Graduate School of Social Sciences. Her research focuses on networked technology, especially digital encryption. She combines perspectives from science and technology studies and political theory in order to think about new forms of democratic practices in technological societies.
Philipp Olbrich pursues a PhD in International Relations at the University of Groningen and is the managing editor of the Journal of International Humanitarian Action. His doctoral dissertation examines the use and implications of commercial satellite imagery for the governance of human security. Further research interests include the role of technology in global security, the politics of outer space, and the conflict on the Korean peninsula.
Foreword
The idea for this book was born when we organized a section on “The Role of Technology in IR” for the European International Studies Association (EISA) annual Pan-European Conference in September of 2017 in Barcelona. Not only were we overwhelmed by the large number of high-quality contributions that we ended up with for the section, but what struck us in particular was how many of our colleagues either explicitly or implicitly dealt with the question of how technology affects our ability to act. As this theme was also prevalent in our own work, we felt that it was worth taking up and exploring further. Two years after the conference in Barcelona, this book should be seen as the result of a sustained and intense (and very much still ongoing) conversation about the relationship between technology and agency, how this relationship should be studied, and how technological agency comes to matter in the context of international politics. As editors, we feel privileged to be part of this conversation, to moderate the brilliant research and ideas of our colleagues, and to be in a position to channel the diverse interventions into a common framework.

We would like to take this opportunity to say thank you to everyone who has accompanied and supported the process of authorship and editorship. Without the critical engagement and the challenges put forward by many great minds, this book would not have turned out the way that it has. Much appreciation goes to EISA and Victoria Basham and Cemal Burak Tansel (the program chairs for the Barcelona conference) for providing us with the space to start this conversation, and of course to everyone who contributed to the section. To Rob Sorsby and everybody else at Routledge, who, from the start, have been enthusiastic about our plans and always very supportive along the way. To the two anonymous reviewers who spotted a number of weak points in an early book outline and helped us to address these. To Myriam Dunn Cavelty for critical and encouraging engagement with messy work-in-progress. To Claudia Faltermeier for tireless work on the Endnote database. But most of all we are of course indebted to the contributors to this volume, who have made our lives easy by not only providing high-quality
work, but also by doing so in the most punctual, responsive, and respectful manner. This includes Claudia Aradau, who did not hesitate when we asked her to conclude the book with an interview that reflects on the wider implications of studying technology and agency in IR, and went out of her way to carefully engage with the entire draft manuscript.

Amsterdam/Zurich, 31 October 2018
1 How (not) to talk about technology: International Relations and the question of agency

Matthias Leese & Marijn Hoijtink
In recent years, advances in both physical (i.e. engineering and robotics) and digital (i.e. artificial intelligence and machine learning) aspects of technology have led to the development of powerful new technologies such as so-called Autonomous Weapons Systems (AWS), algorithmic software tools for counterterrorism and security, or “smart” CCTV surveillance. These and other technologies have potentially profound repercussions for the ways in which action in international politics becomes possible, the ways in which relations between states become structured, and the ways in which wars are fought, security is produced, and peace is made and maintained.

Accordingly, algorithmic and robotic technologies1 have received much attention from the discipline of International Relations (IR), but also from the policymaking world, the media, and the public. Debates predominantly revolve around the claim that such technologies could to a large extent act autonomously, i.e. without human input when it comes to tasks like identifying and engaging military targets, searching for indicators of terrorist activity within large datasets, or analyzing live video footage for deviant behavior. This means that technologies are ascribed the general capacity to act and to create an impact in the world. In other words, they are believed to have agency that is predicated upon the ability to collect information about the world through sensors or data input, and to interact with the world on the basis of this information.

Such an assumption would run counter to the modernist presupposition that agency (defined by the Oxford Dictionary as “action or intervention producing a particular effect”) could be found exclusively in humans, as humans would be the only species capable of reflexive thinking, and therefore of self-consciousness and free will. From this perspective, ascribing agency to technologies (or other non-human elements) creates a set of quite fundamental problems: if – staying within the above examples – machines were to make decisions about what to define as a legitimate military target, who should be considered a potential terrorist, or what kind of behavior would warrant interventions by state authorities, then who could and should be held morally, legally, politically, or economically accountable and responsible for these decisions and their consequences? In turn, these
and similar considerations have direct implications for international politics. Should AWS, for example, be preventively banned or integrated into existing non-proliferation regimes? How are international security practices informed and structured by global data collection programs and algorithmic number-crunching? And what kind of public order is being engendered by behavioral analysis in CCTV systems, possibly combined with other features such as automated face recognition software? Presupposed machine agency in the sense of autonomous action would seriously challenge the status of (international) politics as a domain of human activity.

A closer look at how technologies “act,” however, usually reveals that they do not do so in an autonomous fashion after all. Military drones are operated and supervised by a whole team of human staff on the ground. Counterterrorism software tools need to be developed, implemented, maintained, and fed with data on a daily basis by human analysts. And alerts produced by surveillance systems still need to be validated and acted upon by human security officers. This means that most technologies are, in fact, working with humans rather than in the place of humans. They assist, pre-structure, point out, and make suggestions. They do the “heavy lifting,” take care of both complex and challenging tasks as well as dull and monotonous ones, and sometimes they “extend” human cognition by giving us access to additional information that we cannot sense ourselves. But in the end, humans and technologies enable each other in order to create an impact in the world. Technologies should therefore, in the sense of the workload distribution that characterizes them, best be conceptualized as “socio-technical systems” (Law, 1991) that are comprised of heterogeneous human and non-human elements.

Such an understanding of technology – while acknowledging the complexity and context sensitivity of (political) action – does not, however, resolve the question of agency in relation to algorithmic and robotic technologies. Clearly, when machines or computer systems do things that their human operators cannot do (or do not want to do), they play a role in how action is constituted and how meaning is produced. Hence, there is a need to study the ways in which technologies have, and exercise, agency. Technologies are political agents – not in a liberal sense that would presuppose that they act as conscious subjects whose actions are predicated upon volition and free will, but in the sense that they have effects on political action.

This may seem a banal claim. Yet, we find that in the discipline of IR two broader tendencies have long prevented such a conceptualization of technology within international politics. The first tendency is the predominantly determinist reading of technology throughout the history of IR. From classic works such as Ogburn’s Technology and International Relations (1949b) or Skolnikoff’s Science, Technology, and American Foreign Policy (1967) to more recent contributions, most analyses are in fact predicated upon the assumption that technology is either fully controlled by humans or alternatively placed outside of human agency (McCarthy, 2013, 2018). While neatly fitting in with a prevailing scientific
understanding of analysis (i.e. causal and mechanistic) throughout mainstream IR (Jackson, 2017), such a treatment of technology does not, however, sit well vis-à-vis algorithmic and robotic technologies and the acknowledgment of complexity and human-machine interactions within socio-technical systems. In order to overcome the externalization of technology as an explanatory variable in IR and to render it “endogenous” to international politics, a number of scholars have thus suggested unpacking technology by foregrounding its construction, implementation, and use. Such a holistic approach would then enable us to account for the politics that go into technology, as well as for the politics that emanate from technology (e.g., Herrera, 2003; Fritsch, 2011).2

The second tendency that has prevented a stronger analytical appreciation of technology in international politics is the conceptualization of agency within IR. IR scholars have long been concerned with the “agent-structure problem” (Wendt, 1987), i.e. the question of whether human action should be seen as the decisive element for the analysis of international politics, or whether human action is always already pre-defined and constrained by the social structures in which it is embedded. In an attempt to overcome this duality of agency and structure, Jackson and Nexon (1999) have proposed to turn to a relational analysis of action that, rather than asking what international actors do, foregrounds who these actors are and how their agency is produced. This relationalist turn has paved the way for a re-appreciation of (political) agency as emergent and dynamic rather than static and pre-determined. Moreover, it allows us to move away from an understanding of agency as an attribute (that would need to be located within someone or something) and towards an understanding of agency as a product of interaction. In other words: agency does not precede action, but action constitutes agency. Most importantly, however, the acknowledgment that agency need not be exclusive to humans enables us to account for technology and its politicality through the study of interaction within socio-technical systems.

The aim of this book – based on the premises (1) that technologies need to be unpacked in order to render them political, and (2) that agency is something that is produced through interaction – is to ask how technologies (co-)produce, alter, transform, and distribute agency within international politics. Working through the notion of agency and its transformations against the backdrop of algorithmic and robotic technologies thereby allows us to reconsider the ways in which technology has been treated in IR. A focus on agency moreover serves as a common denominator for the variegated theoretical and conceptual approaches that scholars in IR have more recently taken up to study technology, including the likes of “Social Construction of Technology” (SCOT, Bijker et al., 1987), “Actor-Network Theory” (ANT, Callon, 1984; Latour, 2005), “co-production” (Jasanoff, 2004), “performativity” (Butler, 2010), “vibrancy” (Bennett, 2010), “mangle” (Pickering, 1993), “intra-action” (Barad, 2007), “configuration” (Suchman, 2007), or post-human approaches (Cudworth and Hobden, 2013).
The contributions to the book provide in-depth explorations of the entangled and multi-layered ways in which humans and technologies interact, work together, and mutually empower and/or constrain each other. In this vein they offer a variety of theoretical and empirical accounts of Technology and Agency in International Relations, including questions of theory-building and empirical analysis that emanate from Jasanoff’s notion of “co-production” (Jacobsen and Monsees), the boundary work between humans and non-humans in military weapons systems (Leese), the mediation of security governance through the production and analysis of satellite imagery (Olbrich), the effects of practices of drone warfare on how military operators perceive the world (Edney-Browne), the role of blockchain technology for international financial regulation (Campbell-Verduyn), the design of algorithms for crime forecasting and intelligence (Kaufmann), and the emergence of large IT infrastructure systems for border management (Glouftsios). The book concludes with an interview with Claudia Aradau, who discusses technology and agency in relation to her own work on materiality, Big Data, and algorithmic security, and explores a number of questions concerning politics, ethics, and methodology vis-à-vis the discipline of IR.

This introduction proceeds in three steps. First, we briefly revisit IR’s grand theoretical debates (i.e. realism, liberalism, and constructivism) and pay specific attention to the ways in which technology within these frameworks has been treated in a deterministic and externalized fashion. Subsequently, we discuss the agent-structure debate and the turn towards relational analyses. We then explore more recent influences from STS and New Materialism into IR, and analyze how these approaches help us to study technology and agency in international politics.
Technology in IR: determinism and externalization

IR’s answers to “the question concerning technology,” to borrow from Heidegger’s (1977) seminal essay, have come with quite a degree of variance, depending on assumptions about the essence of the international system, the possibilities and conditions for change or stability, and the general relationship between technology, politics, and society. As Ogburn (1949a: 18) argued as early as 1949, “in international relations the variables often stressed are leaders, personalities, social movements, and organizations. These are important variables in explaining particular actions and specific achievements. But because of their significance the variations of technological factors should not be obscured.” In Technology and International Relations – an early attempt to create a systematized account of the role of technology in global affairs – Ogburn (1949a: 16) illustrates the presumed causal influence of technological tools on world politics as follows:

Few doubt that the early acquisition of steam power by the British before other states acquired it helped them to become the leading world
power of the nineteenth century and thereby made the task of British diplomacy much easier. Britain’s steel mills, with their products for peace and for war, enabled her to spread much more effectively the ways of European civilization into Africa and southern and southeastern Asia.

Ogburn’s account notably set the tone for ensuing realist engagements with technology – and particularly military technology – as a capabilities-enhancing variable that provides states with a power edge vis-à-vis other states in the international arena. For realist and neorealist IR scholars, the international system is characterized by an anarchic structure that produces fierce competition between rivaling nation-states (Morgenthau, 1948; Waltz, 1979), the absence of rules (and/or their enforceability), the will to survive, and the lack of certainty about the intentions of other states (Mearsheimer, 1994). As the hierarchy within the international system is determined by the power capacities of states, the question of power and its acquisition is central. Power is in this sense usually conceptualized in terms of military and economic capacities. Within realist and neorealist accounts of international politics, technology is then mainly treated as a tool that enhances state power, for instance through upgrades of military equipment (e.g., longer missile range, higher firing rates, more protective armor), or improved efficiency of economic means of production.

In the realist paradigm, technology has the capacity to become a game changer within the international system, and its study was put center stage by many during the Cold War period. Against the backdrop of technological competition between the West and the East (e.g., the arms race, the space race), the (sub-)discipline of Strategic Studies primarily revolved around the study of the influence of military technologies on power distribution within global politics. As Buzan (1987: 6) argues, “the subject matter of Strategic Studies arises from two fundamental variables affecting the international system: its political structure, and the nature of the prevailing technologies available to the political actors within it.” Whereas questions of the political structure of the international sphere were considered a task for traditional IR, the technological component of international security had to be, according to Buzan (1987: 8), discussed by scholars of Strategic Studies focusing on the “variable of military technology.”

Independent of whether one considers the study of technology to be a unique feature of the dedicated (sub-)field of Security Studies, or alternatively as a core concern of IR, the distinction made by Buzan indicates that the political structure of the international system is itself not affected by the availability of technology – an argument that thus treats technology as an externalized explanatory variable for change/stability in the international system. This does, of course, not mean that technology would not be seen as important for international politics. For realists, the political structure influences the
development and implementation of technology, and technology, in turn, is widely regarded as a factor determining the military capacities of states and their strategic options in an international system that is characterized by anarchy. During the Cold War period, large parts of the IR literature were in fact dominated by questions about military capacity and the control thereof, with a particular focus on nuclear technology and the implications of the availability of the atomic bomb as an unprecedented means of mass destruction. After the end of the Cold War, the focus of analysis – following new military strategies vis-à-vis newly available technologies – shifted increasingly towards the incorporation of information and communication technologies (ICTs) into military equipment in order to enhance the warfighting capacities of the US military. This so-called Revolution in Military Affairs (RMA) corresponded closely with more risk-averse political strategies of Western states that sought to avoid military fatalities, as well as a turn towards more specialized high-tech troops that would be able to conduct combat with precision and efficiency (Shaw, 2005). Within concepts of RMA, information is regarded as the key component that creates an advantage on the battlefield, as it enables better situational awareness and enhanced decision-making – both in combat and in military planning (Gray, 2005).

While a (neo-)realist research agenda on technology is still very much focused on questions of how technological advancements alter military capacities and therefore potentially bring about changes in international politics that are predicated upon state power, the increased interest in ICTs bears an interesting parallel to liberal IR approaches to technology. Starting from a rather different analytical point of departure, liberal scholars posit that the international system undergoes a continuing transformation into a networked, interconnected, and interdependent global structure that is decisively distinct from the anarchic assumptions of the realist tradition (Rosenau, 1990). Within such processes of transformation, technology is conceptualized as a major driver that connects actors at multiple levels. As liberal scholars argue, the time-space compression of globalization has to a large extent been enabled and accelerated by ICTs and mobility and transportation technologies. These technologies, so the argument goes, have elevated cultural and economic exchange between societies to an unprecedented level and have thereby strengthened cultural ties on a global scale (Rosenau and Singh, 2002). Rosenau (1990: 7) describes the “postinternational politics” of a globalized world as

[S]horthand for the changes wrought by global turbulence; for an ever more dynamic interdependence in which labor is increasingly specialized and the number of collective actors thereby proliferates; for the centralizing and decentralizing tendencies that are altering the identity and number of actors on the world stage; for the shifting orientations that are transforming authority relations among the actors; and for the
dynamics of structural bifurcation that are fostering new arrangements through which the diverse actors pursue their goals.

Whereas most liberal scholars share a general optimism about the possibilities of an interconnected world for the spread of common norms and values and the general conditions for peace, others have also pointed to the risks emanating from global connectivity. For example, Der Derian (2003) foregrounds how information technology has empowered non-state and non-Western actors, but has at the same time contributed to the professionalization of transnational organized crime and terrorism. In his work on the rise of the network society, Castells (2000) goes as far as to claim that the structure of the international system has turned away from one in which states are the dominant actors, towards one that is founded on flows and networks instead of static and sedimented institutions. In a globalized and interdependent world, international organizations, NGOs, or multi-national corporations should be recognized as relevant actors on a global scale, as their role in the regulation of global issues bears witness to novel and complex structures at the international level. For Fukuyama (1992), in such a world, the increasing availability of technological means for military purposes and the ensuing destructive potential of such military technologies would lead to a redistribution of power in the sense that differences between actors would be leveled and the international system would become geared towards more cooperation rather than conflict.

As technology plays a considerable part in liberal IR theory as the driver of systemic change, liberalism can be viewed as a helpful attempt to theorize the status of technology through phenomena such as interdependence, cooperation, and transnationalism. However, it should be kept in mind that technology is only one among multiple factors that engender such developments. Political programs, social change, and cultural influences are regarded to be just as transformative as the influences of new technologies when it comes to processes of globalization. For Rosenau, for instance, the education and politicization of the population are key when it comes to changes in world politics. As he argues,

although world politics would not be on a new course today if the microelectronic and other technological revolutions had not occurred, if the new interdependence issues had not arisen, if states and governments had not become weaker, and if subgroupism had not mushroomed, none of these dynamics would have produced parametric change if adults in every country and in all walks of life had remained essentially unskilled and detached with respect to global affairs. (Rosenau, 1990: 13)

Finally, a different approach to technology in world politics is put forward by constructivist positions. As constructivism, generally speaking, presupposes
that the world is “made” by human beings (Onuf, 1989), constructivist IR scholars suggest that material aspects within international politics do matter, but that they only acquire meaning in relation to social norms and identities (e.g., Wendt, 1992; Katzenstein, 1996). This claim is grounded in the assumption that international politics are embedded in a structure that is fundamentally social, and that this structure in turn influences the identities of global actors. For Wendt (1995), the social structure that underpins international politics is characterized by shared knowledge, material resources, as well as practices. His conception of politics presumes that technologies do matter, but – similar to Rosenau’s reservations – only in conjunction with larger social and societal trajectories. As Wendt (1995: 73) argues, “material resources only acquire meaning for human action through the structure of shared knowledge in which they are embedded.” In other words, technology can be an influential factor within the international system (Adler, 1997), but its impact cannot be understood without the social layers within which it is embedded. And while there is a general possibility for systemic change, such change is crucially not brought about by the invention or implementation of new technologies, but by changing norms and values. As Wendt (1995: 81) puts it, “to analyze the social construction of international politics is to analyze how processes of interaction produce and reproduce the social structures – cooperative or conflictual – that shape actors’ identities and interests and the significance of their material contexts.”

This brief summary of mainstream IR theories and their stance toward technology, although certainly not doing full justice to decades of debates and theory-building, illustrates how technology, against the backdrop of the discipline’s defining question (i.e. change and stability within the international system), has predominantly been conceptualized as an external variable that exerts influence on international politics, but that is in itself little political. In other words, IR scholars were for the most part interested in technology as a tool that has the capacity to amplify power, foster processes of globalization, or play a role in the emergence of norms and identities. IR has, however, shown surprisingly little interest in unpacking technology – that is, in investigating how technologies are being constructed or how they become implemented and used in specific institutional or organizational contexts. In a lifecycle of technology that covers different stages from basic and applied research; engineering and design; implementation, practice, and maintenance; to eventual “death” or replacement, IR was thus first and foremost interested in how already available and implemented technologies interfere with politics and society (Fritsch, 2011).

McCarthy (2013, 2018) attributes this externalization to a predominant determinist understanding of technology that can be encountered throughout most of IR, either in instrumentalist or essentialist terms. An instrumentalist understanding of technology presupposes that technology is a neutral tool that only acquires meaning through its use and resulting social and political practices. The assumption here is that technology could be
fully controlled by humans and could thus serve as a means to pre-specified ends. In IR, this idea can be encountered most clearly in realist accounts that see (military) technology as a means to enhance the capacities to wage war, and therefore to gain power vis-à-vis other states. An instrumentalist understanding of technology thereby results, as demonstrated, in the inevitable externalization of technology as a variable that influences the international system, but is itself not an integral part of that system.

Essentialism, on the other hand, conceptualizes technology as a central driving force for progress. Essentialist variants of determinism are underpinned by a strong belief in teleological progress, and by the idea that social and economic constraints can be overcome by technological innovation. Dahlberg (1973), for example, identifies a “technological ethic” that is deeply embedded within Western values and politics, and that is characterized by scientific rationalization, an exploitative control of nature, the search for perfection, an increasing functional specialization, and novel forms of mobility. For him, technology in all these manifestations directly impacts the exercise of politics. As he argues, “it should be clear that the contexts of international relations, the behavior of most relevant actors, and even our understandings of international relations are strongly but variously colored by the technological ethic” (Dahlberg, 1973: 84). Others, such as Mumford (1970) or Winner (1977), have put forward a more pessimistic reading of the presupposed essentialist characteristics of technology, as they regard faith in technological innovation as more dangerous than liberating, and caution against unforeseen consequences and side effects from the implementation of new technologies at scale.

Independent of whether one favors an optimistic or pessimistic general stance towards technology, framing technology as deterministic is analytically compatible with the discipline’s focus on explaining change and stability in the international system. At the same time, however, such a perspective reduces technology to something that is already given and that changes the world from the outside. Determinist accounts of technology thus fail to take into account how technologies come into being and how existing social, political, and economic structures are always already imprinted on them. Even though within Strategic Studies there is a sustained tradition of research around the theme of technological innovation (e.g., Parker, 1988; Rosen, 1991; Farrell and Terriff, 2002), these perspectives seldom go beyond a determinist understanding of technology as an instrument that needs to be developed in order to create (military) power capacities.

More recently, a number of IR scholars have expressed a general discontent with the determinist analytical treatment of technology as an externalized explanatory variable for change/stability in the international system (e.g., Herrera, 2003, 2006; Fritsch, 2011, 2014; Mayer et al., 2014; Salter, 2015a; Davidshofer et al., 2017; McCarthy, 2018). These authors claim that technological development and technological practices must not be separated from the social, political, and economic structures in which they are embedded. This has
already resulted in detailed accounts of issues as diverse as transnational business governance (Porter, 2014), the legal expertise surrounding the use of drones and targeted killings (Leander, 2013), or the socio-technical construction of airport security (Schouten, 2014; Valkenburg and van der Ploeg, 2015; Hoijtink, 2017). These contributions highlight the open-endedness of processes of technological development and demonstrate that technology is never the neutral tool that it is often presented to be. On the contrary, technological development and deployment is highly political and subject to social, institutional, economic, and material possibilities and constraints, alongside the preferences of developers, engineers, and designers.

Taking seriously Herrera’s (2003: 566) claim that “technology needs to be endogenous to politics,” an understanding of technology as socially constructed helps us to overcome the determinist ontologies that have prevented the unpacking of technology within mainstream IR. Most notably, such a perspective on technology emphasizes the need to replace the totalizing imaginary of a master (human)/slave (machine) relationship – or vice versa, depending on whether one favors an optimistic or pessimistic stance – with the idea of complex socio-technical systems in which humans and machines work together. This, as we will argue below, also opens up the study of technology for an understanding of agency as emergent through the interaction between human and non-human elements.
Agency in IR: agents, structures, relations

In IR, agency has been most prominently discussed as part of the “agent-structure problem” (Wendt, 1987). Starting from the question of whether human agency or the social structure within which it is embedded determines international action, debates about agency have mostly been concerned with how to situate agency and structure vis-à-vis each other, as well as vis-à-vis monocausal structuralist or intentionalist theories (e.g., Dessler, 1989; Hollis and Smith, 1991; Doty, 1997; Wight, 1999). Most approaches to the agent-structure problem depart from the assumption that agency and structure are mutually constitutive, and thus look for ways of accommodating both in the analysis of international politics. Wendt (1987), for example, has suggested a “constructionist” framework that he regards as capable of accounting for the constraints that international actors face with regard to social structures, but also for the power that these actors possess to transform the structures within which they are embedded.

Despite the fact that there is still a lack of shared agreement about what agency actually means (Wight, 2006), agency is in IR usually considered an exclusive concern of the human domain. This ties in neatly with much of modernist philosophy and social theory that, in the vein of the Cartesian split between mind and matter, places the liberal subject at the center of its ontology. This anthropocentric perspective rests on the presupposition that only humans possess consciousness and free will, and should therefore
occupy a preeminent position in the world. In this tradition, a boundary between the human world and the non-human world thus separates the conscious subject from the unconscious matter with which it is surrounded – supported by a Newtonian account of physics that presupposes the existence of universal natural laws that explain the causal forces which move otherwise lifeless matter. Much of mainstream social science theory, including IR, subscribes to such a scientific analytical paradigm that is predicated upon the identification of causal mechanisms in order to explain social and political action (Jackson, 2017). The capacity to act would from such a perspective necessarily be constrained to humans vis-à-vis the social structures they create. Such an angle does not, however, problematize the notion of agency itself, as it brackets the question of who can be an actor in the first place. Inspired by sociological accounts of agency (Emirbayer and Mische, 1998), Jackson and Nexon (1999) have thus proposed to analytically foreground the ways in which agency is produced through relations and the social and political entities that they produce and stabilize. Instead of homing in on the possibilities for human agency against the backdrop of social structures, they direct our attention to action itself, and to how agency can be retraced backwards and located in interaction.

The relational perspective proposed by Jackson and Nexon has several implications. First of all, it opens up the analytical toolbox of IR for influences from beyond the discipline. A relational understanding of agency speaks closely to various approaches from STS and New Materialism, and IR scholars have started to explore how these approaches can be productively integrated into IR. We will engage with these encounters in more detail below.

Second, it presupposes an empirical rather than a theoretical research agenda (Braun et al., 2018). If (political) agency emerges through interaction, detailed study of these interactions is paramount. Importantly, this implies that there can be no totalizing account of what agency is or what it does. Rather, agency must by definition be understood as multiple, variegated, and context dependent. This again speaks closely to the sociological and anthropological tradition of empirical (ethnographic) study of scientific and technological practices in STS. STS scholars have foregrounded the analytical importance of empirical sites of inquiry, most prominently embodied in the move to study the “laboratory” as the site where scientific facts are produced and start their journey to make an impact on the world (e.g., Latour and Woolgar, 1979; Lynch, 1985; Knorr-Cetina, 1995). And even though STS work has by no means been restricted to laboratory studies, the insight that context matters for the ways in which technologies are rendered into socio-technical systems and transform the ways in which we act is persistently important.

Third, an understanding of agency as emergent through interaction does not exclude non-human elements. This acknowledgment is key when we think of
algorithmic and robotic technologies and the socio-technical systems that they constitute. As we have outlined in the beginning of this introduction, the notion of the socio-technical system challenges an understanding of non-human elements as passive objects that are fully subjected to human agency, and rather encourages us to study the role of objects in the constitution of agency, as they share or split the workload together with humans. As such, a relational perspective on agency by definition challenges the modernist anthropocentric ontology.

It thereby speaks closely to a broad body of scholarship under the title of New Materialism, which brings together a range of scholars from different theoretical and disciplinary backgrounds, including post- or anti-humanism, critical or speculative realism, chaos theory, complexity theory, object-oriented metaphysics, modern vitalism, or philosophy of becoming (Connolly, 2013b: 399; Coole, 2013: 452). What New Materialism scholars, despite their variegated theoretical roots, have in common is their refusal to uphold the anthropocentrism that has long dominated modernist and liberal philosophy and social theory. As Coole and Frost (2010: 8) argue, “modern philosophy has variously portrayed humans as rational, self-aware, free, and self-moving agents” that exercise dominance over nature and technology – and it is precisely this ontological divide that has enticed New Materialist scholars to search for alternative ways of framing the relationship between the human and non-human elements of the world. Seminal contributions by scholars such as Bennett (2010), Barad (2007), Haraway (1991), or Hayles (2006) focus not only on the role of science and technology within society, but also widen the analytical scope to the ontological status of materiality itself. Starting from the assumption that “materiality is always something more than ‘mere’ matter: an excess, force, vitality, relationality, or difference that renders matter active, self-creative, productive, unpredictable” (Coole and Frost, 2010: 9), New Materialism scholars subscribe to an ontology of complexity and emergence in the context of which natural elements, technological artifacts, animals, and humans interact in creative and partly unforeseeable ways. From such a perspective, as Barad (2007: 33) writes, “the world’s radical aliveness comes to light in an entirely nontraditional way that reworks the nature of both relationality and aliveness.” Such a perspective then allows for novel modes of analyzing the social, the political, and the economic as domains that are no longer produced by human decision-making and actions alone, but by entangled, emergent, and generative powers that include a variety of non-human actors and effects.

Bennett (2010) aptly illustrates how such an understanding of the relevance of non-human forces plays out through her account of the 2003 power blackout in the US Midwest and Northeast and Canadian Ontario, which affected about 50 million people and lasted, in some regions, for four days (U.S.-Canada Power System Outage Task Force, 2004: 1). Leading to the failure of the electricity grid, a chain of cascading interaction effects, almost without human interference, inflicted such major damage on
the grid that not even fail-safe measures could prevent the blackout. As Bennett (2010: 25) writes:

[W]hat seems to have happened on that August day was that several initially unrelated generator withdrawals in Ohio and Michigan caused the electron flow pattern to change over the transmission lines, which led, after a series of events including one brush fire that burnt a transmission line and then several wire-tree encounters, to a successive overloading of other lines and a vortex of disconnects. One generating plant after another separated from the grid, placing more and more stress on the remaining participants.

In other words, one thing had led to another, with the notion of “the thing” here referring to something that is explicitly non-human. The seemingly banal acknowledgment that “the international, the globe, the world is made up of things, of stuff, of objects, and not simply of humans and their ideas” (Salter, 2015a: vii), and more importantly, the acknowledgment that these things can contribute to the constitution of agency through interaction with humans and other things, has more recently gained increasing traction within IR. Scholars have for example started to explore neoliberal capitalist practices as an interplay of social, geological, biological, and climate systems (Connolly, 2013a, 2013b), the materiality of conflict and the importance of forensic knowledge about material objects in the context of investigating human rights violations (Walters, 2014), the socio-technical assemblages of digital security practices (Bellanova and Duez, 2012), or the material dimensions of infrastructure and its implication for the politics of infrastructure protection (Aradau, 2010), and have made material aspects of the international sphere the subject of edited collections (Acuto and Curtis, 2014; Salter, 2015b, 2016) and special issues in academic journals (Srnicek et al., 2013).

Particularly with regard to technologies that do things that humans simply cannot do themselves (e.g., recognizing and engaging an incoming hostile missile within seconds; extracting patterns from millions of database entries; simultaneously monitoring and analyzing multiple video streams), the possibility for non-exclusively human agency has provoked a number of regulatory and ethical debates. Is the current legal system, for example, capable of accommodating actions that have not been consciously carried out by humans? Could machines ever act in a morally responsible fashion? And if not, where must accountability and responsibility be located when humans and computer systems work together, but the system does things that the human operator could not do themselves?

The modernist-liberal imaginary of agency revolves around the conscious individual and its volitional decision-making, leading to eventual action and consequences in the world. This causal chain establishes the possible allocation of responsibility for one’s actions, both in the courtroom and morally speaking. A notion of agency that is “decoupled from criteria of intentionality, subjectivity, and
free-will” (Sayes, 2014: 141), however, fundamentally complicates the causal chain of reasoning that is elemental to the idea of responsibility. If collectives, assemblages, networks, and mediating coalitions are conceptualized as pertinent for the production and reproduction of agency, then it becomes increasingly difficult to apply traditional legal and ethical categories. Such questions not only have practical appeal vis-à-vis the challenges that algorithmic and robotic technologies pose, but they also strike at the core of what it means to be a human being in this world. As Coole and Frost (2010: 4) put forward, what is at stake here is

nothing less than a challenge to some of the most basic assumptions that have underpinned the modern world, including its normative sense of the human and its beliefs about human agency, but also regarding its material practices such as the ways we labor on, exploit, and interact with nature.

A symmetrical understanding of ontology would indeed prescribe an ethical responsibility of acting within and with the world, rather than acting vis-à-vis the world.
Studying technology and agency in IR

This book addresses the question of how agency, understood as an emergent form of interaction within socio-technical systems, comes to matter within international politics. The ways in which agency comes into being, and with what repercussions, must however – due to the empirical multiplicity and context sensitivity of interactions between humans and non-humans – by definition always remain situated and partial. This means that a general theory of Technology and Agency in International Relations is hardly possible. Such a generalization is, however, neither desirable nor is it what we are striving for here. The contributions to this book offer careful empirical analyses that place socio-technical systems within their political, legal, economic, ethical, cultural, and organizational contexts and explore how agency emerges and comes to matter. Situating technology within specific contexts thereby enables them to problematize the notion of agency and its transformations and effects in international politics. At the same time, it allows the authors to demonstrate that agency comes into being in variegated ways: voluntarily or involuntarily; planned or emergent; structured or chaotic. Thinking about technology and agency through these relations and interactions then arguably allows us to more systematically understand the implications of algorithmic and robotic technologies for international politics.

The point of analyzing agency through the study of interaction in socio-technical systems is to account for plurality and complexity, and to do so in ways that allow us to come to terms with such plurality and complexity rather than to homogenize or totalize the role of technology in international
politics and the ways in which it becomes part of political action. The study of technology and agency in IR in this sense, as Claudia Aradau (this volume) puts forward, thrives on the incorporation of multiple theoretical and methodological perspectives that allow us to embrace complexity and plurality – and thereby challenge the long-standing preference for parsimonious theory-building in IR. This book should in this sense be understood as an invitation to draw upon a multiplicity of approaches and concepts in research on technology and agency.

While the contributions to the book are united by the attempt to productively problematize agency and technology, they do so by means of a diverse conceptual toolbox. Mareile Kaufmann (this volume) in her analysis of algorithms for predictive policing, Georgios Glouftsios (this volume) in his account of the construction of the Visa Information System for European border management, and Malcolm Campbell-Verduyn (this volume) in his investigation of blockchain technology and its implications for international financial regulation, all draw on the Social Construction of Technology (SCOT) literature. SCOT scholars suggest that we conceptualize technology as enmeshed with discursive and material networks, as well as with the heterogeneous controversies, conflicts, and discourses that surround it (e.g., Latour and Woolgar, 1979; Callon, 1980, 1986b; Hughes, 1983; Bijker et al., 1987; MacKenzie and Wajcman, 1999). Building on a strong notion of constructivism, SCOT approaches reject the teleological assumptions that essentialist forms of determinism posit, and instead highlight the open-endedness of processes of technological development. By means of empirical engagement with the various stages through which technologies emerge, SCOT scholars emphasize that technology is never the neutral tool that it is often presented to be. On the contrary, technological development and deployment are highly political and subject to social, institutional, economic, and material possibilities and constraints, alongside the preferences of developers, engineers, and designers.

Kaufmann's chapter (this volume) in this vein provides us with an interesting account of the life cycle of a technology, as it traces algorithms for predictive policing purposes from the cradle to the grave. Drawing on interviews with police staff, software developers, and programmers, she engages the consecutive stages of (pre-)conception, birth, adolescence, graduation, implementation, and death, and sketches out how each of these stages becomes subject to negotiation, controversy, and organizational and infrastructural requirements. While Kaufmann's research was initially "only" interested in questions of agency, she soon found that larger social and political trajectories took center stage during the analysis of empirical data. To be able to understand the workings and effects of data and algorithms for predictive policing, she thus argues, a range of other elements need to be taken into account, including the importance of a longer history of technology in police work and attitudes towards data and digital methods within the police.
Glouftsios' (this volume) analysis of the Visa Information System (VIS) – a large-scale IT system that was designed for the management of the European border framework – follows a similar approach. Building on ethnographic fieldwork and expert interviews, he highlights the dispersed ways in which the VIS emerged throughout a multi-year process that included a variety of heterogeneous elements and actors. As he follows the VIS through variegated instances of design, technical feasibility studies, political negotiations, calculations, draftings, and re-draftings, Glouftsios manages to explicate how, in the construction of technology, networks of heterogeneous elements are tied together and rendered productive. He thereby forcefully demonstrates how multiple human and non-human elements, such as EU bureaucrats and security experts, servers, network cables, interfaces, and algorithms are involved in the constitution of the VIS system, and by extension, in the very practice of border security, migration management, and law enforcement in the EU and its neighborhood.

Another prominent way to study agency and technology in IR is through the toolbox of Actor-Network Theory (ANT), as adopted by a number of the contributions in this volume (Olbrich, this volume; Glouftsios, this volume; Kaufmann, this volume). ANT, as advanced by Callon, Law, Latour, and others (e.g., Callon, 1984, 1986a; Law, 1986, 1992; Latour, 2005), has been particularly prominent in IR in recent years (e.g., Barry, 2013; Best and Walters, 2013; Bueger, 2013; Nexon and Pouliot, 2013; Passoth and Rowland, 2015). It starts from the assumption that social effects are produced by heterogeneous networks of actants that comprise social and technical parts, including organizations and institutions as much as things, artefacts, and humans. Each of these elements should be seen as equally important to the network, as they produce and reproduce social order in a joint fashion. ANT presupposes that all of the elements of the network are relevant for actions, whether their actions emerge in a deliberate (human) fashion or not. Latour (2005) therefore suggests using the term "actant" as opposed to the liberal expression of the "actor," as the notion of the actant indicates that action is not necessarily tied to human intention or consciousness. Such a perspective then allows for more suitable modes of understanding non-human action. As Latour (2005: 71) has famously argued: "If action is limited a priori to what 'intentional,' 'meaningful' humans do, it is hard to see how a hammer, a basket, a door closer, a cat, a rug, a mug, a list, or a tag could act."

One of the things that ANT then brings to the study of international politics is a concern with the place of non-humans in political life and the effects of relational practices between humans and non-humans. From an ANT perspective, agency is always entangled and distributed. In addition, an ANT approach advances the study of technology in international politics by drawing specific attention to the link between situated and local practices of knowledge production and their broader effects, or to how particular knowledge claims or truth claims gain content and political importance.
In his chapter on the use of satellite imagery for the monitoring of human rights abuses, Philipp Olbrich (this volume) draws on ANT to point out how satellite technology becomes a participant in the making and remaking of North Korea as a security threat and pariah state. For Olbrich, the use of satellite technology has a key impact on what can be known (or not known) about human rights abuses, conflict, or political violence on a global scale. In turn, what is presented as evidence through the use of satellite technology has important effects for how the international community engages with North Korea – or, rather, disengages from North Korea, as practices of satellite surveillance reify the image of North Korea as a pariah state and further limit the potential for dialogue. Finally, as Olbrich shows, in the process of making North Korea visible and producing evidence, satellite imagery itself remains largely unquestioned. In fact, in the process of conducting satellite surveillance, satellite technology is further reified as an objective, neutral, and desirable way of examining human rights violations.

Taking a slightly different perspective on the relations between humans and technology and the resulting effects for the production of agency, the contributions by Katja Lindskov Jacobsen and Linda Monsees (this volume) and Malcolm Campbell-Verduyn (this volume) are informed by Sheila Jasanoff's (2004) work on co-production. Jacobsen and Monsees argue that the concept of co-production – even though somewhat underacknowledged within IR – is particularly suitable for studying technology and technological agency in international politics. For them, the specific emphasis that co-production places on how science and technology, or the making of scientific knowledge and facts, affects social order and hierarchies has important analytical value in the sense that it re-introduces key questions in IR, such as global power, inequality, and norms. Campbell-Verduyn's inquiry into the governance of international finance foregrounds how the political and economic perception of blockchain technology changed from a framing of the blockchain as a threat to established financial institutions to the incorporation of blockchain technology within the liberal capitalist system. He highlights the role of blockchain technology as co-productive of the transformation of international finance, arguing that the blockchain has produced and legitimized the power of its users, while at the same time being subjected to the influence of its users. Forms of political agency that are produced through this interaction between human and technological authority then had global repercussions in the sense that they provided the conditions for further extending liberal governance modalities in the wake of the 2008 global financial crisis. Both Campbell-Verduyn and Jacobsen and Monsees manage, through the concept of co-production, to explicate how both discursive and material aspects of technology come to matter in the ways in which agency is produced.

This relation between discourse and materiality is also key in the work of Suchman (2007, 2012) that Leese (this volume) mobilizes in his analysis of
human-machine relations in military weapons systems. In order to understand what might be at stake in the future of warfare against the backdrop of potentially "autonomous" weapons systems, Suchman's concept of configuration provides, for him, a productive lens, as it directs analytical attention to the specific ways in which humans and machines share or split tasks, and how their relationship revolves around notions of automation and control. Leese's analysis in this sense highlights the role of cultural imaginaries that inform the construction of socio-technical systems, and particularly the idea of "meaningful human control" over automated system functions. In doing so, he draws specific attention to the presupposed boundary between humans and computers that is engendered within socio-technical systems through the notion of human control.

A slightly different perspective is applied by Alex Edney-Browne (this volume) in her analysis of visuality within practices of drone warfare. In order to demonstrate what can go wrong when humans and technologies work together, she engages practices of drone warfare and how the fallibilities of human-machine interaction on the battlefield can have lethal consequences. Building on visual IR theories and critical military studies, she raises powerful concerns with regard to the growing authority of visual technologies in military affairs. As she works through the notions of failure and fallibility, Edney-Browne's analysis points to the importance of examining and uncovering the flaws that are inherent in algorithmic and robotic technologies and the socio-technical systems of which they are part. Such a critical stance then challenges techno-fetishization and questions military institutions' embellished claims about their technological capabilities.

Finally, in her reflections on technology and agency in international politics, Claudia Aradau (this volume) urges us to expand our analytical toolbox even further, by also including feminist and post-colonial perspectives on technology and agency and by paying explicit attention to the multiplicity and debates within STS and IR. In our interview with her, which serves as a conclusion to the book, Aradau elaborates, among other things, on questions of the global circulation of technology, the role of technology in the production of knowledge, and broader issues of secrecy, critique, and politics. Aradau thereby draws particular attention to the ways in which distributed and entangled modes of agency produce specific forms of knowledge, or act upon our bodies in specific ways. According to her, a symmetric reading of the world could and should still lead to an engagement with how asymmetric relations of power, authority, and knowledge are produced. This would then also direct attention to questions of who or what gets to speak and act, or what counts as evidence. For her, these questions are underpinned by particular relations between actors, but also by the technologies, forms of equipment, and instruments that these actors can access or interact with.

Overall, IR scholars have in recent years made sustained and encouraging efforts to render ideas and concepts from STS and New Materialism
productive for the study of the international, which has resulted in a variety of efforts to re-appropriate our understanding of technology and agency in international contexts. However, as Salter (2015a: xviii–xix) notes, these efforts still resemble a "party not quite in full swing." In other words, there remains much empirical and conceptual work to do. With this book, we hope to offer a contribution to the debates by foregrounding the importance of agency, specifically with regard to algorithmic and robotic technologies. If we conceive of agency in relational and entangled forms that emerge through the interactions between heterogeneous elements, we should turn our attention to these interactions, and the ways in which they bind humans and non-humans together. A "flat" or symmetric reading of ontology as proposed by STS and New Materialist scholars then not only requires us to rethink what it means to act in the world, but also raises a set of questions that concern the ways in which international politics are structured.
Notes

1 Aware of the risk of oversimplifying the many different types of technologies that hold relevance for international politics, we will throughout this introduction refer to "algorithmic and robotic technologies," as this term covers both physical ("hardware") and digital ("software") aspects. The most intense debates about technology can usually be encountered when both of these aspects are combined, i.e. when technologies are rendered "intelligent" based on sensing and algorithmic processing capacities, while at the same time being able to move around and interact with their environment.
2 When we speak here of politics in relation to technology, we do not refer to regulatory debates or to the governance of technology, but rather to the ways in which technology is embedded in politics and/or has political effects by means of its interaction with humans.
References

Acuto M and Curtis S (eds.) (2014) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan.
Adler E (1997) Seizing the Middle Ground: Constructivism in World Politics. European Journal of International Relations 3(3): 319–363.
Aradau C (2010) Security that Matters: Critical Infrastructure and Objects of Protection. Security Dialogue 41(5): 491–514.
Barad K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham/London: Duke University Press.
Barry A (2013) The Translation Zone: Between Actor-Network Theory and International Relations. Millennium – Journal of International Studies 41(3): 413–429.
Bellanova R and Duez D (2012) A Different View on the 'Making' of European Security: The EU Passenger Name Record System as a Socio-Technical Assemblage. European Foreign Affairs Review 17(2/1): 109–124.
Bennett J (2010) Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press.
Best J and Walters W (2013) "Actor-Network Theory" and International Relationality: Lost (And Found) in Translation. International Political Sociology 7(3): 332–334.
Bijker W E, Hughes T P and Pinch T J (eds.) (1987) The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge/London: MIT Press.
Braun B, Schindler S and Wille T (2018) Rethinking Agency in International Relations: Performativity, Performances and Actor-Networks. Journal of International Relations and Development. Online first: 10.1057/s41268-018-0147-z.
Bueger C (2013) Actor-Network Theory, Methodology, and International Organization. International Political Sociology 7(3): 338–342.
Butler J (2010) Performative Agency. Journal of Cultural Economy 3(2): 147–161.
Buzan B (1987) An Introduction to Strategic Studies: Military Technology and International Relations. Basingstoke/London: Macmillan Press.
Callon M (1980) The State and Technical Innovation: A Case Study of the Electrical Vehicle in France. Research Policy 9(4): 358–376.
Callon M (1984) Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay. The Sociological Review 32(1): 196–233.
Callon M (1986a) The Sociology of an Actor-Network: The Case of the Electric Vehicle. In Callon M, Law J & Rip A (eds.) Mapping the Dynamics of Science and Technology: Sociology of Science and the Real World. Basingstoke: Macmillan Press, 19–34.
Callon M (1986b) Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay. In Law J (ed.) Power, Action and Belief: A New Sociology of Knowledge? London: Routledge, 196–223.
Castells M (2000) The Information Age: Economy, Society and Culture, Part 1: The Rise of the Network Society. Malden/Oxford/Chichester: Wiley-Blackwell.
Connolly W E (2013a) The Fragility of Things: Self-Organizing Processes, Neoliberal Fantasies, and Democratic Activism. Durham: Duke University Press.
Connolly W E (2013b) The 'New Materialism' and the Fragility of Things. Millennium – Journal of International Studies 41(3): 399–412.
Coole D (2013) Agentic Capacities and Capacious Historical Materialism: Thinking with New Materialisms in the Political Sciences. Millennium – Journal of International Studies 41(3): 451–469.
Coole D and Frost S (2010) Introducing the New Materialisms. In Coole D & Frost S (eds.) New Materialisms: Ontology, Agency, and Politics. Durham/London: Duke University Press, 1–43.
Cudworth E and Hobden S (2013) Of Parts and Wholes: International Relations beyond the Human. Millennium – Journal of International Studies 41(3): 430–450.
Dahlberg K A (1973) The Technological Ethic and the Spirit of International Relations. International Studies Quarterly 17(1): 55–88.
Davidshofer S, Jeandesboz J and Ragazzi F (2017) Technology and Security Practices: Situating the Technological Imperative. In Basaran T, Bigo D, Guittet E-P & Walker R B J (eds.) International Political Sociology: Transversal Lines. Milton Park/New York: Routledge, 205–227.
Der Derian J (2003) The Question of Information Technology in International Relations. Millennium – Journal of International Studies 32(3): 441–456.
Dessler D (1989) What's at Stake in the Agent-Structure Debate? International Organization 43(3): 441–473.
Doty R L (1997) Aporia: A Critical Exploration of the Agent-Structure Problematique in International Relations Theory. European Journal of International Relations 3(3): 365–392.
Emirbayer M and Mische A (1998) What Is Agency? American Journal of Sociology 103(4): 962–1023.
Farrell T and Terriff T (eds.) (2002) The Sources of Military Change: Culture, Politics, Technology. Boulder: Lynne Rienner Publishers.
Fritsch S (2011) Technology and Global Affairs. International Studies Perspectives 12(1): 27–45.
Fritsch S (2014) Conceptualizing the Ambivalent Role of Technology in International Relations: Between Systemic Change and Continuity. In Mayer M, Carpes M & Knoblich R (eds.) The Global Politics of Science and Technology – Vol. 1: Concepts from International Relations and Other Disciplines. Dordrecht: Springer, 115–138.
Fukuyama F (1992) The End of History and the Last Man. London: Penguin Books.
Gray C H (2005) Peace, War, and Computers. London/New York: Routledge.
Haraway D J (1991) Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.
Hayles N K (2006) Unfinished Work: From Cyborg to Cognisphere. Theory, Culture & Society 23(7–8): 159–166.
Heidegger M (1977) The Question Concerning Technology and Other Essays. New York: Harper & Row Publishers.
Herrera G L (2003) Technology and International Systems. Millennium – Journal of International Studies 32(3): 559–593.
Herrera G L (2006) Technology and International Transformation: The Railroad, the Atom Bomb, and the Politics of Technological Change. Albany: SUNY Press.
Hoijtink M (2017) Governing in the Space of the "Seam": Airport Security after the Liquid Bomb Plot. International Political Sociology 11(3): 308–326.
Hollis M and Smith S (1991) Beware of Gurus: Structure and Action in International Relations. Review of International Studies 17(4): 393–410.
Hughes T P (1983) Networks of Power: Electrification in Western Society, 1880–1930. Baltimore: Johns Hopkins University Press.
Jackson P T (2017) Causal Claims and Causal Explanation in International Studies. Journal of International Relations and Development 20(4): 689–716.
Jackson P T and Nexon D H (1999) Relations before States: Substance, Process and the Study of World Politics. European Journal of International Relations 5(3): 291–332.
Jasanoff S (2004) The Idiom of Co-Production. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 1–12.
Katzenstein P J (1996) The Culture of National Security: Norms and Identity in World Politics. New York: Columbia University Press.
Knorr-Cetina K D (1995) Laboratory Studies: The Cultural Approach to the Study of Science. In Jasanoff S, Markle G E, Petersen J & Pinch T J (eds.) Handbook of Science and Technology Studies. Thousand Oaks/London/New Delhi: Sage, 140–166.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Latour B and Woolgar S (1979) Laboratory Life: The Social Construction of Scientific Facts. Beverly Hills: Sage.
Law J (ed.) (1986) Power, Action, and Belief: A New Sociology of Knowledge? London: Routledge & Kegan Paul.
Law J (1991) Introduction: Monsters, Machines and Sociotechnical Relations. In Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London/New York: Routledge, 1–23.
Law J (1992) Notes on the Theory of the Actor-Network: Ordering, Strategy and Heterogeneity. Systems Practice 5(4): 379–393.
Leander A (2013) Technological Agency in the Co-Constitution of Legal Expertise and the US Drone Program. Leiden Journal of International Law 26(4): 811–831.
Lynch M (1985) Art and Artifact in Laboratory Science: A Study of Shop Work and Shop Talk in a Research Laboratory. London: Routledge.
MacKenzie D and Wajcman J (eds.) (1999) The Social Shaping of Technology. Berkshire: Open University Press.
Mayer M, Carpes M and Knoblich R (2014) The Global Politics of Science and Technology: An Introduction. In Mayer M, Carpes M & Knoblich R (eds.) The Global Politics of Science and Technology – Vol. 1: Concepts from International Relations and Other Disciplines. Dordrecht: Springer, 1–35.
McCarthy D R (2013) Technology and 'The International' Or: How I Learned to Stop Worrying and Love Determinism. Millennium – Journal of International Studies 41(3): 470–490.
McCarthy D R (2018) Introduction: Technology in World Politics. In McCarthy D R (ed.) Technology and World Politics: An Introduction. Milton Park/New York: Routledge, 1–21.
Mearsheimer J J (1994) The False Promise of International Institutions. International Security 19(3): 5–49.
Morgenthau H J (1948) Politics among Nations: The Struggle for Power and Peace. New York: Knopf.
Mumford L (1970) The Myth of the Machine. New York: Harcourt Brace.
Nexon D H and Pouliot V (2013) "Things of Networks": Situating ANT in International Relations. International Political Sociology 7(3): 342–345.
Ogburn W F (1949a) The Process of Adjustment to New Inventions. In Ogburn W F (ed.) Technology and International Relations. Chicago: University of Chicago Press, 16–27.
Ogburn W F (ed.) (1949b) Technology and International Relations. Chicago: University of Chicago Press.
Onuf N G (1989) World of Our Making: Rules and Rule in Social Theory and International Relations. London: Routledge.
Parker G (1988) The Military Revolution: Military Innovation and the Rise of the West, 1500–1800. Cambridge: Cambridge University Press.
Passoth J-H and Rowland N J (2015) Who Is Acting in International Relations? In Jacobi D & Freyberg-Inan A (eds.) Human Beings in International Relations. Cambridge: Cambridge University Press, 266–285.
Pickering A (1993) The Mangle of Practice: Agency and Emergence in the Sociology of Science. American Journal of Sociology 99(3): 559–589.
Porter T (2014) Technical Systems and the Architecture of Transnational Business Governance Interactions. Regulation & Governance 8(1): 110–125.
Rosen S P (1991) Winning the Next War: Innovation and the Modern Military. Ithaca: Cornell University Press.
Rosenau J N (1990) Turbulence in World Politics: A Theory of Change and Continuity. Princeton: Princeton University Press.
Rosenau J N and Singh J P (eds.) (2002) Information Technologies and Global Politics. Albany: State University of New York Press.
Salter M B (2015a) Introduction: Circuits and Motion. In Salter M B (ed.) Making Things International 1: Circuits and Motion. Minneapolis: University of Minnesota Press, vii–xxii.
Salter M B (ed.) (2015b) Making Things International 1: Circuits and Motion. Minneapolis: University of Minnesota Press.
Salter M B (ed.) (2016) Making Things International 2: Catalysts and Reactions. Minneapolis: University of Minnesota Press.
Sayes E (2014) Actor-Network Theory and Methodology: Just What Does It Mean to Say that Nonhumans Have Agency? Social Studies of Science 44(1): 134–149.
Schouten P (2014) Security as Controversy: Reassembling Security at Amsterdam Airport. Security Dialogue 45(1): 23–42.
Shaw M (2005) The New Western Way of War: Risk-Transfer War and Its Crisis in Iraq. Cambridge: Polity.
Skolnikoff E B (1967) Science, Technology, and American Foreign Policy. Cambridge/London: MIT Press.
Srnicek N, Fotou M and Arghand E (2013) Introduction: Materialism and World Politics. Millennium – Journal of International Studies 41(3): 397.
Suchman L (2007) Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition. Cambridge: Cambridge University Press.
Suchman L (2012) Configuration. In Lury C & Wakeford N (eds.) Inventive Methods: The Happening of the Social. London/New York: Routledge, 48–60.
U.S.-Canada Power System Outage Task Force (2004) Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations. Available at http://energy.gov/sites/prod/files/oeprod/DocumentsandMedia/BlackoutFinal-Web.pdf (accessed 10 August 2018).
Valkenburg G and van der Ploeg I (2015) Materialities between Security and Privacy: A Constructivist Account of Airport Security Scanners. Security Dialogue 46(4): 326–344.
Walters W (2014) Drone Strikes, Dingpolitik and Beyond: Furthering the Debate on Materiality and Security. Security Dialogue 45(2): 101–118.
Waltz K N (1979) Theory of International Politics. Reading: Addison-Wesley.
Wendt A (1987) The Agent-Structure Problem in International Relations Theory. International Organization 41(3): 335–370.
Wendt A (1992) Anarchy Is What States Make of It: The Social Construction of Power Politics. International Organization 46(2): 391–425.
Wendt A (1995) Constructing International Politics. International Security 20(1): 71–81.
Wight C (1999) They Shoot Dead Horses Don't They? Locating Agency in the Agent-Structure Problematique. European Journal of International Relations 5(1): 109–142.
Wight C (2006) Agents, Structures and International Relations: Politics as Ontology. Cambridge: Cambridge University Press.
Winner L (1977) Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press.
2 Co-production
The study of productive processes at the level of materiality and discourse

Katja Lindskov Jacobsen & Linda Monsees
Introduction: on co-production and agency

Sparked by an interest in technology, a number of International Relations (IR) scholars have looked for theoretical insights and analytical resources beyond the traditional toolkit of political science. Approaches stemming from the field of Science and Technology Studies (STS) and the related theoretical stream of new materialism are considered to be most valuable for grasping the particular character of technology (Amicelle et al., 2015; Austin, 2017; Barry, 2013; Barry and Walters, 2003; Jacobsen, 2015b; Salter, 2015; Squire, 2015; Walters, 2014). Importantly, all of these uses of STS insights by IR scholars have one point in common: they, as noted by William Connolly, refuse "the mechanical modes of explanation in classical materialism" (Connolly, 2013: 400). Indeed, the extent to which different IR scholars have applied STS insights is an important testament to how STS can help advance our understanding of important, yet underappreciated, issues of contemporary global politics. To be more specific, an important benefit that STS insights can bring to IR is the starting point that it is not only human practitioners who shape how technology affects global politics – technology itself may do so too. Put differently, this body of literature has emphasized in different ways how it is "crucial to recognize the vibrancy of matter or that things have power and energy in themselves independent of interpretations and representations imposed by humans" (Mac Ginty, 2017: 856). A key benefit of this conceptualization of technology as capable of affecting social relations in ways that cannot be reduced to the results of human intent, realized through technology, is that it enables new types of analyses "by following power into places where current social theory seldom thinks to look for it" (Jasanoff, 2004b: 24). This could be in "genes, climate models, research methods" or accounting systems (Jasanoff, 2004b: 42), or, as picked up by IR scholars, in infrastructure (Aradau, 2010; Hönke and Cuesta-Fernández, 2017), roadblocks (Bachmann and Schouten, 2018), or biometric technologies (Jacobsen, 2015a).

Broadening the concept of agency – or agentic capacity (Coole, 2013; see also Coole and Frost, 2010) or vibrancy (Bennett, 2010; Mac Ginty, 2017) – is one of the most attractive features of STS scholarship and has been widely
debated within IR. In this chapter, we want to introduce a particular perspective on this question of materiality's agentic capacity. We suggest that Sheila Jasanoff's work on the co-production of science and technology and social order (Jasanoff, 2004c) offers an important conceptual framework for thinking about how technological development shapes society and vice versa. Jasanoff's work is not primarily associated with the broadening of the concept of agency, as is the work of other STS scholars like Karen Barad, Donna Haraway, or Bruno Latour. However, we argue that Jasanoff's idiom of co-production offers a valuable entry point into the study of agency, precisely because it explicitly argues that our study of agency should not be reduced to the question of who or what possesses agency. Co-production is thus a way to broaden the inquiry of agency. Paying attention to how technology has agency, for example in the way in which it constitutes new domains of life as knowable and amenable to intervention, does not contradict or exclude attention to the importance of discourses in affecting how particular technologies are understood as "safe," "solutions," or "objective." Indeed, from this perspective it becomes possible to explore the interplay between constitutive processes at the level of discourse and at the level of materiality. This implies looking at whether technology is framed as objective, or whether social order and its implied hierarchies are taken for granted and framed as unchangeable.

Importantly, for Jasanoff, the study of co-productive processes is inseparable from broader reflections on how the agentic capacity of materiality plays into the constitution of social order. As Jasanoff (2004b: 25) notes: "Our methods for understanding and manipulating the world curve back and reorder our collective experience along unforeseen pathways." The concept of co-production thus entails explicit attention to how "forms of human organization and behavior" are affected in different ways "by science and technology" (Jasanoff, 2004b: 25). In that sense, the concept of co-production is arguably strongly compatible with an analysis of international politics and may thus help bridge the gap between STS-inspired work and other sociological or practice-based approaches in IR. This is so since co-productionist studies already focus on social order, questions of power and global inequality, as well as on the variety of practices that any study of the social production of science and technology necessarily engages with.

Through this introduction of Jasanoff's work to IR, we offer an example of an STS approach through which to discuss and explore questions of agency in a manner that pays attention to constitutive processes at the level of discourse and at the level of materiality. This, we suggest, may hold important insights for the field of IR at large, as do insights from practice-based and sociological approaches to studying agency for the analysis of agency and technology. More specifically, looking at constitutive processes at these two levels and their interplay may be a helpful starting point for exploring the role of technology in various intervention settings – be it humanitarian biometrics, drones in high-tech warfare, or satellites in environmental security. Looking at what particular technologies help constitute
through what they render "visible, intelligible, and thereby governable" (Rothe, 2017: 334) – e.g. digital bodies (Jacobsen, 2015a), legal expertise (Leander, 2013), or environmental risks (Rothe, 2017) – and at how discourses shape the ways in which specific technologies come to be understood, seems a fruitful lens for bringing out the politics at play in both of these constitutive processes.

The text is structured as follows. First, we introduce and explain the analytical value of the idiom of co-production with particular attention to questions of agency. We conclude this section by moving these ideas and insights from STS into IR, building on existing literature on the "translation" of STS into IR (Best and Walters, 2013). We then introduce three different ways of studying agency that all provide valuable insights for researching the co-production of science, technology, and society. Two of these methodologies (relational and ascribed agency) stem from theories developed outside of STS, but we show how they are compatible with a co-productionist analysis. This contribution thus also shows how concepts of agency that stem from different theoretical traditions speak to each other, and how the division between traditional and new concepts of agency might not always be all that stark. In short, this last section includes reflections about how studying agency entails methodological choices that reflect how the researcher understands and conceptualizes agency,1 including questions about the importance of constitutive processes at different levels and the interplay between these in the specific cases at hand.
Co-production

We consider the concept of co-production to be a valuable tool for inquiring into how agency is co-produced in human-material networks in the context of international relations. In this section, we will first introduce Jasanoff's work in more general terms, before zooming in on the idea of agency. For Jasanoff (2004a: 2), co-production indicates that "the ways in which we know and represent the world (both nature and society) are inseparable from the ways in which we choose to live in it." Thus, the two dimensions of the "co" in co-production primarily concern science and society. Indeed, co-production explicitly calls attention to questions about "what sorts of scientific entities or technological arrangements can usefully be regarded as being co-produced with which elements of social order" (Jasanoff, 2004b: 18). We want to suggest a reading in which the two principal dimensions of co-production are a) about the social production of technology and b) about production by technology given its agentic capacity. In this reading, the focus of the analysis is still the "co-production of science and social order" – as in the title of Jasanoff's (2004c) edited volume – but with more explicit attention to how two types of production unfold: production of technology and production by technology, each of which entails questions about the co-production of science/technology and society.
Concerning the production of technology, this includes questions about the social processes and practices through which a specific technology gets constituted in a particular capacity (as "safe," as "trusted," as "objective," etc.) and with what social implications. Concerning production by technology, this includes questions about the ways in which a specific technology generates effects, problem framings, structures, etc. with socio-political implications. This could be by affecting the reach of or rationale for state power, or by affecting social order and hierarchy via seemingly technical processes or scientific "facts." We conclude this section with a discussion of how the idiom of co-production might be able to respond to some points of critique leveled against translating STS into IR. We show here that co-production is compatible with more traditional questions of political science, including questions of power and inequality. This will then allow for a discussion of the different methodologies for studying agency in the last section.

Social production of technology

The first dimension of co-production invites us to explore the social production of technology. This firstly means scrutinizing how a technology or scientific discovery comes to appear as reliable or factual. Or, to use an example whose importance is perhaps more easily recognizable to IR scholars, it means scrutinizing how a technology of warfare, such as a drone, comes to appear ethical (Agius, 2017; Suchman et al., 2017). From a co-productionist perspective, when exploring the social production of technology, one must focus on analyzing and making visible the various social practices and discourses that contribute to the social production of scientific facts and technological authority. As Jasanoff (2004b: 17) puts it, when viewed in this way, "the workings of science and technology cease to be a thing apart from other forms of social activity." Rather than taking for granted that scientific facts and/or authoritative technologies are distinct from the social world as fundamentally objective phenomena, we are instead encouraged to explore how, for example, "proof" of a given technology's reliability and authority is produced – through what micro-level processes, practices, and attendant discursive framings. Investigating processes of knowledge production has been a core interest of STS since its early days (Latour, 1987; Wynne, 1989). For political scientists, attention to discursive framings is important in order to understand how technology enters into the social world, including what expectations, assumptions, etc. are attached to it. Indeed, the concept of co-production invites us to explore how discursive framings matter in the social production of science and technology, for example with reference to how they "often tacitly merge normative and technical repertoires" (Jasanoff, 2004b: 41).

Thus, one possible way to draw upon co-productionist insights is to explore the processes that go into the production of a particular technology
in a specific capacity. For example, since 2001, the United Nations High Commissioner for Refugees (UNHCR) has deployed biometrics in various operations on the premise that this technology would help reduce fraud, produce more accurate population figures, and speed up the process of refugee registration (Jacobsen, 2017). In 2001, biometrics was still in many ways surrounded by controversy, and questions about its scalability, for instance, remained unsettled (Jacobsen, 2015a). Noticing this, a co-productionist analysis could look at how, for example, humanitarian actors contribute to the making of biometric registration technology as "safe," "proven," and "tested" in a real-world setting. It could analyze the making of success stories with political implications, e.g. for thoughts about the further roll-out of biometrics in additional refugee contexts and elsewhere. For indeed, biometrics have also been deployed in a range of other intervention contexts. One example is the more recent turn to biometric voter registration, as supported by organizations like the European Union (EU) and the United Nations Development Program (UNDP). From a co-productionist perspective, understanding the implications of these technologies requires an analysis of how different social processes – including those of development agencies – affect how this technology is framed and understood, and how the particular framings that emerge will in turn have implications for how certain problems are understood and addressed (Abdelnour and Saeed, 2014).

Co-production looks not only at the micro-production of scientific facts, but also at how this affects the macro-production of social order (Jasanoff, 2004b). Indeed, the concept explicitly asks us to explore how the micro level links to macro concerns, such as the production of social norms and hierarchies, which extend even to the global level. In this way, co-production offers a way of responding to the critique leveled against STS for being too focused on micro practices. Jasanoff has, for example, stressed the importance of attending to how the (micro) making of emissions science affects (macro) structures of global inequality, as a specific scientific framing of global warming was contributing to the reproduction of a state of inequality that already existed in the world (Jasanoff, 2010). Thus, Jasanoff shows how a co-productionist analysis is not only about exploring how climate science is produced as authoritative, but also about interrogating the implications of this for social order. With reference to the work of Agarwal and Narain (1991), for example, Jasanoff notes how a particular scientific framing

effectively attached equal weight to all emissions regardless of their source, so that methane emissions from subsistence agricultural practices in developing countries ('survival emissions') were held to be just as bad from an environmental standpoint as emissions from wasteful practices such as excessive automobile use and high beef consumption ('luxury emissions') in industrialized countries. (Jasanoff, 1993: 35; see also 2010)2
Socially relevant production by technology

The second dimension of co-production concerns the production by technology qua its agentic capacity. Using Jasanoff's idiom, the core question is not only: who or what has agency? Instead, the question concerns, more specifically, the different ways in which agentic capacity impacts on the constitution of political order. Two sets of related questions are relevant here: what stems from agentic capacity, or how does technology have productive effects of its own that are neither reducible to discourse nor to a 1:1 relation of human intent through technology? And, secondly, how does that which technology produces then loop back onto social structures and ordering?

In a manner that does not discard the importance of discourse, co-production demands that we take seriously the ability of technology, and matter more broadly, to have effects that do not simply stem from discursive framings. Indeed, in our reading of co-production, the other dimension of the concept is about the production that happens "by" technology. In one of the examples used by Jasanoff (2004b: 13) to explain this dimension of the co-productionist idiom, she refers to the unintended effects of human consumption of "chlorofluorocarbons released from spray cans and air conditioners" on the ozone layer, and ultimately on the planet. Her study of the effects of chlorofluorocarbons encourages us to take seriously the ability of agentic technology to influence "social norms and hierarchies" as well as "the very terms in which we human beings think about ourselves and our position in the world" (Jasanoff, 2004a: 2). But, importantly, Jasanoff's take on the role of matter and agentic capacity is explicitly concerned with avoiding the idea that the appreciation of matter as having agentic capacity always already involves determinism. Indeed, the focus on discourse in the co-productionist framework testifies to this, by inviting a double focus on how both human and material forms of agency go into the production of science and society. It is this double focus that we wish to highlight in our reading of co-production.

Again, the crucial point from an IR perspective is that the analysis should not halt there. Indeed, the important point is to look at how technology's agentic capacity might affect the making of social global order. Such an analysis would highlight "how sociotechnical formations loop back to change the very terms in which we human beings think about ourselves and our positions in the world" (Jasanoff, 2004a: 2). Co-production thus invites us to explore how the agentic capacity of technology can feed back into the production of social order.

To unpack this a little more, let us turn to an example. Moving to this other dimension of the co-productionist idiom, we would need to ask in
what sense these technologies loop back onto the social in ways that stem from their vibrancy and agentic capacity. Following this call, other studies have shown how such technologies have constitutive effects of political significance. It has for example been highlighted how technologies give rise to a new type of refugee body or a new type of voter population – digitalized bodies whose political significance is inseparable from seemingly technical, but indeed highly political, questions of data access, data sharing, data processing, data retention, etc. (Jacobsen, 2015a, 2017). In the case of biometric voter registration, the very technology – the technical process whereby a unique body part of an individual is transformed into a digital template that can be stored, matched, shared, etc. – contributes to the production of digitalized biometric data of entire voting populations. This, in turn, changes an important parameter in the relationship between a state and its populations, as new questions of access to and securing of this sensitive data emerge. It also adds new dimensions to the relationship between external actors and states, for example, where external actors may find voter registers important vis-à-vis their pursuit of national security objectives. To appreciate this, it should be stressed that biometric data collection, matching, storing, etc. has been and still is central to the counter-terror efforts of most Western states. With biometric voter registration, it becomes possible for external actors to think of another state as "interveneable" and as relevant for reasons of security. Moreover, co-production invites detailed analysis of how biometric voter registration may also have other effects, for example, of how false matches prevent citizens from casting their votes, or how that which the technology produces (notably databases with uniquely identifiable biometric data of all voters in a given country) enables new forms of intervention and political disputes (Jacobsen, 2017). Concerning the issue of new types of political disputes, an example is that, in the case of Solomon Islands, the introduction of biometric voter registration has given rise to allegations that election candidates "have hired young men to buy voters' biometric voter ID cards" to ensure support (Hobbis and Hobbis, 2017: 114).

In sum, with this dimension of co-production that asks us to look at how technology has productive effects, it becomes possible to extend the analysis of agency to a study of how technologies have agentic capacity, which in turn gives rise to new questions and paradoxes around which agency is then debated. Such debates could emerge in relation to questions about how biometrics may empower or demobilize those who are registered, or about agency in the sense of responsibility vis-à-vis these newly emerged digitalized refugee subjects and their vulnerability. In short, paying attention to the agentic capacity of matter does not exclude appreciation of how productive processes at the level of discourse also matter.

The two dimensions of co-production entail an analytical choice. We do not claim that all technology or all materiality must first and foremost be studied with attention to questions of agentic capacity. Nor do we suggest
that the concept of agency will play out in all empirical analyses in the same way. What we have shown here is how concepts stemming from STS – like the idiom of co-production – can, indeed, be used when seeking to analyze the politics of technology in the context of global governance and international relations, and how such concepts enable attention to productive processes at the levels of both discourse and materiality. In other words, co-production, with its sensibility towards the looping-back effects of technology, seems to us especially suited for IR analyses looking at the broader implications of technological development.

One example through which to illustrate what a co-productionist analysis may look like is the work done on the politics of biometric refugee registration. In different ways, this work illustrates how a number of important issues that co-production helps bring attention to – like the emergence of the digital refugee body and attendant questions of access, and new types of insecurity (Jacobsen, 2015a) – would indeed be difficult to appreciate without combining an analysis of the production of digitalized refugee bodies with an analysis of how biometrics is produced, at the level of discourse, as a solution to other challenges on the radar of contemporary global security, notably related to the Global War on Terror (Amoore, 2006; Bell, 2013). And vice versa, analyzing the social production of biometrics without paying attention to how the technology itself gives rise to constitutive effects runs the risk not only of omitting how newly vulnerable biometric refugee bodies emerge, but also of neglecting the pressing need for international actors to attend to these vulnerabilities.

"Co-production" and critique of STS in IR

Adding to the above account of the analytical value of the notion of co-production in IR, it is important to note that co-production not only offers analytical resources through which to explore otherwise overlooked aspects of the significance of technologies to contemporary global politics. It also has another advantage, namely that it is an example of an STS concept through which it is possible to address some of the criticism often raised against "applying" STS in IR (e.g., Koddenbrock, 2015; Nexon and Pouliot, 2013). To highlight this aspect of co-production, this section illustrates how the notion of co-production offers analytical resources through which to address and move beyond the current critique.

One example of a point of critique which the notion of co-production offers resources to address is that STS is too – perhaps even exclusively – focused on micro-level issues, and not on bigger macro-level issues of key importance to IR (Nexon and Pouliot, 2013), such as global inequality and global power structures. Yet, given its explicit concern with how the production of scientific facts or authoritative technologies is intimately bound up with the production of social order – also at the global level – co-production is not only focused on the micro level. Indeed, as Jasanoff explains, key to a co-productionist analysis is a concern with how the
making of scientific order entangles with the making of social order (Jasanoff, 2004b). And, importantly for the argument that co-production is an example of an STS concept through which it is possible to address issues at the global level, Jasanoff and other STS scholars have already demonstrated how a co-productionist analysis can help call attention to the role of science in global debates about climate change, focusing on how the making of climate science may affect structures of global inequality. This means for instance that global inequality is a result of how, with "different modes of environmental knowledge making," different aspects of a given problem "are given priority," and specific "solutions are rendered thinkable" (Beck et al., 2017: 1064). Thus, a co-productionist analysis not only uncovers the micro-production of "reliable technology" or "scientific facts" but also examines how these productive processes at the micro level are tied in with the production of a particular social order. Below, we not only introduce different methodologies to study agency, but also show how these avenues allow for a study of agency within global politics.

Another point of critique to which the notion of co-production offers a productive approach is the discussion about the ontological primacy of productive forces at the level of either discourse or materiality (cf. Lundborg and Vaughan-Williams, 2015). Rather than opposing the two, co-production instead offers a more dynamic reading of the relationship between discourse and materiality. Other STS scholars have similarly argued for the importance of attending to both levels in one's analysis. In this vein, Lucy Suchman, Karolina Follis, and Jutta Weber call attention to, and indeed contribute to, a growing body of STS scholarship committed to examining "the material and discursive infrastructures that hold the logics of (in)security in place" (2017: 984). They conceptualize security as a "technoscience" (Suchman et al., 2017: 986), highlighting how actors, objects, and practices all contribute to a distinct security landscape. We can thus see that being attentive to technology can also mean acknowledging the complex ways in which materiality and discourse are interlinked – and that this distinction itself is the result of social processes (Hansen, 2010). The question of technology and agency is not reduced to the question of who or what possesses agency. Indeed, unpacking the notion of co-production can help to move current debates beyond dichotomous arguments about the ontological primacy of discourse or materiality by offering an analytical framework through which to focus instead on questions about how the interaction of discourse and materiality matters to the production of particular solutions/problems of global politics.

Through this brief discussion of two points of critique (micro and macro; discourse and materiality), it becomes possible to appreciate co-production as one example of an STS concept through which the dialogue between STS and IR can be moved forward. Co-production is thus not only a valuable tool for empirical analysis. Thinking about the relationship between IR and STS in such a way also prevents an unhelpful rhetoric of turns (the linguistic turn, material turn, practice turn, etc.), a rhetoric which increases the risk that important legacies are
forgotten and that the debate deadlocks into fruitless conversation about which turn got it more right (cf. Aradau and Huysmans, 2014: 599). Pitching a material turn against a linguistic turn, implicitly suggesting that one is more advanced than the other, risks negating how scholarly work often transgresses predefined boundaries, and it simplifies scholarly traditions (Woolgar and Lezaun, 2013). More importantly, the risk might be that important theoretical resources are forgotten and commonalities played down. That is why we introduce three avenues for researching agency in order to show the richness of possible analyses.
Researching agency: three avenues
Studying the co-production of agency requires conducting research into the way in which scientific and technological developments are embedded within the production of social order. Put differently, the idea is not only to look at who or what possesses agency, but at the ways in which agency emerges in webs of relations – and at how agency is thus not just the result of human intention nor of material structure alone. Studying agency can then be understood as part of a broader investigation into the impact of objects, material structures, and technologies on the processes that go into the making of political ordering. From this starting point, different entry points into the study of agency are possible. Here, we do not want to narrow down the study of agency to one particular research strategy but rather to show how a variety of approaches to the study of agency are compatible with an interest in the co-production of material and social dimensions of global politics.
A predominant way of researching agency in the context of technology is to use the idea of distributed agency or agencements, which locates agency not only in human actors but also in objects and the webs of relations in which actants are involved. This account is often put into opposition with classical accounts of agency in which the intentionality and reflexivity of human actors are considered to be the defining feature of agency. Ideas from relational sociology likewise consider agency as only ever emerging in webs of relations, but they are more sensitive to ideas such as identity and social structures. Starting from different ontological assumptions, relational sociology might also be helpful in understanding how the agentic capacity of technology is produced in practice. A third entry point is looking at the ascription of agency. Agency is then not something possessed by actants, but ascribed by a third party. The political relevance of ascribed agency comes to the fore once these ascriptions are challenged and controversies emerge about the alleged agentic capacities of technologies, such as drones or artificial intelligence.
Distributed agency
The idea of distributed agency is most commonly associated with actor-network theory (ANT). Indeed, one of the most provocative features of ANT
is its expansion of the concept of agency towards objects. From an ANT perspective, humans and non-humans alike can possess agency. Agency emerges when an actor-network is created and certain actants have the ability to alter this network. Marianne de Laet and Annemarie Mol (2000) show in their classical study how an object – the bush-pump – can be meaningfully conceptualized as an agent. They investigate "what it means to be an actor" (de Laet and Mol, 2000: 226; emph. in orig.). Their study shows in detail how the bush-pump works and how it is implemented across Zimbabwe and altered by local (human) actors to meet their specific needs. According to de Laet and Mol (2000: 226), the pump "acts as an actor." The authors follow the creation, implementation, and spread of the bush-pump, thereby not only detailing its technological features but also presenting how the bush-pump is embedded in a specific network. In this network, the bush-pump serves as an actor not only because it provides water, but also because it enacts a certain idea of health by providing clean water (de Laet and Mol, 2000: 232). From an IR perspective, their discussion of the way in which the bush-pump is linked to processes of nation-building is especially instructive. The pump is locally produced and an important part of Zimbabwe's infrastructure. "So while nation-building may involve writing a shared history, fostering a common cultural imagery or promoting a standard language, in Zimbabwe it also has to do with developing an infrastructure for water" (de Laet and Mol, 2000: 235). Through tracing the network in which the bush-pump is established, the authors show the ways in which the bush-pump serves as an actor.
Importantly, agency is here not only a (possible) characteristic of objects; the objects themselves are understood as multiple. Thus, the bush-pump as a specific technology does not have clear-cut boundaries but is fluid. Where the bush-pump starts and ends depends on the context and is not a "neutral" aspect of the object as such. This fluidity is a core idea for analyses of distributed agency. The boundaries of agents are never fixed, but are only constituted in the process in which a network is established.
The idea of distributed agency works well with a co-productionist framework since it allows an appreciation of how agency is co-produced in human–non-human webs of relations. Although certain strands of ANT are quite averse to a vocabulary focusing on society (Latour, 2005), a study of distributed agency is compatible with a co-productionist analysis that is interested in the production of social order. Studies of distributed agency need to be linked to broader questions of social order, such as legitimacy or inequality. Reading the idea of distributed agency back onto a framework of co-production might thus provide a valuable tool to engage with the politics of technology. De Laet and Mol's research can be seen as an example of how studying agency can be linked back to a study of broader patterns of social order such as nation-building. This implies that the researcher takes claims about agency not as pre-given but tries to disclose them in the process of inquiry (Callon, 1984). It is not decided a priori who or what possesses
agency, but this is precisely the question that needs to be answered through empirical research (Sayes, 2014). Researching agency from this perspective implies following how a network is stabilized, and inquiring who acts as a spokesperson and who or what serves as an actor. Ethnomethodology and open research are imperative for such a study of agency. Methodologically, the well-developed and refined vocabulary of ANT can help to guide one's analysis of distributed agency. However, when combined with an interest in co-production, further questions about the impact on social order would be asked. While many ANT studies do ask these questions (Law and Mol, 2008), from an IR perspective it might be useful to put them even more at the heart of analysis.
Relational and process-oriented approaches
The concept of agency is also central to more classical sociological analysis. Here, agency is reserved to demarcate intentional and reflexive behavior. It might sound counter-intuitive to discuss these approaches in a volume on technology and agency. However, an object-centered approach can learn much from the sensitivities developed in relational sociology. Emirbayer and Mische (1998) provide a concept of agency that is embedded within a larger tradition of pragmatist theorizing. In particular, their thinking about how agency needs to be understood in its temporal dimension provides an important addition to studies of distributed agency within STS-inspired research. According to the authors, the "agentic dimension of social action can only be captured in its full complexity, […], if it is analytically situated within the flow of time" (Emirbayer and Mische, 1998: 963). Studying agency means being attentive to the historically evolved structures in which action takes place. Agents do not emerge out of the blue but are embedded in historically grown structures. Against rational choice theory and in the tradition of pragmatism, the authors highlight the need to be attentive to the creative and open-ended aspects of agentic actions and not only to their habitual and routine aspects (see also Friedrichs and Kratochwil, 2009). In that sense, the pragmatist conception of agency is in line with pleas for relational and process-oriented research in IR. Attempts by relational sociology to counter moves of reification, as posited by Jackson and Nexon, are in line with a co-productionist account of agency since both allow for opening up the study of agency to a focus on context and structures (Jackson and Nexon, 1999). This means that the focus lies on how agency plays out in social processes. In line with an interest in providing a sociological analysis of agency, Braun et al. focus their recent account of agency on practices. Although they build on ANT and poststructuralism, methodologically the focus lies on how agency emerges in and through practices (Braun et al., 2018). Sebastian Schindler (2014) has shown how distinctions usually perceived as theoretical, such as structure and agency, are part of contestations at the level of practice. In his empirical
study on International Organizations, he examines how ideas about who or what possesses agency (states, individuals) are not only subject to theoretical debates but feed back into disputes on the level of practice. Agency is thus not a pre-given feature of certain entities but something that is performed and open to contestation in practice.
This relational perspective speaks to the historical dimension that is present in many studies of co-production. How technology and the political co-produce each other can only be seen from a historical perspective. Understanding how agents emerge requires sensitivity to a temporal perspective that also looks at the structures in which agents are embedded. Similar to ANT, a relational account would look at how agents are constituted in relations. Agency is constituted in practice, and agents are thus an outcome of social processes rather than pre-constituted entities. For an analysis of world politics, this perspective is valuable since it can trace how, for instance, standards and dominant technologies are developed in an institutional setting. The question of technology and agency is here not about whether technology possesses agency but about how agents impact the development of technology. In that way, such a perspective is much more compatible with a classical political science analysis. Politics in this context is thus much more bound to institutions and formal political actors. Such an analysis would, in contrast to the previous approach, focus more on the processes and institutions involved in developing and implementing technology and try to tease out how agency is established (or challenged) in these processes.
Ascribed agency
The last possibility that we discuss here is that of looking at how agency is ascribed to technology. One can understand the challenges posed to the concept of agency by, for instance, artificial intelligence (drones, robots, automatic weapons, and algorithms) not only as a challenge to the concept of agency as such, but as controversies in which different opinions about what it means to be a (human) agent clash. Thus, the claim is less an ontological one than a methodological one, in which agency becomes another category of possible contestation. The object of research is then not the technology and its (possible) agentic capacities but the question of how these technologies are described as possessing agency. This is in line with Erik Ringmar's (2018) critique of post-structural and new materialist conceptions of agency. According to him, these approaches miss the way in which agency is constituted through performances that cannot be understood by merely looking at an actor-network. What is important to understand is how imaginaries and collective ideas about agency (of, for instance, the state) bring an actor into being. Ringmar focuses on the theatrical aspects of agency, since agency is unthinkable without its representation. From this perspective, ideas about the agentic capacity of drones or robots are not a sign of the expansion of agency (do robots become human-like?).
These debates about the agency of robots can be re-read as a sign of political struggles about what it means to be human. Véronique Pin-Fat (2013) reads these contestations not as ontological statements about humanity. Challenges posed by AI can also be understood as a challenge to what we understand to be human. Controversies around these new technologies can also be conceptualized as a change in the language used to describe, conceptualize, and act on the basis of particular understandings of what distinguishes humans from non-humans. What it means to be an agent, and in which way technology can be understood as being endowed with (human) agency, is the result of a process in which agency is ascribed to entities. In contrast to the idea of distributed agency, studying the ascription and contestation of agency does not rely on strong ontological assumptions about actors (or actants). Agency as a political phenomenon is not something inherent in actants but something that is ascribed, contested, and challenged in discursive practices. Anna Leander (2013), for instance, shows in her study on drones how people ascribe agency to drones. Arguing for a distinct character of "technological agency," Leander (2013: 818) shows how drones are not only treated as having agentic capacities, but also how this changes the field of legal expertise. Importantly, drones are ascribed agency by humans and thus come to impact the field of legal expertise as agents in their own right.
Methodologically, researching the ascription and contestation of agency demands that the researcher look at controversies about the agentic character of technology. Here it might be interesting to see how controversies about agency emerge, but also how they are settled. The study of controversies can be traced back to the early days of the social study of technology. Trevor Pinch and Wiebe Bijker (1987) showed how to study controversies around technological development by focusing on relevant groups and their problem definitions. From this perspective, attention has been paid to how controversies emerge, but also to how they are settled, to the work that goes into (temporarily) closing particular controversies, and to the socio-political implications thereof.3 These sensitivities can be translated into a study of agency in which the ascription and contestation of agency becomes the core interest. The challenge for IR scholars will be to embed these controversies in a broader political setting in order to understand the implications for politics.
Conclusion
Researching technology and materiality in IR has led to the translation of STS concepts into the study of world politics. The concept of agency has been key in many engagements with technology in IR. As we have noted above, agency cannot be sufficiently understood if it is treated in a narrow way that neglects the role of technology (or materiality more generally, cf. Connolly, 2013). IR scholars have thus looked to STS and drawn on one of its core ideas, namely that of acknowledging the vibrant, fluid, and agentic character of technology.
In this contribution, we introduced Sheila Jasanoff's concept of co-production. Her attention to how technological and scientific development loops back onto political order is a valuable addition to the canon of IR. We not only introduced the concept of co-production, but also showed how her work is able to reply to some of the critique leveled against STS. With this concept in place, we identified different methodological paths that allow us to research agency. Firstly, distributed agency, as it is prominent in ANT approaches, looks at how objects can have agency in actor-networks. Secondly, we identified relational sociology as a fruitful literature that looks at how (human) agency emerges in the interplay of structure and agency, and which is specifically valuable for providing an analysis that is more sensitive to the temporal dimension of agency. Lastly, we argued that looking at how agency is ascribed and challenged provides another avenue for understanding the politics of agency. Agency is thus not a feature of actants, but something that is ascribed in discourse. Debates about the expansion of agency should not only be understood as ontological debates but also as an expansion of a certain language game of what it means to be human (cf. Pin-Fat, 2013). Using co-production as a leading concept for studying agency in these various ways allows us to look not only at who or what possesses agency, but also at the ways in which agency emerges in webs of relations. Viewed as such, agency is neither just the result of human intention nor of material structure alone.
The aim cannot be to understand agency from all three of these perspectives at the same time. Not all methodologies should (or could) be combined in one study. However, our aim was to show that studying technology and agency might require (and allow for) a variety of different methodologies. By using co-production as a conceptual lens, we highlighted the need to link the study of agency back to broader questions of social ordering and to go beyond single case studies. The idiom of co-production allows for all three of these strategies, each of which can highlight different aspects of agency and possibly flesh out distinct political effects. The study of agency should not be reduced to the question of who or what possesses agency but should be interested in the wider societal setting in which controversies about agency are located. This chapter is a first step in this direction.
Notes
1 We therefore do not provide a definition of agency. The different methodologies we introduce will be based on rather different concepts of agency. The broad definition of agency provided in the introduction is thus also our vantage point. Ultimately, one cannot provide a definition of agency independently from the empirical study at hand.
2 In "A New Climate for Society," Jasanoff (2010: 248) distinguishes between "subsistence" and "luxury" emissions: "Carbon pricing, they proposed, should distinguish between subsistence and luxury emissions, the former reflecting the necessities of the poor, the latter the whims of the rich."
3 See for example Martin and Richards (1995) for an STS account of how such "controversies often have profound social, political and economic implications."
References
Abdelnour S and Saeed A M (2014) Technologizing Humanitarian Space: Darfur Advocacy and the Rape-Stove Panacea. International Political Sociology 8(2): 145–163.
Agarwal A and Narain S (1991) Global Warming in an Unequal World: A Case of Environmental Colonialism. New Delhi: Centre for Science and Environment.
Agius C (2017) Ordering without Bordering: Drones, the Unbordering of Late Modern Warfare and Ontological Insecurity. Postcolonial Studies 20(3): 370–386.
Amicelle A, Aradau C and Jeandesboz J (2015) Questioning Security Devices: Performativity, Resistance, Politics. Security Dialogue 46(4): 293–306.
Amoore L (2006) Biometric Borders: Governing Mobilities in the War on Terror. Political Geography 25(3): 336–351.
Aradau C (2010) Security that Matters: Critical Infrastructure and Objects of Protection. Security Dialogue 41(5): 491–514.
Aradau C and Huysmans J (2014) Critical Methods in International Relations: The Politics of Techniques, Devices and Acts. European Journal of International Relations 20(3): 596–619.
Austin J L (2017) We Have Never Been Civilized: Torture and the Materiality of World Political Binaries. European Journal of International Relations 23(1): 49–73.
Bachmann J and Schouten P (2018) Concrete Approaches to Peace: Infrastructure as Peacebuilding. International Affairs 94(2): 381–398.
Barry A (2013) The Translation Zone: Between Actor-Network Theory and International Relations. Millennium – Journal of International Studies 41(3): 413–429.
Barry A and Walters W (2003) From EURATOM to "Complex Systems": Technology and European Government. Alternatives 28(3): 305–329.
Beck S, Forsyth T, Kohler P M, Lahsen M and Mahony M (2017) The Making of Global Environmental Science and Politics. In Felt U, Fouché R, Miller C A & Smith-Doerr L (eds.) The Handbook of Science and Technology Studies. Cambridge: MIT Press, 1059–1086.
Bell C (2013) Grey's Anatomy Goes South: Global Racism and Suspect Identities in the Colonial Present. Canadian Journal of Sociology 38(4): 465–486.
Bennett J (2010) Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press.
Best J and Walters W (2013) "Actor-Network Theory" and International Relationality: Lost (And Found) in Translation. International Political Sociology 7(3): 332–334.
Braun B, Schindler S and Wille T (2018) Rethinking Agency in International Relations: Performativity, Performances and Actor-Networks. Journal of International Relations and Development, online first: 10.1057/s41268-018-0147-z.
Callon M (1984) Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay. The Sociological Review 32(1): 196–233.
Connolly W E (2013) The 'New Materialism' and the Fragility of Things. Millennium – Journal of International Studies 41(3): 399–412.
Coole D (2013) Agentic Capacities and Capacious Historical Materialism: Thinking with New Materialisms in the Political Sciences. Millennium – Journal of International Studies 41(3): 451–469.
Coole D and Frost S (eds.) (2010) New Materialisms: Ontology, Agency, and Politics. Durham/London: Duke University Press.
de Laet M and Mol A (2000) The Zimbabwe Bush Pump: Mechanics of a Fluid Technology. Social Studies of Science 30(2): 225–263.
Emirbayer M and Mische A (1998) What Is Agency? American Journal of Sociology 103(4): 962–1023.
Friedrichs J and Kratochwil F (2009) On Acting and Knowing: How Pragmatism Can Advance International Relations Research and Methodology. International Organization 63(4): 701–731.
Hansen A D (2010) Dangerous Dogs, Constructivism and Normativity: The Implications of Radical Constructivism. Distinktion: Journal of Social Theory 11(1): 93–107.
Hobbis S K and Hobbis G (2017) Voter Integrity, Trust and the Promise of Digital Technologies: Biometric Voter Registration in Solomon Islands. Anthropological Forum 27(2): 114–134.
Hönke J and Cuesta-Fernández I (2017) A Topolographical Approach to Infrastructure: Political Topography, Topology and the Port of Dar es Salaam. Environment and Planning D: Society and Space 35(6): 1076–1095.
Jackson P T and Nexon D H (1999) Relations before States: Substance, Process and the Study of World Politics. European Journal of International Relations 5(3): 291–332.
Jacobsen K L (2015a) Experimentation in Humanitarian Locations: Iris Registration & Repatriation of Afghan Refugees. Security Dialogue 46(2): 144–164.
Jacobsen K L (2015b) The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity. London/New York: Routledge.
Jacobsen K L (2017) On Humanitarian Refugee Biometrics and New Forms of Intervention. Journal of Intervention and Statebuilding 11(4): 529–551.
Jasanoff S (1993) India at the Crossroads in Global Environmental Policy. Global Environmental Change 3(1): 32–52.
Jasanoff S (2004a) The Idiom of Co-Production. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 1–12.
Jasanoff S (2004b) Ordering Knowledge, Ordering Society. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 13–45.
Jasanoff S (ed.) (2004c) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge.
Jasanoff S (2010) A New Climate for Society. Theory, Culture & Society 27(2–3): 233–253.
Koddenbrock K (2015) Strategies of Critique in International Relations: From Foucault and Latour Towards Marx. European Journal of International Relations 21(2): 243–266.
Latour B (1987) Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge: Harvard University Press.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Law J and Mol A (2008) Globalisation in Practice: On the Politics of Boiling Pigswill. Geoforum 39(1): 133–143.
Leander A (2013) Technological Agency in the Co-Constitution of Legal Expertise and the US Drone Program. Leiden Journal of International Law 26(4): 811–831.
Lundborg T and Vaughan-Williams N (2015) New Materialisms, Discourse Analysis, and International Relations: A Radical Intertextual Approach. Review of International Studies 41(1): 3–25.
Mac Ginty R (2017) A Material Turn in International Relations: The 4x4, Intervention and Resistance. Review of International Studies 43(5): 855–874.
Martin B and Richards E (1995) Scientific Knowledge, Controversy and Public Decision Making. In Jasanoff S, Markle G E, Petersen J C & Pinch T J (eds.) Handbook of Science and Technology Studies. Newbury Park: Sage, 506–526.
Nexon D H and Pouliot V (2013) "Things of Networks": Situating ANT in International Relations. International Political Sociology 7(3): 342–345.
Pinch T J and Bijker W E (1987) The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other. In Bijker W E, Hughes T P & Pinch T J (eds.) The Social Construction of Technological Systems. Cambridge/London: MIT Press, 17–50.
Pin-Fat V (2013) Cosmopolitanism and the End of Humanity: A Grammatical Reading of Posthumanism. International Political Sociology 7(3): 241–257.
Ringmar E (2018) The Problem with Performativity: Comments on the Contributions. Journal of International Relations and Development, online first: 10.1057/s41268-018-0159-8.
Rothe D (2017) Seeing like a Satellite: Remote Sensing and the Ontological Politics of Environmental Security. Security Dialogue 48(4): 334–353.
Salter M B (ed.) (2015) Making Things International 1: Circuits and Motion. Minneapolis: University of Minnesota Press.
Sayes E (2014) Actor–Network Theory and Methodology: Just What Does It Mean to Say that Nonhumans Have Agency? Social Studies of Science 44(1): 134–149.
Schindler S (2014) Man versus State: Contested Agency in the United Nations. Millennium – Journal of International Studies 43(1): 3–23.
Squire V (2015) Reshaping Critical Geopolitics? The Materialist Challenge. Review of International Studies 41(1): 139–159.
Suchman L, Follis K and Weber J (2017) Tracking and Targeting: Sociotechnologies of (In)Security. Science, Technology, & Human Values 42(6): 983–1002.
Walters W (2014) Drone Strikes, Dingpolitik and Beyond: Furthering the Debate on Materiality and Security. Security Dialogue 45(2): 101–118.
Woolgar S and Lezaun J (2013) The Wrong Bin Bag: A Turn to Ontology in Science and Technology Studies? Social Studies of Science 43(3): 321–340.
Wynne B (1989) Sheepfarming after Chernobyl: A Case Study in Communicating Scientific Information. Environment: Science and Policy for Sustainable Development 31(2): 10–39.
3
Configuring warfare
Automation, control, agency
Matthias Leese
Let us begin with a little scenario. An Unmanned Aerial Vehicle (UAV) – a "drone" – flies over a so-called operational zone, a territory in which armed conflict takes place. The UAV carries a couple of air-to-surface missiles, and it is equipped with a number of sensors (high-resolution optical cameras, heat signature diagnostics, infrared sensors, radar, etc.). And while it follows a pre-set course for the purpose of intelligence collection, the UAV is, on the basis of the data that it collects through its sensors, able to dynamically interact with its environment. As the drone follows its course, it detects an object on the ground. Using live algorithmic analyses of the incoming data that the UAV collects on the object, it classifies the object as a tank. Notably, further analysis reveals that it is an enemy tank. Considering all available information, the UAV comes to the conclusion that the enemy tank poses a threat and therefore needs to be taken out. Subsequently, it computes the optimal parameters for engagement (e.g., the best angle, the amount of force to be applied) and goes on to destroy the tank. No human decision-making was involved; the UAV acted on its own.
This is a simplified, yet rather common example of how, against the backdrop of recent developments in algorithmic data processing, combined with advances in robotics and engineering, the future of warfare is imagined and discussed under the headline of "Lethal Autonomous Weapons Systems" (LAWS).1 Based on the capacity to make use of a variety of sensors to gather data about their surroundings, and then algorithmically process such data for the sake of identification, tracking, prioritizing, cueing, and engagement of targets, future weapons systems could indeed, at least in theory, be able to make decisions about the use of lethal force and execute such decisions without human intervention. The US Department of Defense (2012: 13) in this sense defines a LAWS as a "system that, once activated, can select and engage targets without further intervention by a human operator." The possibility of such future "killer robots" (Human Rights Watch, 2012), as reflected by the regulatory discussions at the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva, speaks to several core concerns of International Relations (IR): the question of the legitimate use of force; possible modes of regulation of warfare and violence through International Humanitarian Law (IHL); the possibility of (preventive) arms control against the backdrop of dual-use technologies; and not least the question of who or what can be considered an actor in international contexts.
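To make the structure of the opening scenario explicit, the following sketch renders its decision chain in Python. It is a deliberately naive illustration under invented assumptions – the class names, the confidence threshold, and the single poses_threat predicate are hypothetical constructs, not drawn from any real weapons system. The point it makes is that every step, including the threat judgment itself, is executed as code, with no human anywhere in the loop.

```python
# Hypothetical sketch of the decision chain in the opening scenario.
# All names, thresholds, and data structures are illustrative inventions.
from dataclasses import dataclass


@dataclass
class SensorTrack:
    object_class: str   # output of an (assumed) onboard classifier
    affiliation: str    # "friendly" | "enemy" | "unknown"
    confidence: float   # classifier confidence in [0, 1]


def poses_threat(track: SensorTrack, threshold: float = 0.9) -> bool:
    # The scenario compresses a contested judgment into a single predicate:
    # classification, affiliation, and threat assessment all happen in code.
    return (track.object_class == "tank"
            and track.affiliation == "enemy"
            and track.confidence >= threshold)


def engage(track: SensorTrack) -> str:
    # Stand-in for computing engagement parameters (angle, amount of force).
    return f"engaging {track.affiliation} {track.object_class}"


# At no point does a human enter the loop -- this is precisely what the
# LAWS debate problematizes.
detected = SensorTrack(object_class="tank", affiliation="enemy", confidence=0.97)
if poses_threat(detected):
    print(engage(detected))
```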
The key concern in the current debates about the future of warfare is that machines could at some point in the future escape human control and undercut assumptions of human morality, dignity, justice, and law that are crucial in the international regulation of combat through IHL (Asaro, 2013). In order to foreclose the possibility that weapons systems could ever autonomously execute lethal force, the search for ways in which human control could be maintained vis-à-vis LAWS is thus key. As Heyns (2016: 13) summarizes the focal point: "If there is such control, there are not any special objections to AWS; if there is not such control, there is a strong case to be made that such weapons and their use should be prohibited." In light of larger questions of technology and agency in international politics, the argument I put forward here is that the notion of control can serve as an analytical angle to explore the ways in which relations between humans and machines become defined such that technology remains controllable.
The notion of control speaks closely to issues that are already pertinent in existing military weapons systems today. Many states do in fact have highly automated systems at their disposal, some of which require very little human action except for approving or disapproving pre-selected engagement preferences, often paired with extremely short time periods during which a human decision must come about (e.g., Scharre and Horowitz, 2015; Dunn Cavelty et al., 2017). Anti-missile defense systems, for example, critically hinge on the capability to algorithmically process sensor data within extremely short time frames, and the decision to engage an incoming missile must be taken within minutes or less. Even if the eventual decision whether to engage or not remains with a human operator, time criticality often leaves severely limited options for the operator to double-check and possibly challenge suggestions for action offered by the system (Hawley et al., 2005). Such constellations therefore tend to produce so-called "automation bias" (Cummings, 2004), i.e. the enhanced probability that a human operator will simply approve of a suggestion made by a system rather than challenge the logic and accuracy of its computations.
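A minimal sketch can illustrate the time-critical constellation just described: the system pre-selects an engagement and the operator holds only a short veto window, a constellation that the Levels of Automation discussed later in this chapter would place in the middle of the scale. All timings and interface names below are invented for illustration and do not describe any actual system.

```python
# Hypothetical sketch of a veto-window constellation: the system has already
# pre-selected an engagement; the human may only interrupt it in time.
import time

VETO_WINDOW_SECONDS = 5.0  # illustrative; real decision windows vary by system


def operator_veto(prompt: str, deadline: float) -> bool:
    """Stand-in for an operator console; returns True if a veto arrives in time.

    The operator is simulated as silent here, which is exactly the condition
    under which "automation bias" becomes consequential: absent an active
    veto, the pre-selected option simply goes through.
    """
    print(prompt)
    while time.monotonic() < deadline:
        time.sleep(0.1)  # poll a (simulated) console that never answers
    return False


deadline = time.monotonic() + VETO_WINDOW_SECONDS
if not operator_veto("Engage incoming track? (veto within 5s)", deadline):
    print("No veto received -- executing pre-selected engagement")
```

The design choice worth noting is that inaction defaults to execution: the burden of intervention lies entirely with the operator, under time pressure, against a recommendation the system has already computed.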
It is thus important to take a closer look at the interplay of system functions and human action in military weapons systems if we seek to attain a more nuanced understanding of what could be at stake in future warfare. Indeed, as Elish (2017: 1122) argues with regard to practices of UAV warfare, it is only "by paying attention to the divisions of labor and reconfigurations of human agency as it is transposed within human-machine networks [that] we can begin to shed light on the everyday and often invisible structures of contemporary war." An analysis of automation and control in existing military weapons systems then brings to the fore how the notion of control structures the relations and interactions between humans and machines within socio-technical systems. In other words, it directs attention to the question of how humans and machines become drawn together in specific constellations, and how their interactions come to matter for key concerns of the international.
In order to carve out how the notion of control becomes designed into military weapons systems, this chapter builds on Suchman's (2007, 2012) concept of "configuration." Suchman suggests studying the ways in which specific cultural imaginaries are built into socio-technical systems via engineering and design practices, and thereby engender specific relations between human operators and system functions. Following Suchman's analytic suggestion, the chapter turns to conceptual work on workload distribution between humans and machines and highlights the complexity and multiplicity of design options in military systems. Control, from such an angle, must then not be understood as a fixed and stable relationship between operator and system, but as something that is produced and reproduced through heterogeneous modes of interaction between humans and system functions.
Agency and configuration
In order to understand who can act in this world and how action comes about, scholars from Science and Technology Studies (STS) and New Materialism have developed concepts such as "mangle" (Pickering, 1995), "co-production" (Jasanoff, 2004), "actant" (Latour, 2005), or "intra-action" (Barad, 2007) to analyze the ways in which action comes into being. What these approaches have in common is the assumption that agency is not something that could be a priori attributed to someone (or something), but that agency emerges through the interactions of humans, objects, artifacts, and matter itself. This assumption fundamentally rests on an understanding of the world as ontologically "flat," thus tearing down the presupposed primacy of the human subject within the world that has long dominated modernist liberal philosophy and social theory. Instead, STS and New Materialism introduce a symmetrical reading of the relationship between humans and non-humans and draw attention to the relations between heterogeneous actors and the ways in which they create agency in entangled, distributed, and at times messy ways. More recently, these ideas have increasingly resonated within IR, as IR scholars have paid renewed attention to the ways in which humans and non-humans act or work together, either deliberately or coincidentally, and how such productive encounters play out with respect to questions of the international (e.g., Barry, 2013; Best and Walters, 2013; Bueger, 2013; Connolly, 2013; Acuto and Curtis, 2014; Mayer et al., 2014; Salter, 2015; McCarthy, 2018). Building on a relational understanding of agency in IR (Jackson and Nexon, 1999), contributions have offered in-depth explorations of how materiality and non-human elements come to matter in contexts of international relevance, such as critical infrastructure protection (e.g., Collier and Lakoff, 2007; Aradau, 2010), airport and aviation security (e.g., Bellanova and Duez, 2012; Schouten,
2014; Valkenburg and van der Ploeg, 2015), or drone warfare (e.g., Leander, 2013; Walters, 2014). This literature has been particularly productive in advancing an understanding of international political action that takes into account the role of technologies and their relations and interactions with humans.
Contrary to powerful academic analyses of agency as distributed, entangled, and emergent, there is however a sustained insistence from politicians, lawyers, or ethicists that agency should remain tied to the idea of the conscious human subject – an insistence that speaks to the ways in which legal and moral categories of accountability and responsibility rest on the presupposition of volition and free will. The notion of human control vis-à-vis LAWS can in this sense be understood as a political mode of restraining the possibility of non-exclusively human agency – and instead subjecting the machine to exclusively human agency. To analyze how such control is realized within socio-technical systems, I suggest turning to Suchman's (2007, 2012) concept of "configuration."
Configuration, for Suchman (2012: 49; emph. in orig.), refers to the "question of how humans and machines are figured together – or configured – in contemporary technological discourses and practices, and how they might be reconfigured, or figured together differently." Conceiving of technology as an ontologically symmetric assemblage composed of heterogeneous human and non-human elements, configuration highlights the importance of relations between humans and non-humans and how these relations come about and produce meaning. At the same time, it draws attention to the fact that these relations are not a given but that they are constructed – and thereby relates them back to cultural imaginaries of what technology should look like and how it should be positioned vis-à-vis humans and society. In Suchman's perspective, which is informed by STS as much as it is by design studies, it is clear that technology is involved in the production of agency. For her, interaction between humans and systems "respecif[ies] sociomaterial agency from a capacity intrinsic to singular actors to an effect of practices that are multiply distributed and contingently enacted" (Suchman, 2007: 267). Such a view on agency is very much in line with the views developed by Pickering, Jasanoff, Latour, Barad, and others who have argued for a rearticulation of the presupposed boundary between the human world and its material counterpart. For Suchman, this is however not the main point. For her, the notion of configuration rather draws attention to the ways in which interaction is structured in specific forms through design practices, which are in turn informed by wider societal ideas about what technology should look like, what it should be doing, and how it should be controlled. This is what she calls the "cultural imaginaries" (Suchman, 2007) that inform and underpin engineering and design practices.
In order to understand how such cultural imaginaries come into being, for Suchman it is pertinent to take the presupposed divide between the human and the non-human into account, as the idea of anthropocentrism
has to a large extent informed the construction of contemporary political, legal, and moral categories. Paying attention to this "modern constitution" (Suchman, 2007: 261) does not, of course, mean that we should turn back to a conceptualization of agency as an exclusively human attribute, but that we should instead analytically foreground the ways in which agency, as an effect that is produced through interaction, becomes enabled or constrained through deeply engrained ontological imaginaries of what agency is supposed to look like and who should have it. In the words of Suchman and Weber (2016: 100), it is only through this analytical detour that we can then "explore the ways in which our agencies are entangled with, and dependent upon, the technological world today and to analyse our particular agencies within the assemblages that we configure and that configure us."
The "modern constitution" as a background canvas is however not only key for an analysis of the specific forms of agency that are being produced through particular configurations of human-machine relationships. At the same time, it epitomizes the contradiction between theory-building around symmetry and flat ontologies on the one hand, and the ongoing reproduction of an ontological boundary between the human and the material on the other hand. As Suchman (2007: 285) summarizes the troubling relationship between academic theory-building and the ways in which "real-world" technologies are being designed:

The legacy of twentieth-century technoscience posits autonomous agency as a primary apparatus for the identification of humanness and takes as a goal the reiteration of that apparatus in the project of configuring humanlike machines. Initiatives to develop a relational, performative account of sociomaterial phenomena indicate a different project.

Conceiving of this contradiction not as a question of who is right and who is wrong, however, but rather as a productive analytical site, Suchman argues that the presupposed boundary between humans and machines can in fact serve to understand the ways in which particular configurations of technology take shape. The establishment and maintenance of an ontological boundary between operator and system, for Suchman (2007: 285), is something that requires active work, as "boundaries between humans and machines are not naturally given but constructed in particular historical ways and with particular social and material consequences." The continued reproduction of the boundary between humans and machines in this sense finds its expression in the only seemingly contradictory design imaginary that technological systems should ultimately "become human" (i.e. that they should look and act like humans). Machine autonomy, as currently discussed in the context of future warfare, would in this sense present the ultimate reification of the divide between the human and the material, as it would establish a true counterpart to human autonomy and thus do away with the muddy realm
of interaction within socio-technical systems. In other words, until machines really do become autonomous in the sense that they can make their own morally informed decisions, they must still be subjected to human control.2
For Suchman, an analysis of human-computer interaction through the boundary work that goes into the maintenance of the divide between the human and non-human world must then foreground the engineering and design choices that are predicated upon the imaginary of the machine-as-human. In order to understand how the larger discursive trajectory of the autonomous human vs. the soon-to-be-autonomous machine-as-human is sustained and reinforced, Suchman suggests turning our attention to the ways in which engineers and designers configure socio-technical systems such that they adhere to cultural understandings of what a machine should look like and how the relations to its users or operators should be structured. As she has it,

analyses […] that describe the active role of artifacts in the configuration of networks inevitably seem to imply other actors standing just offstage for whom technologies act as delegates, translators, mediators; that is, human engineers, designers, users, and so on. (Suchman, 2007: 270)

In other words, in order to grasp the analytical productivity inherent in the apparent contradiction between socio-material, flat-ontology understandings of technology and the actual design practices that uphold modernist anthropocentrism, we must enter the realm of engineering and design and take into account "that the persistent presence of designers-users in technoscientific discourse is more than a recalcitrant residue of humanism: that it reflects a durable dissymmetry among human and nonhuman actors" (Suchman, 2007: 270). The idea of configuration in this sense closely "resonates [with] the everyday language of information systems engineering" (Suchman, 2012: 51). Suchman thus advises us to pay attention to how engineers and designers think about the interface between humans and computers. This is an important cue, as it facilitates the analysis of how configurations come into being. Engineering and design are mostly problem-oriented fields that seek to provide elegant and efficient solutions for pre-specified objectives. For Suchman, most pertinent in this regard is the field of Human-Computer Interaction (HCI), which has engaged questions of automation, cognition, trust, and communication between humans and computer systems for a long time. Conceptual work from HCI and its influence on the engineering and design of military weapons systems thus presents us with an opportunity to understand the boundary work that goes into the construction of socio-technical systems and thereby resonates with the cultural imaginaries that are instructive throughout engineering and design processes of (military) technology. In Suchman's (2012: 48; emph. in orig.) words, "configuration in this
sense is a device for studying technologies with particular attention to the imaginaries and materialities that they join together." Throughout the rest of the chapter, we will thus look into the imaginary of control against the backdrop of automation and its socio-technical configuration in military weapons systems.
Drawing the boundary: automation and control
The notion of human control vis-à-vis notions of Artificial Intelligence (AI), sensor data, and robotics forcefully illustrates Suchman's point about boundary work in the configuration of socio-technical systems. With regard to military weapons systems, human control usually becomes qualified by the amendment that such control would need to be "meaningful," and there are numerous attempts to define what meaningful human control in weapons systems with automated functions would need to entail (for a good overview of the debates, see Horowitz and Scharre, 2015; Rosert, 2017). The NGO Article 36, for example, highlights "timely human judgment and action" based on "accurate information for the user" (Article 36, 2016: 4); the International Committee of the Red Cross sees a need for "knowledge and accurate information about the functioning of the weapon system and the context of its intended or expected use" (International Committee of the Red Cross, 2016: 83); and the US Department of Defense foregrounds the necessity for "operators to make informed and appropriate decisions" (Department of Defense, 2012: 2).
What these and other definitions of meaningful human control have in common is that they revolve around the relationship between a system and its operator. They conceptualize this relationship as one that is coined by a general condition of information asymmetry: as the human operator does not have cognitive access to the data that have been collected by multiple sensors and that have already been algorithmically processed, the operator needs to be informed by the system about the ways in which preferences or recommendations have been computed. Only then, according to the assumption that underpins the idea of meaningful human control, could the user come to an informed and responsible decision on the use of lethal force. The user in this sense needs to know on what data basis the system has operated, whether other options for action might be available, what their respective anticipated consequences would be, and so on. The system is here envisioned as a support tool that carries out certain tasks more quickly and reliably than a human ever could, and collects and processes information that is inaccessible to humans in the first place. While the system possesses an informational edge based on its technical capacities, it nevertheless lacks the fundamental human capabilities that are required to make a decision that is not based on facts alone.
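One way to picture what the cited definitions demand of a system is a recommendation that carries its own rationale: the data basis, the alternatives, and their anticipated consequences, exposed for the operator to review. The sketch below is illustrative only; the field names and example values are invented, not derived from any of the documents quoted above.

```python
# Illustrative sketch of a data structure through which a system could expose
# the basis of its recommendation to the operator. Field names are invented.
from dataclasses import dataclass, field


@dataclass
class Option:
    action: str
    anticipated_consequence: str


@dataclass
class Recommendation:
    preferred: Option
    alternatives: list[Option] = field(default_factory=list)
    data_basis: list[str] = field(default_factory=list)  # sensors/inputs used


rec = Recommendation(
    preferred=Option("engage", "target destroyed, low collateral estimate"),
    alternatives=[Option("track only", "threat persists, no force used")],
    data_basis=["optical camera", "infrared", "radar return"],
)

# The operator, not the system, draws the conclusion:
print(f"System prefers '{rec.preferred.action}' based on {rec.data_basis}; "
      f"{len(rec.alternatives)} alternative(s) available for review")
```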
The need for control, and the ontological boundary between the human and the machine that it is predicated upon, speak closely to the existing legal system and its mechanisms for establishing accountability and responsibility for the use of lethal force. IHL, as Asaro (2013: 700) summarizes, "explicitly requires combatants to reflexively consider the implications of their actions, and to apply compassion and judgement in an explicit appeal to their humanity." In order to determine accountability and responsibility as legal categories, law thus presupposes consciousness and deliberate decision-making in the sense of "the ability of a 'self' to choose the principles that 'rule' its conduct or indeed ignore them on the basis of its own intrinsic valuing" (Welsh, 2018: 13). These capacities are, in the tradition of the "modern constitution" and the anthropocentrism that it purports, seen as exclusively inherent in humans – and fundamentally not in machines, no matter how "intelligent" or "smart" they allegedly may be. Thus, from an ethical-legal point of view, a boundary between the human and the machine must be established and upheld within socio-technical systems in order not to blur categories of accountability and responsibility and the mechanisms of sanctioning and social justice that they enable.
In the current debates about the possible autonomy of military weapons systems, numerous commentators have put forward doubts that, against the vision of future "killer robots," this clear-cut boundary between humans and machines could be upheld for much longer, and have thus called for a preventive ban of such systems (for prominent accounts, see for example Human Rights Watch, 2012; Asaro, 2013; Sharkey and Suchman, 2013). It is at this pinnacle of regulatory debates that the concept of configuration can help us understand the notion of "control" as the constructed boundary between humans and machines, and how control is rendered necessary through the imaginary of "autonomous" systems in the first place. The notion of autonomy in military weapons systems thrives on advances in AI, machine learning, robotics, and other technical fields, as well as on industrial and military narratives. These narratives in turn become echoed and mobilized by NGOs and activists in the fight against dystopian futures (of both warfare and wider society). It is only through the discursive construction of future weapons systems as truly autonomous agents that they become relatable – and at the same time dangerous – to established categories of ethics and law.
The idea of (meaningful human) control should, in the sense of Suchman's concept of configuration, then be conceived of as a productive category that informs engineering and design practices. At the same time, however, it is the prescription of control as a design principle in the configuration of military weapons systems that relates back to the idea of the autonomous machine that would eventually, at some point in the future, level the ontological hierarchy between the human and the system. As Suchman (2007: 213–4) explains this somewhat twisted relationship:

Having systematically established the divisions of humans and machines, technological imaginaries now evidence worry that once separated from us machines are rendered lifeless and, by implication, less. They need to
be revitalized, restored to humanness – in other words, to be made like us – in order that we can be reunited with them.

In order to understand how the construction and re-construction of the boundary between humans and machines through the notion of control plays out in practice, the remainder of this chapter will look into how control becomes specified in engineering and design practices for military weapons systems, and how it structures the relations between human operators and their non-human counterparts. Looking into the configurations of human operators and automated functions within military weapons systems thereby foregrounds the fundamental contingency of the social construction processes of technology and highlights the fact that control is not a fixed or static category, but that different forms of control come into being through a multiplicity of possible configurations of human/non-human relations that each engender specific forms of interaction and agency. This idea of contingency points to the choices involved in engineering and design processes – choices which are in turn enabled or constrained through the imaginary of control and its workings within warfare. As Suchman (2007: 227–8) argues in this sense, "the effects of figuration are political in the sense that the specific discourses, images, and normativities that inform practices of figuration can work either to reinscribe existing social orderings or to challenge them."
Human-computer interaction
If design figures as the junction where cultural imaginaries become operationalized through the configuration of socio-technical systems, the decisive analytical question must be: what does control in military weapons systems look like? Conceptual literature from the field of HCI proves helpful in this regard. Since the early days of computer systems in applied engineering (Fitts, 1951), HCI has been concerned with the division of labor between humans and computer systems. One of the paramount questions of the field concerns the issue of "which system functions should be automated and to what extent?" (Parasuraman et al., 2000: 286). Starting from the idea that machines can perform certain tasks better or more efficiently than humans (e.g., do the "heavy lifting"; carry out tasks more quickly and reliably; carry out monotonous or dangerous tasks; provide access to additional information through different modes of cognition), automation is in HCI design defined as "the full or partial replacement of a function previously carried out by the human operator" (Parasuraman et al., 2000: 287). In commercial aviation, for example, automation has early on been established as a way to provide enhanced reliability, efficiency, comfort, and safety. Flight deck operations therefore traditionally involve a large degree of automation in tasks that originally needed to be carried out by pilots themselves (e.g., Wiener and Curry, 1980; Endsley, 1987; Billings, 1997).
In order to understand how much automation would be desirable for a given task within a particular operational and organizational environment, and how a specific degree of automation would structure the relationship between system and operator, Sheridan and Verplank (1978) have specified ten Levels of Automation (LOAs), ranging from no assistance offered by the system at all (Level 1) to a fully automated system that carries out all tasks by itself and completely ignores the human operator (Level 10). This conceptual differentiation allows engineers and designers to specify how much workload should be delegated to automated processes within the system and what the interface between operator and system would need to look like within a given configuration of workload distribution. Table 3.1 shows the LOAs as specified by Sheridan and Verplank, as well as what each level would imply for the relations between system and operator.
As becomes apparent throughout Table 3.1, "automation can differ in type and complexity, from simply organizing the information sources, to integrating them in some summary fashion, to suggesting decision options that best match the incoming information, or even carry out the necessary action" (Parasuraman et al., 2000: 286). Tinkering with – literally – different configurations of automation and control vis-à-vis legal, organizational, psychological, cognitive, and not least moral aspects is then from an HCI perspective seen as a way toward the establishment of an optimal trade-off between automation and human control within specific operational and organizational contexts. The notion of configuration can here serve as a guideline to understand how different design and engineering choices speak to wider societal, political, economic, organizational, and operational requirements that a particular socio-technical system needs to fulfill. In Suchman's terms, these requirements are expressed in the idea of the cultural imaginaries that inform and underpin the design of technology.
In HCI, maximum possible automation was long perceived as the ideal model, as it was assumed that this would free up resources that humans could then spend otherwise (de Greef, 2016: 139). This assumption must also be understood vis-à-vis the fact that, as Miller and Parasuraman (2007: 58) argue with regard to industry narratives of novelty and innovation that are used as sales pitches for new products, "technologists tend to push to automate tasks as fully as possible." This view has however, against the backdrop of human involvement within socio-technical systems, been challenged by the insight that high LOAs are not necessarily always a good thing, but that they tend to produce a set of problems, notably negative effects on operator awareness and on trust in the system, and a decline in operator skill sets (e.g., Parasuraman et al., 1993; Endsley and Kiris, 1995; Wickens et al., 1998). In other words, as more processes are rendered invisible, more complexity is hidden in black-boxed architectures, and more argumentative authority is granted to the system, technological assistance becomes less understandable and retraceable for human operators.
Table 3.1 Levels of Automation; Sheridan and Verplank (1978)
Level 1: human does everything, no system assistance
Level 2: system offers full set of options
Level 3: system offers selection of options
Level 4: system offers one option
Level 5: human approval necessary for execution of option
Level 6: human has restricted time to veto before automatic execution of option
Level 7: option will automatically be executed, human is informed afterwards
Level 8: option will be automatically executed, human will only be informed upon request
Level 9: option will be automatically executed, human will only be informed if system decides to
Level 10: machine acts autonomously, ignores human
Coupled with short time frames for decision-making, these factors are seen as pertinent in the production of “automation bias,” i.e. the overreliance on automated system functions (Cummings, 2004). This is especially the case in critical environments, i.e. when socio-technical systems are used for safety tasks or military operations. Parasuraman and Wickens (2008: 514) in this sense put forward that “in high-risk settings such as air traffic conflict prediction and battlefield engagement, decision automation should be set at an LOA that allows operator input into the decision-making process.” Such an understanding of critical environments ties in with organizational and legal concerns that prescribe human decision-making in operations that involve human lives and health (Wickens et al., 1998). These concerns are, again, aptly illustrated by the current calls for “meaningful human control” over military weapons systems.
Thus, since the human is “still vital after all these years of automation” (Parasuraman and Wickens, 2008: 511) and likely will be for the foreseeable future, HCI has over the past decades paid increased attention to “human factors” within the design of socio-technical systems and to the ways in which interfaces between operators and systems could be optimized. Suggestions for human-centered design involve concepts such as “flexible automation” (Miller and Parasuraman, 2007), “team play” (Klein et al., 2004), or “machine etiquette” (Parasuraman and Miller, 2004). This turn to human factors carries two important implications. First of all, in order to realize modes of workload distribution that speak to the need for human control, medium LOAs have been foregrounded as the primary area of study. As Miller and Parasuraman (2007: 58) argue, the design of a socio-technical system then
requires that neither human nor automation be exclusively in charge of most tasks but, rather, uses intermediate levels of automation (LOAs) and flexibility in the role of automation during system operations and places control of that flexibility firmly in the human operator’s hands.
This is particularly pertinent with regard to the design of military weapons systems, where both the advantages of automation and the risks caused by automation (i.e. automation dysfunction or malfunction, automation bias, operator complacency) must be carefully balanced. The second implication is the acknowledgment of human cognition as a key factor in HCI design. In order to account for this, Parasuraman et al. (2000: 288) have suggested specifying design practices according to a model of human information processing that contains four steps: (1) information acquisition; (2) information analysis; (3) decision and action selection; and (4) action implementation. Each of these steps, according to them, “has its equivalent in system functions that can be automated” (Parasuraman et al., 2000: 288) and thus speaks to how relations between operator and system should be defined not at a single point in time, but as dynamic interactions over time. The differentiation between separate steps of information processing thus adds a crucial extension to the LOAs proposed by Sheridan and Verplank, as it highlights that in order to fulfill a given task, several consecutive subtasks might need to be carried out – and that each of these subtasks may involve a different level of automation that brings into being specific human-machine relationships for the duration of the subtask. Combining a model of LOAs with a model of information processing then results in a matrix that visualizes the many ways in which HCI design can tinker with different configurations of human-machine interactions over time. In the following, we will use such a matrix to re-engage our initial UAV scenario.
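To make this matrix logic tangible before returning to the scenario, consider a minimal sketch in Python (all variable names are illustrative and not drawn from the cited HCI literature): a configuration assigns each of Parasuraman et al.’s information-processing stages its own LOA, so that human-machine relations may differ from subtask to subtask.

    STAGES = ["information acquisition", "information analysis",
              "decision and action selection", "action implementation"]
    LOA_MIN, LOA_MAX = 1, 10  # Sheridan and Verplank's ten-level scale

    def is_valid_configuration(config):
        # A configuration assigns every information-processing stage
        # exactly one LOA on the ten-level scale.
        return (set(config) == set(STAGES) and
                all(LOA_MIN <= level <= LOA_MAX for level in config.values()))

    # Illustrative example: highly automated acquisition and analysis, but
    # human-centered decision selection and action implementation.
    example = {"information acquisition": 7, "information analysis": 7,
               "decision and action selection": 4, "action implementation": 2}
    assert is_valid_configuration(example)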
Configuring warfare
An approach to LOAs as flexible and differentiated throughout distinct system functions is pertinent for military applications, as it can inform designers about how automation could best be integrated into weapons systems such that it efficiently supports soldiers in battle and beyond, while at the same time leaving a sufficient degree of freedom for human decision-making (e.g., Endsley, 1987; Svensson et al., 1997; Endsley and Kaber, 1999; Arciszewski et al., 2009; Wang et al., 2009; Neyedli et al., 2011). Studies on the design of military weapons systems often mobilize LOAs/information processing matrices, as they can be used as analytical tools that speak closely to military conceptualizations of socio-technical weapons systems vis-à-vis tasks in combat and can yield insights about supposedly optimal LOAs for given tasks or subtasks (e.g., Clough, 2002; Hawley et al., 2005; Arciszewski et al., 2009; Williams, 2015). Information processing is in military jargon usually referred to as the “loop” of military action, building on the ideal-typical sequence of “observe – orient – decide – act” (OODA) that is supposed to inform behavior in combat (Coram, 2002).3
Arciszewski et al. (2009), in their work on automation in naval combat, have suggested breaking down the loop into five core stages that can then be subjected to an analysis of optimal levels of automation within a given operational and organizational environment: (1) correlation (detecting an object); (2) classification (determining the type of the object); (3) identification (determining the identity and allegiance of the object); (4) threat assessment (determining whether the object poses a threat); and (5) engagement (deciding to apply force and executing this decision).
As argued above, the design of military weapons systems must adhere to a number of specific requirements. The use of lethal force is arguably the most “critical” function imaginable that can be automated within socio-technical systems, thus requiring the “meaningful human control” that is prevalent in current regulatory debates. Indeed, as “an automatic target recognition (ATR) device to aid surveillance will undoubtedly make some misclassifications” (Wickens et al., 2006: 210) and there are “troubling […] repercussions of such targeting errors that might occur when the payload consists of weapons” (Cooke, 2006: xix), military weapons systems are particularly prone to severe consequences from automation malfunctions, and high LOAs are therefore generally regarded as undesirable. This is the reason why UAVs, even if they might in theory be capable of carrying out their tasks without any human intervention, are operated by a whole team of humans that, in large missions potentially involving the use of lethal force, includes multiple technical operators, military officers, and legal advisers (Cooke et al., 2006; Elish, 2017).
The decisive question, then, is how control in critical environments can be structured. Automation and control are closely entwined concepts, with (meaningful) human control fading out of the picture as LOAs rise. In order to visualize what this means, let us return to our initial scenario – the UAV flying over an operational zone. Based on the discussion of both HCI literature and military perspectives on automation and control, we must now slightly modify the scenario. The UAV does not, as originally proposed, act “autonomously,” but is connected to a remotely located (team of) operator(s) via an interface that presents them with information on the whereabouts and status of the UAV, as well as with information about what data the UAV collects on its environment. What exactly the operator is presented with, which choices and decisions the operator can make, and whether the operator can challenge or contest any suggestions made by the system crucially hinges on the specific configurations that the design of the setup engenders. Table 3.2 shows the analytical matrix of operator-system relations that results from the scenario when combining five selected LOAs with the loop task categories suggested by Arciszewski et al. (2009). Instead of numbering the LOAs, they have in Table 3.2 been given names that correspond with the role that the system occupies vis-à-vis the operator.
“Manual” means that no automation is available to the human operator, and all tasks must thus be executed manually without any assistance. On the other end of the spectrum, “System” means that all responsibilities have been delegated to the system and fully automated, such that the human operator has no possibility for interventions of any kind. The three middle categories (“Advice”; “Consent”; “Veto”) differentiate the division of labor between humans and machines and the respective interfaces between them. As we follow our UAV – which is now conceptualized more aptly as a socio-technical system instead of an autonomously acting machine – through the different stages of detecting an object, classifying the object as a tank, identifying the tank as hostile, assessing the tank as dangerous, and deciding to engage and destroy it, Table 3.2 needs to be read from left to right.
As different tasks can involve different LOAs, we can draw different “paths” throughout the consecutive stages from “Correlation” to “Engagement.” A straight path on the “Manual” level would indicate that no automation is available in any of the tasks, and that the full workload would therefore need to be carried out by the operator. A straight path on the “System” level, on the other hand, would imply that all tasks have been fully delegated to the system, which acts in an automated fashion. The human operator would thereby possibly still be able to monitor the actions of the UAV, but would not have an opportunity to interact with the system or veto any of its actions. The latter path would, at least from a technical point of view, signify that the UAV would act “autonomously.” However, neither path would be a realistic representation of how the relations between operator and system are configured in a given military operation (Hawley et al., 2005). Starting from the assumption that control implies a division of workload between human and machine, while at the same time upholding ethical and legal categories of accountability and responsibility, some combination of “Advice,” “Consent,” or “Veto” relations is more likely to be encountered throughout the loop – possibly combined, however, with “System”-level automation of some tasks.
It might for example be the case that the chain of “Correlation”–“Classification”–“Identification”–“Threat assessment” is carried out on the “System” level, whereas the “Engagement” stage is kept at the “Manual” or “Advice” level in order to ensure that a human decision precedes the use of lethal force. A very different setup would be a configuration in which “Correlation” takes place on the “System” level, whereas “Classification,” “Identification,” and “Threat assessment” require “Consent” between operator and system. Given the assumption that the human operator was closely involved in the evaluation of the situation, “Engagement” could then be executed in a fully automated fashion on the “System” level, thus ensuring maximum efficiency and precision, while at the same time reducing collateral damage. In the first configuration example, the human operator is in full control of the actual execution of lethal force, whereas in the second configuration example, the system pulls the trigger without human involvement.
Table 3.2 Selected Levels of Automation vis-à-vis loop tasks; scenario: UAV flying over operational zone; task categories based on Arciszewski et al. (2009)
Loop tasks: Correlation (sensor information becomes integrated over time to detect objects); Classification (determining the type of an object); Identification (determining identity or allegiance in terms of hostile/neutral/friendly); Threat assessment (assessing the dangerousness of an object); Engagement (decision to apply various levels of force and execute the decision).
Manual (no automation available):
• Correlation: human detects object
• Classification: human classifies object as tank
• Identification: human identifies tank as hostile
• Threat assessment: human assesses tank as dangerous
• Engagement: human decides to engage target and fire missile
Advice (human keeps all responsibility; system advice available upon request):
• Correlation: human detects object; can check whether the system comes to the same conclusion
• Classification: human classifies object as tank; can check whether the system comes to the same conclusion
• Identification: human identifies tank as hostile; can check whether the system comes to the same conclusion
• Threat assessment: human assesses tank as dangerous; can check whether the system comes to the same conclusion
• Engagement: human decides to engage target; can check whether the system comes to the same conclusion and consult for advice on optimal parameters
Consent (human keeps all responsibility; system actively offers advice):
• Correlation: human detects object; system actively signals that it has detected an object
• Classification: human classifies object as tank; system actively signals that it comes to the same conclusion
• Identification: human identifies tank as hostile; system actively signals that it comes to the same conclusion
• Threat assessment: human assesses tank as dangerous; system actively signals that it comes to the same conclusion
• Engagement: human decides to engage target; system actively advises to engage and proposes optimal parameters
Veto (human delegates responsibility to system; retains veto right):
• Correlation: system detects object; human can reject detection (“there is nothing”)
• Classification: system classifies object as tank; human can reject classification (“it’s a civil vehicle”)
• Identification: system identifies tank as hostile; human can reject identification (“it’s an ally vehicle”)
• Threat assessment: system assesses tank as dangerous; human can reject assessment (“it poses no threat”)
• Engagement: system decides to engage target and suggests optimal parameters; human can reject/correct engagement decision and parameters
System (human delegates all responsibility to system; no interaction between human and system):
• Correlation: system detects object
• Classification: system classifies object as tank
• Identification: system identifies tank as hostile
• Threat assessment: system assesses tank as dangerous
• Engagement: system decides to engage target and fire missile
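To illustrate, the two configuration paths just described can be written out as simple mappings; a minimal sketch in Python (the labels follow Table 3.2, while the encoding itself is an illustrative assumption rather than part of the underlying studies):

    LOOP_TASKS = ["Correlation", "Classification", "Identification",
                  "Threat assessment", "Engagement"]

    # First example path: evaluation fully delegated to the system, but
    # engagement kept at the manual level.
    config_one = dict(zip(LOOP_TASKS,
                          ["System", "System", "System", "System", "Manual"]))

    # Second example path: automated correlation and engagement, with
    # operator consent throughout the evaluation stages.
    config_two = dict(zip(LOOP_TASKS,
                          ["System", "Consent", "Consent", "Consent", "System"]))

    def operator_interacts(config, task):
        # Only the "System" level forecloses any interaction between
        # operator and system for a given task.
        return config[task] != "System"

    print(operator_interacts(config_one, "Engagement"))         # True
    print(operator_interacts(config_two, "Threat assessment"))  # True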
Does that mean the former configuration complies with IHL requirements while the latter does not? One could very well argue differently. Since, in the first configuration example, the operator is not involved in most of the information processing activities that lead to the decision to use lethal force, their decision-making might be subject to automation bias. In other words: such a configuration, while formally establishing human control over the use of (lethal) force, would arguably not qualify as “meaningful.” In the second configuration example, although the act of executing lethal force is in fact completely delegated to a machine, there was arguably a greater amount of meaningful human control involved in the process. Throughout both configurations, specific kinds of agency were co-produced through the interaction between operator and system at different levels, as they shared or split the workload throughout the loop. Neither the weapons system nor the operator acted alone; rather, their actions were mutually enabled and emerged at the interface between human and machine.
These two examples illustrate the complexity and multiplicity of human-machine configurations in a given setting – and they present only two of the many possible paths through the matrix. Mathematically speaking, there are 3,125 distinct configurations of how system and operator can be drawn together throughout a five-stage loop model with five possible LOAs. Some of them make more sense from a design perspective informed by military requirements, and some certainly make less sense. In each one, however, human-machine relations are structured differently, depending on the involved degrees of automation and the corresponding degrees of control that are ascribed to the human vis-à-vis each automated function. Moreover, the overall scenario discussed here is of course a highly simplistic one that hardly does justice to the complexities of real UAV operations, dynamic battlefield environments with multiple participants and objects (friendly/neutral/hostile), or technical challenges and pitfalls in data collection and live algorithmic analysis. If one were, for example, to opt for a more fine-grained differentiation of LOAs, for a further sub-division of loop tasks, or for multiple loops and multiple objects, the already huge number of possible configurations would increase exponentially (e.g., for 10 LOAs and 10 loop tasks, there would be 10 billion possible configurations).
The very simplicity of the scenario does, however, forcefully illustrate the complexity and multiplicity of human-machine relations within a given socio-technical system. Design options for military weapons systems are, as we have seen, to a certain degree pre-structured by the imaginary of meaningful human control – but what such control would need to look like in practice is a very different question. The point here is obviously not to echo HCI debates about how operational and organizational requirements could best be accommodated within socio-technical systems such that an optimal level of automation could be realized. The point is rather to illustrate the possibilities to tinker with variegated configurations of operator-system relations and to demonstrate how they unfold different implications for the ways in which humans and machines co-produce the use of lethal force.
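The combinatorics invoked above can be checked directly; a short sketch in Python, assuming independent choices of LOA at each loop task:

    # Five possible LOAs at each of five consecutive loop tasks yield 5**5
    # distinct paths; finer-grained models grow exponentially.
    print(5 ** 5)    # 3125 configurations in the simple scenario
    print(10 ** 10)  # 10,000,000,000 for 10 LOAs across 10 loop tasks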
A UAV, from a military perspective, is in fact best described as “a system of systems” (Hottman and Sortland, 2006: 86), and some of these (sub-)systems might be subjected to more rigorous forms of human control, whereas others might be highly automated. A more differentiated perspective on human-machine relations in this sense refuses to spell out the notion of control as a totalizing category (i.e. either everything is automated, or the human is in full control). As HCI scholars themselves acknowledge, “automation does not supplant human activity; rather, it changes the nature of the work that humans do, often in ways unintended and unanticipated by the designers of automation” (Parasuraman and Riley, 1997: 231).
Conclusions
This chapter has argued that in order to understand what might be at stake in future warfare vis-à-vis ever more advanced sensing and algorithms, combined with engineering and robotics, we must pay attention to the specific ways in which humans and machines share or split tasks, and to how their relationship revolves around notions of automation and control. Building on Suchman’s concept of configuration, it has highlighted the role of cultural imaginaries that inform the construction of socio-technical systems, notably the idea of “meaningful human control” over automated system functions. In doing so, it has drawn specific attention to the presupposed boundary between humans and computers that is engendered within socio-technical systems through the notion of control, and to how anthropocentrism is reproduced through engineering and design practices, while these practices at the same time aspire to level the presupposed ontological asymmetry through the notion of the machine-as-human. Conceiving of this apparent contradiction not as a question of who is right or who is wrong, but as a productive analytical ground, the chapter has analyzed how relations between human operators and system functions become structured through LOAs. Refusing to reproduce the totalizing narratives of control vs. machine autonomy currently mobilized in debates about LAWS, the chapter has therefore engaged conceptual HCI work and its application in the design of military weapons systems in order to demonstrate the complex configurations of operator-system relations and the specific kinds of control that they engender.
From the analysis, I would like to draw out a number of implications, both for the subject matter (the future of warfare) and for the study of technology and agency through the concept of configuration. First of all, studying the configurations of technology through engineering and design practices brings to the fore that the idea of automation is always defined vis-à-vis its counterpart in the form of human control. This results in the reproduction of the ontological boundary between human and non-human parts in socio-technical systems. As Suchman (2012: 57) writes, “in the case of technology, configuration orients us to the entanglement of
imaginaries and artefacts that comprise technological projects.” The paramount imaginary when it comes to the future of warfare is the notion of human control over military weapons systems that places the human at the center of the system. This corresponds with the modernist-liberal idea that agency should be conceived of in clear and non-entangled ways that adhere to established ethical-legal categories. As Suchman and Weber (2016: 98) have put forward, “the project of machine intelligence is built upon, and reiterates, traditional notions of agency as an inherent attribute and autonomy as a property of individual actors.” It thus firmly anchors agency in the human world and pre-empts the question of how non-exclusively-human forms of agency could be accommodated within the ethical-legal categories that structure international politics. This paradox should be regarded as an analytically productive site for the study of technology and agency in IR. AI, machine learning, sensing technologies, and robotics will not go away, but will be developed further. Technologies emerging at their intersection need to be understood as socio-technical systems that are entangled in social, political, legal, moral, and economic contexts – and this includes the cultural imaginaries that inform engineering and design.
This acknowledgment is, secondly, an important insight for the study of the politics and possible modes of regulation around such socio-technical systems. Automation – the prevalent mode of workload distribution between humans and machines – is not a question of either full human autonomy or full machine autonomy. In this sense, the more fine-grained (yet still quite schematic) analysis provided here can serve as an intervention into the totalizing discourses about “killer robots” currently prevalent in the debates about future warfare. As Elish (2017: 1104) notes with regard to practices of warfare, “new technologies do not so much do away with the human, but rather obscure the ways in which human labor and social relations are reconfigured.” At the same time, however, an emphasis on variegated LOAs in existing military weapons systems foregrounds the fact that many of the dangers believed to lurk in the idea of future “killer robots” are actually already prevalent today. Human control – whether meaningful or not – in socio-technical systems presupposes interaction between operator and system, and thereby always already produces complex, distributed, and dynamic forms of agency. For regulatory debates about military weapons systems and presupposed autonomy, this means that an acknowledgment of such fundamental complexity is needed in order to inform debates about which specific configurations would offer a form of control that could be considered meaningful – and which would not. As Sharkey (2016: 37) has put forward in this sense, “we need to map out exactly the role that the human commander/supervisor plays for each supervised weapons system” – even if this is a time-consuming and difficult quest.
Third, and building on the notion that we need to grapple with complexity instead of totalizing discourses about technology and agency, Suchman’s focus on boundary work provides a fruitful analytical intervention. If we start from the premise that “boundaries are necessary for the creation of meaning and,
for that reason, are never innocent” (Suchman, 2007: 285), the control boundary that we find in military weapons systems is neither a hard one nor a singular one. Multiple different configurations of control and automation signify that within such complex systems, the boundary between the human and the machine multiplies as well. Since, even in simplified scenarios and systems, control boundaries turn out to be dynamic and to shift over time, a static and fixed boundary between the human and the non-human world is highly unlikely. The relationship between the human and the non-human, although still subject to a clear-cut hierarchy in the cultural imaginary, becomes through the empirical multiplicity of control configurations a more permeable one that might in fact be able to accommodate complex and nuanced relations between humans and machines and the types of agency that they co-produce. In this sense, conceiving of automation and control as a set of multiple and dynamic relations provides a crack in the seemingly cemented opposition between symmetrical readings of the world on the one hand, and discourses and practices of design and technology on the other.
Acknowledgments
This chapter has benefited greatly from discussions at the 2018 EWIS Workshop “New Technologies of Warfare: Implications of Autonomous Weapons Systems for IR” in Groningen. I am grateful to the workshop participants, as well as to Marijn Hoijtink and Myriam Dunn Cavelty, for their critical and constructive engagement with my work.
Notes
1 Depending on whether one presupposes that the system carries a payload or not, commentators refer to “AWS” (Autonomous Weapons Systems) or “LAWS.” As the key concern in current debates is indeed the assumption of possible lethality of autonomous machine actions, I will here use the latter term.
2 Whether this could at any point become the case is subject to fierce debates in philosophy, computer science, and other disciplines. For a comprehensive overview, see Welsh (2018).
3 The loop model also serves as a regular point of reference in current debates about the regulation of possibly autonomous weapons systems of the future, resulting in different (simplified) models/policy-choices where the human operator could be placed vis-à-vis the OODA sequence: (1) “in the loop” (i.e. the human operator has full control over system functions); (2) “on the loop” (i.e. the human operator supervises system functions); or (3) “out of the loop” (i.e. the human operator has no control over system functions).
References
Acuto M and Curtis S (2014) Assemblage Thinking in International Relations. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 1–16.
Aradau C (2010) Security that Matters: Critical Infrastructure and Objects of Protection. Security Dialogue 41(5): 491–514.
Arciszewski H F R, de Greef T E and van Delft J H (2009) Adaptive Automation in a Naval Combat Management System. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 39(6): 1188–1199.
Article 36 (2016) Article 36 Reviews and Addressing Lethal Autonomous Weapons Systems: Briefing Paper for Delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). Geneva, 11–15 April 2016. Available at www.article36.org/wp-content/uploads/2016/04/LAWS-and-A36.pdf (accessed 31 October 2018).
Asaro P (2013) On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making. International Review of the Red Cross 94(886): 687–709.
Barad K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham/London: Duke University Press.
Barry A (2013) The Translation Zone: Between Actor-Network Theory and International Relations. Millennium – Journal of International Studies 41(3): 413–429.
Bellanova R and Duez D (2012) A Different View on the ‘Making’ of European Security: The EU Passenger Name Record System as a Socio-Technical Assemblage. European Foreign Affairs Review 17(2/1): 109–124.
Best J and Walters W (2013) “Actor-Network Theory” and International Relationality: Lost (and Found) in Translation. International Political Sociology 7(3): 332–334.
Billings C E (1997) Aviation Automation: The Search for a Human-Centered Approach. Mahwah: Lawrence Erlbaum.
Bueger C (2013) Actor-Network Theory, Methodology, and International Organization. International Political Sociology 7(3): 338–342.
Clough B T (2002) Metrics, Schmetrics! How the Heck do you Determine an UAV’s Autonomy Anyway? Proceedings of the 2002 Performance Metrics for Intelligent Systems Workshop (PerMIS-02). Gaithersburg, MD, 13–15 August 2002. Available at www.dtic.mil/dtic/tr/fulltext/u2/a515926.pdf (accessed 31 October 2018).
Collier S J and Lakoff A (2007) The Vulnerability of Vital Systems: How ‘Critical Infrastructure’ Became a Security Problem. In Dunn Cavelty M & Kristensen K S (eds.) Securing ‘The Homeland’: Critical Infrastructure, Risk and (In)Security. London: Routledge, 17–39.
Connolly W E (2013) The ‘New Materialism’ and the Fragility of Things. Millennium – Journal of International Studies 41(3): 399–412.
Cooke N J (2006) Why Human Factors of “Unmanned” Systems? In Cooke N J, Pringle H L, Pedersen H K & Connor O (eds.) Human Factors of Remotely Operated Vehicles. Amsterdam: Elsevier, xvii–xxii.
Cooke N J, Pringle H L, Pedersen H K and Connor O (eds.) (2006) Human Factors of Remotely Operated Vehicles. Amsterdam: Elsevier.
Coram R (2002) Boyd: The Fighter Pilot Who Changed the Art of War. New York: Little, Brown & Company.
Cummings M (2004) Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 1st Intelligent Systems Technical Conference. Chicago, IL.
de Greef T (2016) Delegation and Responsibility: A Human-Machine Perspective. In di Nucci E & Santoni de Sio F (eds.) Drones and Responsibility: Legal, Philosophical, and Sociotechnical Perspectives on Remotely Controlled Weapons. Milton Park/New York: Routledge, 134–147.
Department of Defense (2012) Directive Number 3000.09: Autonomy in Weapon Systems. 21 November.
Dunn Cavelty M, Fischer S-C and Balzacq T (2017) ‘Killer Robots’ and Preventive Arms Control. In Dunn Cavelty M & Balzacq T (eds.) Routledge Handbook of Security Studies, Second Edition. London/New York: Routledge, 457–468.
Elish M C (2017) Remote Split: A History of US Drone Operations and the Distributed Labor of War. Science, Technology, & Human Values 42(6): 1100–1131.
Endsley M R (1987) The Application of Human Factors to the Development of Expert Systems for Advanced Cockpits. Proceedings of the Human Factors Society Annual Meeting 31(12): 1388–1392.
Endsley M R and Kaber D B (1999) Level of Automation Effects on Performance, Situation Awareness and Workload in a Dynamic Control Task. Ergonomics 42(3): 462–492.
Endsley M R and Kiris E O (1995) The Out-of-the-Loop Performance Problem and Level of Control in Automation. Human Factors 37(2): 381–394.
Fitts P M (ed.) (1951) Human Engineering for an Effective Air Navigation and Traffic Control System. Washington: National Research Council: Division of Anthropology and Psychology Committee on Aviation Psychology.
Hawley J K, Mares A L and Giammanco C A (2005) The Human Side of Automation: Lessons for Air Defense Command and Control. Aberdeen Proving Ground: Army Research Laboratory: Human Research and Engineering Directorate.
Heyns C (2016) Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death. In Bhuta N, Beck S, Geiß R, Liu H-Y & Kreß C (eds.) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press, 3–20.
Horowitz M C and Scharre P (2015) Meaningful Human Control in Weapon Systems: A Primer. Washington: Center for a New American Security.
Hottman S B and Sortland K (2006) UAV Operators, Other Airspace Users, and Regulators: Critical Components of an Uninhabited System. In Cooke N J, Pringle H L, Pedersen H K & Connor O (eds.) Human Factors of Remotely Operated Vehicles. Amsterdam: Elsevier, 71–88.
Human Rights Watch (2012) Losing Humanity: The Case Against Killer Robots. Available at www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf (accessed 31 October 2018).
International Committee of the Red Cross (2016) Expert Meeting: Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. Versoix, Switzerland, 15–16 March 2016. Available at https://shop.icrc.org/autonomous-weapon-systems.html?___store=default (accessed 31 October 2018).
Jackson P T and Nexon D H (1999) Relations before States: Substance, Process and the Study of World Politics. European Journal of International Relations 5(3): 291–332.
Jasanoff S (2004) The Idiom of Co-Production. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 1–12.
Klein G, Woods D D, Bradshaw J M, Hoffman R R and Feltovich P J (2004) Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity. IEEE Intelligent Systems 19(6): 91–95.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Leander A (2013) Technological Agency in the Co-Constitution of Legal Expertise and the US Drone Program. Leiden Journal of International Law 26(4): 811–831.
Mayer M, Carpes M and Knoblich R (eds.) (2014) The Global Politics of Science and Technology – Vol. 1: Concepts from International Relations and Other Disciplines. Dordrecht: Springer.
McCarthy D R (ed.) (2018) Technology and World Politics: An Introduction. Milton Park/New York: Routledge.
Miller C A and Parasuraman R (2007) Designing for Flexible Interaction between Humans and Automation: Delegation Interfaces for Supervisory Control. Human Factors 49(1): 57–75.
Neyedli H F, Hollands J G and Jamieson G A (2011) Beyond Identity: Incorporating System Reliability Information into an Automated Combat Identification System. Human Factors 53(4): 338–355.
Parasuraman R and Miller C A (2004) Trust and Etiquette in High-Criticality Automated Systems. Communications of the ACM 47(4): 51–55.
Parasuraman R, Molloy R and Singh I L (1993) Performance Consequences of Automation-Induced ‘Complacency’. The International Journal of Aviation Psychology 3(1): 1–23.
Parasuraman R and Riley V (1997) Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors 39(2): 230–253.
Parasuraman R, Sheridan T B and Wickens C D (2000) A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 30(3): 286–297.
Parasuraman R and Wickens C D (2008) Humans: Still Vital after All These Years of Automation. Human Factors 50(3): 511–520.
Pickering A (1995) The Mangle of Practice: Time, Agency, and Science. Chicago: University of Chicago Press.
Rosert E (2017) How to Regulate Autonomous Weapons: Steps to Codify Meaningful Control as a Principle of International Humanitarian Law. PRIF Spotlight 6/2017. Frankfurt a.M.: Peace Research Institute Frankfurt/Leibniz Institut Hessische Stiftung Friedens- und Konfliktforschung.
Salter M B (2015) Introduction: Circuits and Motion. In Salter M B (ed.) Making Things International 1: Circuits and Motion. Minneapolis: University of Minnesota Press, vii–xxii.
Scharre P and Horowitz M C (2015) An Introduction to Autonomy in Weapon Systems. Washington: Center for a New American Security.
Schouten P (2014) Security as Controversy: Reassembling Security at Amsterdam Airport. Security Dialogue 45(1): 23–42.
Sharkey N (2016) Staying in the Loop: Human Supervisory Control of Weapons. In Bhuta N, Beck S, Geiß R, Liu H-Y & Kreß C (eds.) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press, 23–38.
Sharkey N and Suchman L (2013) Wishful Mnemonics and Autonomous Killing Machines. AISB Quarterly 136: 14–22.
Sheridan T B and Verplank W L (1978) Human and Computer Control of Undersea Teleoperators. Cambridge: Man-Machine Systems Laboratory: Department of Mechanical Engineering.
Suchman L (2007) Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition. Cambridge: Cambridge University Press.
Suchman L (2012) Configuration. In Lury C & Wakeford N (eds.) Inventive Methods: The Happening of the Social. London/New York: Routledge, 48–60.
Suchman L and Weber J (2016) Human-Machine Autonomies. In Bhuta N, Beck S, Geiß R, Liu H-Y & Kreß C (eds.) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press, 75–102.
Svensson E, Angelborg-Thanderez M, Sjöberg L and Olsson S (1997) Information Complexity-Mental Workload and Performance in Combat Aircraft. Ergonomics 40(3): 362–380.
Valkenburg G and van der Ploeg I (2015) Materialities between Security and Privacy: A Constructivist Account of Airport Security Scanners. Security Dialogue 46(4): 326–344.
Walters W (2014) Drone Strikes, Dingpolitik and Beyond: Furthering the Debate on Materiality and Security. Security Dialogue 45(2): 101–118.
Wang L, Jamieson G A and Hollands J G (2009) Trust and Reliance on an Automated Combat Identification System. Human Factors 51(3): 281–291.
Welsh S (2018) Ethics and Security Automata: Policy and Technical Challenges of the Robotic Use of Force. Milton Park/New York: Routledge.
Wickens C D, Dixon S R and Ambinder M S (2006) Workload and Automation Reliability in Unmanned Air Vehicles. In Cooke N J, Pringle H L, Pedersen H K & Connor O (eds.) Human Factors of Remotely Operated Vehicles. Amsterdam: Elsevier, 209–222.
Wickens C D, Mavor A S, Parasuraman R and McGee J P (eds.) (1998) The Future of Air Traffic Control: Human Operators and Automation. Washington: National Academies Press.
Wiener E L and Curry R E (1980) Flight-deck Automation: Promises and Problems. Ergonomics 23(10): 995–1011.
Williams A P (2015) Defining Autonomy in Systems: Challenge and Solutions. In Williams A P (ed.) Autonomous Systems: Issues for Defence Policymakers. Norfolk: NATO Capability Engineering and Innovation Division, 27–62.
4
Security and technology
Unraveling the politics in satellite imagery of North Korea
Philipp Olbrich
Introduction
International Relations (IR) and critical security studies have never ignored technology. Nuclear weapons, information and communication technologies (ICT), or drones have long been high on the agenda of foreign policy observers and scholars alike when it comes to military power, geopolitical shifts, or the conduct of warfare (Asal and Beardsley, 2007; Byman, 2013; Deudney, 1993; Rosenau, 1990; Waltz, 1979). More recently, however, these contributions have been criticized for their restrictive concepts of technology as either an autonomous driving force of (in)security or a neutral instrument for security actors. Both understandings render the wider implications of technology largely exogenous to politics and, hence, to political analysis (Herrera, 2006; McCarthy, 2013). The public and academic discourse about the commercialization of satellite imagery, which allows non-state actors to monitor human security situations on a global scale, mirrors such understandings of technology: non-governmental access to satellite observation is said to promote “a new generation of ‘imagery activists’” (Baker, 2001: 533), and it is claimed that “[t]his avalanche of images will create an unprecedented database of the entire planet, one that can be used to stop forest fires and maybe even wars” (Burningham, 2016). In short, in this view, the commercialization of satellite imagery enables non-state actors to use satellite technology as a neutral instrument for their purposes to produce knowledge and effect change.
In an attempt to challenge this apolitical understanding of satellite technology, the chapter turns to the case of the Democratic People’s Republic of Korea (hereafter DPRK or North Korea). Given the consistently negative reports about the security and human rights situation in North Korea, the regular satellite surveillance of the country by non-state actors has remained largely unquestioned and beyond dispute. This reinforces the acceptance of satellite technology as neutral and depoliticizes the practice of satellite observation. Drawing on recent innovations in critical security studies, the chapter puts satellite technology at the center of analysis to unravel the politics in satellite imagery of the DPRK. More specifically, it shows how satellite technology participates in the construction of North Korea as a security threat and how security assemblages of practices, things, and actors are constructed around that specific technology.
Recent scholarship has particularly looked towards New Materialism and Science and Technology Studies (STS) to refine our understanding of technology in the study of security. First, the chapter takes stock of the theoretically rather diverse writings on technology in order to identify three unifying, common features. They include attributing to objects a sense of material agency, a focus on relations within security networks or assemblages, and an understanding of such assemblages as contingent and unstable, which implies an empiricist approach that traces performative relations on the micro level. Second, and building on these, the chapter continues to outline a few analytical starting points: a problematization of security renders the use of technology desirable and legitimate, and enrolls human and non-human actors into a security governance assemblage; since relations within assemblages are inherently unstable, they require constant stabilization through material and discursive practices. Once an assemblage is successfully stabilized, it locks in particular security practices. Third, the chapter looks into the non-governmental satellite observation of the DPRK. With roots in Cold War power politics, the increasing availability of commercial satellite imagery has brought satellite surveillance within reach of non-state actors. The limited access to the DPRK elevates satellite imagery to one of the few entry points into the country. Taken together, two broader arguments are put forward. First, a socio-material understanding of technology reveals how satellite imagery draws together various practices, things and actors to assemble a seemingly legitimate and desirable security assemblage to monitor, document and manage (in)security in the DPRK. Second, while adding to the knowledge base on the North Korean nuclear and missile program, current practices of satellite surveillance reify an adversarial relation reminiscent of the Cold War that continuously discredits North Korea as a dialogue partner. The conclusion acknowledges the limits of socio-material approaches to security and calls for an increased theoretical exchange within Security Studies under the common concern about the growing role of technology in global security.
Making sense of the role of technology in global security
Three common features of socio-material approaches to security
William Walters (2014) observes that New Materialist and STS-informed approaches to security have largely been focusing on questions of governance. In doing so, they draw on a diverse pool of disciplines,1 which creates an equally diverse and varied research program. This ultimately complicates efforts to introduce socio-material understandings of technology into mainstream debates about international security. In order to facilitate such discussions across research programs and theoretical leanings, three common features are carved out below that are broadly shared among New Materialist and STS-inclined security research, which will henceforth be called socio-material.
First, socio-material approaches to security technology have given rise to a prolific research program whose proponents pay renewed attention to the role of materiality in security (e.g., Acuto and Curtis, 2014; Amicelle et al., 2015; Bellanova and Duez, 2012; Bourne, 2016; Bousquet, 2014; Hoijtink, 2017; Jeandesboz, 2016; Leese, 2015; Mayer, 2012; Schouten, 2013). Arguably, their most fundamental proposition concerns the question of material agency. Initial reactions have made the notion of material agency figure prominently in debates on the raison d’être of new materialism. In this context, Christopher Breu (2016) wonders about such strong reactions elicited by material agency and speculates that “[t]he fact that we can’t see this as a relatively straightforward claim suggests the continuing power of correlationism and, even more, social construction as doxa” (Breu, 2016: 13). The notion stirs up especially heated controversies when it is misunderstood as ascribing rationality and intentions to material artifacts, the common-sense hallmarks of human agency. To be sure, there is some diversity in the degrees of material agency ascribed across the literature. The basic claim, however, should not lead to much controversy. The essential idea of material agency is probably best understood by way of the so-called symmetry principle. It refers to the basic analytical equality of the human and the non-human (Latour, 2005). This does not mean that human beings, things, institutions, and concepts matter in the same way, produce equal externalities, or are even interchangeable in a given setup. Rather, following the symmetry principle “simply means not to impose a priori some spurious asymmetry among human intentional action and a material world of causal relations” (Latour, 2005: 76; emph. in orig.). In doing so, material agency takes issue with strict distinctions between humans and things, science and politics, knowledge and policy that suggest a hierarchy in that humans command things without conceptualizing the reverse. Accepting relations between technologies and human actors as reciprocal, the symmetry principle enables what is called material agency. Importantly, however, it needs to be noted that agency here is not understood in the traditional, or moral, sense of (human) agents pursuing a specifiable end with intentional actions (Bennett, 2010; Cudworth and Hobden, 2013). Rather, agency is to be sought in a distributed assemblage of things and humans that is characterized by its capacity of making a difference in relations rather than by its intentionality (Latour, 2005). Put differently, material agency denotes the force of objects to have an impact on actions in terms of dispositions, potentialities, resistance, or constraints.
Second, many new materialist studies are concerned with tracing relations in complex networks. This is grounded in a pertinent skepticism towards taken-for-granted entities such as the state, international organizations, or technology. However, it does not stop at deconstructing them into isolated smaller pieces but instead examines how they are reassembled into functioning wholes (Breu, 2016: 18). Consequently, tracing the socio-material relations that make up these entities becomes the responsibility of the researcher, as such established notions cannot do justice to the specific political, historical, and material contexts which they are believed to represent. Dissolving the conceptual unity of human and non-human actors, respectively, socio-material accounts merge them as parts of the same assemblage or actor-network of distributed agency (e.g., Acuto and Curtis, 2014; Bellanova and Duez, 2012; Schouten, 2013). Different scholars have drawn on understandings as found in Latourian Actor-Network Theory (ANT) (Callon, 1986; Law, 1991; Leander, 2013) and in Deleuzian concepts of the assemblage (Acuto and Curtis, 2014; DeLanda, 2009; Srnicek, 2014). Practice approaches correspond with such an understanding by bringing materials into conceptual IR and critical security thinking. Moreover, new materialist and practice approaches are closely connected on an ontological level, as both are theorized as overcoming a Cartesian dualism of mind and body (Mayer, 2012; Pouliot, 2010). Taken together, socio-material approaches introduce a networked understanding of human-material relations that is better suited for studying the contingent and assembled nature of (in)security on the ground.
Third, and very crudely, an emphasis on contingency challenges wide-ranging theoretical abstractions and gives an important role to the micro level and empiricism in socio-material approaches. Calling out a misleading anthropocentrism, the analytical focus shifts towards assemblages and the associated idea that complex human-material associations are constitutive of the world. In this respect, assemblages are characterized by a contingent and volatile nature in that the heterogeneous relations among human and material actors require constant enactment through material and discursive practices (Mayer, 2012: 168–9). Applied to the security domain, this means that “security actors perform security by enrolling, assembling and translating heterogeneous elements into stable assemblages that can be presented as definitive security solutions or threats” (Schouten, 2014a: 32). An analysis that follows the symmetry principle walks a tightrope in that it favors neither material nor human factors but focuses on their interplay, thereby highlighting contingency over linearity and becoming over continuity. As a consequence, a large part of the socio-material literature eschews investigating macro phenomena and rather concentrates on specific, traceable relations that are accessible through detailed ethnographic, interview, or other qualitative methods. Moreover, this complexity suggests exchanging wide-ranging social concepts for an empiricism that “requires us to attend at once to the specificity of materials, to the contingencies of physical geography, the tendencies of history and the force of political action” (Barry, 2013: 183). In attempting to capture the specificity of human–non-human relations, socio-material approaches can be characterized as an interpretative-empiricist program that aims to “provide a parsimonious and open ontological vocabulary meaningful for conducting empirical research” (Bueger, 2014: 60).
Having outlined three central tenets of socio-material approaches, it is now possible to spell out more clearly some analytical starting points to study how security assemblages emerge and sustain themselves. In so doing, the next section foregrounds the notions of problematization and stabilization as useful analytical devices to study how security assemblages are constructed around specific technologies. This way it becomes possible to analytically grasp the desire for enrolling technical devices, how this is legitimized, and the intricate processes that stabilize a security assemblage.
Analytical concepts of socio-material security
This section discusses two particular moments in the emergence of security assemblages that illustrate the usefulness of socio-material approaches. Focusing on the practices and relations among human and non-human elements within an assemblage opens the black box of security governance and enables us to study how actors produce a security problem, enroll technology, and lock in certain practices. In this setup, technology is dissolved into a complex network of humans and things, which renders its effects a function of contingent relations. Consequently, the analyst is tasked with tracing the socio-material relations that enact security practices. Two notions are highlighted as particularly useful starting points for studying the use of technology in security governance, i.e. problematization and stabilization, which will be attended to in turn.
Socio-material approaches to security abandon preconceived notions of security threats. Instead, they highlight the contingent process of how security is problematized. When taking apart specific security practices, technology is usually directed at a security problem. The creation of a security problem, as will be discussed below, initiates stabilization processes of a governance assemblage that addresses the problem. As a part of this, particular technologies emerge as useful, desirable and legitimate responses to a security threat. In turn, and building on the premise that “technology is society made durable” (Latour, 1991), specific objects or material devices are particularly strong anchors that cement specific problematizations of security. Going further, material objects introduce potentialities and constraints into the security practices of actors, which then become more difficult to change. This way, recalling the fundamental instability of assemblages, technology plays an important part in co-producing a security problem and prescribing preferred relations and practices, because “if order requires resisting inherent transformation, then ordering – stabilizing relations – is most effectively done exactly when the ‘actors’ are not human” (Schouten, 2014b: 86; emph. in orig.; see also Callon and Latour, 1981). Importantly, however, while the performative effects of technologies produce certain constraints, security practices retain the possibility for change and, thereby, political intervention (cf. Law and Singleton, 2014).
Once a security problematization has been initiated, a second analytical step examines the ways in which an assemblage of human and non-human actors is formed around it. As outlined above, socio-material relations are considered contingent and unstable.
As a result, the focus of analysis turns towards instances of stabilization and how security assemblages are consolidated to the point when their assembled character is effectively veiled (cf. Bourne, 2012), or in other words, when the complex net of socio-material relations has become black-boxed in “a process that makes the joint production of actors and artifacts entirely opaque” (Latour, 1999: 183). The notion of a black box highlights the provisional and contested nature of assemblages and demarcates a preliminary endpoint to stabilization processes (Bellanova and Duez, 2012; Bourne, 2016; Schouten, 2014a). Such stabilization processes are characterized by the insertion of material objects and enacted relations. To better grasp this idea, the notion of translation serves as an auxiliary concept. It marks the traceable associations between two entities2 and emphasizes that the relations between them cannot be conceived of as direct, causal, pure, or unmediated, but as transformative (Latour, 2005: 106–9). Graham Harman (2009: 15) illustrates translation with reference to Stalin’s and Zhukov’s orders to encircle Stalingrad. Such a strategy cannot be understood as a frictionless force that seamlessly moves through the military network. In contrast, the broader plan needs to be translated into maps, support lines, and localized orders that are adapted for the different levels of officers and soldiers who ultimately have to move their bodies and weapons. Accordingly, translation brings out the mediated form of assembled relations, so that what is transported through these networks can get a different spin on its way and does not stay the same. If we turn these ideas to security technologies, it becomes clear that they are themselves the result of specific processes of translation:
There is no unequivocal line of causality from any self-evident or universal idea of security to the design of particular security devices. Rather, in the trajectory of the development of a security device and the operational practice surrounding it, generic definitions of security problems are “translated” step by step into concrete policy, technological requirements and specifications, designs of security workflows, and indeed technological configurations. At each step of translation, many sociopolitical and technological factors influence the trajectory and co-produce specific security devices. (Valkenburg and van der Ploeg, 2015: 327)
Accordingly, what happens in the development, adoption, and operation of a security technology is that materials and practices stabilize the relations within a security assemblage. In doing so, they lock in particular agential potentialities and constraints in terms of how the technology is used and, effectively, what kind of effects it generates.
Taken together, accounting for the contingent micro processes involved in socio-material security assemblages requires us to (a) trace the specific problematizations called upon to render the use of technology desirable and legitimate, and (b) examine the materials and practices in stabilization processes that lock in particular security assemblages, which eventually (c) create relatively durable and recurring effects.
legitimate, and (b) examine the materials and practices in stabilization processes that lock in particular security assemblages, which eventually (c) create relatively durable and recurring effects. This three-step model constitutes a flexible and empiricist research approach. Most importantly, it offers practical and systematic starting points for tracing socio-material relations (Law, 2009; Law and Singleton, 2014). Considering the potential pool of actors, practices, and relations in any number of problematizations, the identification of who or what matters, and how, is an inductive process (cf. Mayer, 2012: 178). The chapter now turns to the ways in which technology becomes enrolled in security practices in order to show how satellite surveillance of North Korea is rendered legitimate and desirable, and how it reinforces a hostile image of the country that complicates dialogue. For this purpose, it draws on ten interviews with DPRK-focused non-governmental satellite observation experts and imagery analysts from non-governmental organizations (NGOs), universities and the corporate sector, as well as NGO reports, official policy documents and written satellite imagery analyses of North Korea.3 The following section inquires into the problematization that renders the satellite observation of North Korea desirable and legitimate. Next, the chapter examines the specific stabilization of the socio-material assemblage in response to this problematization before outlining its more durable effects.
Satellite observation of North Korea: of myths, mystery and mistrust

Problematization of North Korea

A socio-material approach to security takes a step back and traces the specific problematization of the North Korean security assemblage to examine what the satellite technology is targeting. Accordingly, the questions that need to be answered are why satellite imaging is adopted by a set of actors and what makes space-based surveillance a desirable and legitimate practice for them. For the case of North Korea, this comes down to four factors: pervasive uncertainty in the study of North Korea, the country’s pariah status, permissive international law, and a sense of technological curiosity. While field visits are the bread and butter of many area studies – as are nuclear inspections and humanitarian on-the-ground assessments in their respective domains – the limited physical access and lack of reliable government information render studying North Korea a painstaking and uncertain craft. Although the academic debate on North Korea is quite varied, most scholars agree that data generation remains difficult, so that they regularly advise taking results with a grain of salt due to remaining uncertainties. The alleged secrecy of North Korea is mirrored in the many titles of publications trying to compensate for the lack of transparency by promising to demystify, to reveal the hidden, to speak truth, or to describe the “real” North Korea (Cumings et al., 2006; Hassig and Oh, 2009; Lankov, 2013; Lintner, 2005; Park, 2012). The same problematization is found in the
human rights and security discourse about the country and feeds the desire for persistent satellite surveillance. The political prison camp system is arguably the most salient issue for NGOs working on North Korean human rights. The camps are identified as sites of crimes against humanity, including unlawful detention, torture, excessive labor, and executions (UN Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea, 2014). Despite broad international accusations, the regime keeps denying the existence of such prison camps in official communication, arguing that the evidence is unreliable and “based on the false testimonies by the ‘defectors from the north’” (Permanent Representative of the Democratic People’s Republic of Korea to the United Nations Office at Geneva, 2015). In response to the encompassing uncertainty and the regime’s reluctance to concede the existence of political prison camps, satellite imagery is deemed a suitable way to cope with this problem of uncertainty for policymaking. The Committee for Human Rights in North Korea (HRNK) regularly releases reports on individual camps employing satellite imagery analysis (e.g., Bermudez et al., 2015; Scarlatoiu and Bermudez, 2015). The title of its flagship series, “Hidden Gulag,” expressively encapsulates the fundamental problem with North Korean human rights violations (Hawk, 2003). The mystery and severity had not changed when “Hidden Gulag” went into its fourth update, which still attests to North Korea’s “extreme secrecy,” such that “the world outside North Korea has practically no knowledge of the fates or whereabouts of the former and present prisoners” (Hawk, 2015: 4–5). Similar problems of uncertainty are echoed by Amnesty International and other human rights NGOs (Amnesty International, 2013; OHCHR, 2015) and, perhaps unsurprisingly, also apply to the nuclear program (e.g., Albright et al., 2014; Dinville and Bermudez, 2016; Hansen and Liu, 2014; Liu, 2016). Satellite observation is portrayed as an effective remedy for the uncertainty problem. More specifically, a concrete list of geographical sites experiences constant surveillance, including the Nyŏngbyŏn nuclear facilities, the P’unggye-ri nuclear test site, various political prison and labor camps, and the Sohae satellite launching station. Given the limited options to actually enter North Korea to conduct research, these sites – sensed through commercial satellite imagery – serve as material proxies to produce knowledge about the country. Referring to the reliance on satellite observation, an analyst of North Korea’s nuclear program pointed out that the limited information is often based on propaganda, so that “satellite imagery is the key to what we can do” (interview with satellite imagery analyst, August 2014). In short, the promise of delivering material or technical evidence makes satellite data particularly influential in situations of uncertainty. Uncertainty, however, is a common factor in global security. In addition to the uncertainty problem, the application of satellite surveillance is further legitimized and made desirable by North Korea’s perception as a pariah state, lax international regulations, and a sense of curiosity about satellite
technology itself. Despite its global coverage, satellite surveillance is not panoptic but has a selective gaze. High-resolution observation satellites are tasked and pointed at particular areas of interest. North Korea is singled out due to its nuclear weapons program and human rights violations, which are seen as a threat to international security and regional peace (interviews with satellite imagery analysts, September and October 2014). According to this reasoning, North Korea’s self-imposed secrecy, nuclear brinkmanship and political prison camps make it a morally uncontested and legitimate target for non-governmental satellite surveillance, which transfers the observations from hitherto governmental and diplomatic arenas into the public at large. Moreover, analysts dismiss legal sovereignty as an obstacle to space-based observation. In fact, international law grants observation satellites considerable leeway, as one interlocutor noted: “The thing about satellites is that they are in space … So that’s quite nice because I’m not infringing someone’s sovereignty. I’m not going into their territory” (interview with satellite imagery analyst, August 2014). Rather than placing legal constraints on the desirability of satellite surveillance, international law and the ongoing marketization of satellite imagery normalize non-governmental satellite observation as a security practice. Lastly, the encouraging legal environment is met with a sense of curiosity and fascination with the fast development of satellite observation technologies. Imagery analysts enjoy working with satellite imagery and meticulously follow the publications of new imagery covering their areas of interest in North Korea. As a form of self-actualization, this allows them to pursue their genuine fascination with the various sites and independently verify reports with “something tangible” or even be the first to discover something (interviews with satellite imagery analysts, August and September 2014). Echoing academic efforts, practitioners see themselves faced with widespread secrecy and liken working on North Korea’s nuclear program to “solving a mystery” (interview with satellite imagery analyst, August 2014). At the same time, the problem definition is not independent of satellite technology when the technology is identified as an adequate solution to the uncertainty problem. Rather, the specific problematizations of security correspond with the technology’s socio-material potential and constraints and are adapted accordingly: satellite surveillance by non-state actors is limited to specific proxies of international security and human rights that relate to large built infrastructure such as nuclear power plants, test sites and political prison camps. Their visual representations are created to stand in stark opposition to North Korea’s secrecy and verbal assertions of innocence. Finally, in a feedback loop, it is the repeated sight of unmasking satellite imagery that further legitimates and feeds the desire for more satellite observation. From a political perspective, this specific problematization of security governance of North Korea is not without alternative but, in principle, constitutes an open and accessible moment for intervention. The
following section examines how this particular alternative – satellite observation of North Korea – is stabilized.

Actors, practices, and things as stabilizing anchors

Having clarified that the observation satellites target the secrecy surrounding North Korean security threats, this section examines how satellite observation of North Korea is stabilized through an expansion of human and non-human relations. It will be shown how these relations lend a particular authority to satellite observation. While visibility remains a strong marker of validity among analysts, all interviewees agreed that satellite imagery is far from self-evident but requires an interpretative process that is liable to technical and human errors. In a self-reflective piece on the difficulties of anticipating future North Korean nuclear tests via satellite imagery, two imagery analysts concede that “[n]either governments nor nongovernmental organizations have a particularly good track record predicting the North’s behavior […] because of analytical shortcomings” (Hansen and Liu, 2014). Satellite imagery is particularly useful for analyzing large-scale changes, and the images of the nuclear test site and political prison camps appear far removed from the horrors they are meant to depict (Hong, 2013). As a result, the regular public updates on North Korea’s nuclear sites often point to smaller, not clearly classifiable activities and, hence, inconclusive analyses (cf. Choe, 2016). In addition to the difficulties and qualified conclusions of interpretation, cloud cover can compromise vision, and the fixed orbits of satellites make the times of satellite observation relatively easy to determine.4 North Korea can exploit this predictability for camouflage, concealment and deception (CCD) tactics, leading to a “hide-and-seek played around Pyongyang’s advanced weapons programmes” (Hewitt, 2016). DPRK authorities, for instance, can time particular activities around satellite flyovers or use underground facilities, environmental covers, and canopies to obstruct vision. In essence, it becomes difficult to distinguish real from pseudo-activity and to determine whether a snow-cleared road is indicative of facility maintenance, missile test preparations, or deliberate confusion. In tackling this interpretation problem, three distinct steps can be highlighted. They make clear how the assemblage grows through different stages, picking up additional practices, things, and actors to stabilize itself in spite of the interpretation problem. First, satellite imagery analysts tackle the problem through specific coping strategies such as cautious phrasing, communicative validation, and a multiplication of sources. Aware of the difficulties and uncertainties involved in satellite imagery analysis, they attempt to improve techniques to construct an image that is an ever better representation of reality. In the accompanying texts, estimative or quantitative language expresses how likely analysts believe the interpretations to be accurate, for example, in the identification of a certain object or event: “there is a 20% chance,” possible,
probable, even chance, unidentified, or improbable (e.g., Liu, 2014). Moreover, as satellite imagery analysis requires considerable know-how, analysts draw colleagues into the assemblage and confer about certain interpretations, while NGOs and think tanks regularly include third-party experts such as former military analysts or private imagery intelligence companies (e.g., Amnesty International, 2013; Scarlatoiu and Bermudez, 2015; Scarlatoiu and Rios, 2014). In addition, where possible, satellite data of North Korea is assembled in relation to additional material from refugee testimony, state propaganda or media releases (interview with satellite imagery analyst, August 2014; Hawk, 2012). In one remarkable instance, analysts printed out high-resolution satellite imagery of political prison camps on large paper rolls and, together with North Korean refugees, walked across the printed satellite image. After a while, the refugees could orient themselves and recognized particular structures that had remained hidden to the analysts, revealing their function (interview with satellite imagery analyst, September 2014). In doing so, North Koreans were actively enrolled into the assemblage, giving their written or spoken testimony a material anchor. To put it briefly, tracing the relations of a satellite image reveals a complex set of practices, objects and actors that help stabilize the socio-material assemblage. Second, the uncertainty of satellite surveillance is further tackled through technical improvements and imagery annotations. It is suggested that higher-resolution satellite imagery is able to reveal even more information on human rights violations in North Korea (Hawk, 2012: VII–VIII; UN Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea, 2014: 14) and damage the regime, as better imagery “will go a long way towards helping public sentiment understand the gravity of what is happening in North Korea, and help strengthen thousands of eyewitness and personal testimonies by defectors, including former prison guards” (Hong, 2011). For this, mid-resolution imagery from one provider is used to tip and cue the analyst to a notable change. Only then is higher-resolution and more expensive imagery enrolled into the assemblage for a more detailed analysis. Arguably, less blurry, color images heighten the emotional appeal for policymakers and the public, although the identification of particular objects still remains difficult for untrained observers. That is why conclusions even from high-resolution satellite imagery cannot stand without an explanatory text. Analysts usually annotate satellite imagery with arrows, circles, and captions to make it legible – effectively explaining to readers what they are to “see” in an image. So after the satellite data stream is transmitted to a ground station, computed, transformed into an image, perhaps pan-sharpened and made legible, it becomes a mobile thing that is neither purely knowledge nor material. Rather, it is part and result of an assemblage of cameras, rocket technology, government regulations, international law, GIS software, analysts, and so on. This new mobile thing is then relatively stable in various contexts such as intelligence briefings, lecture rooms, television shows, or newspapers (Rothe, 2015: 117).
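The pan-sharpening mentioned in passing is a useful reminder of how much computation sits between the raw data stream and the legible image. As a rough illustration of the kind of arithmetic involved – a minimal, hypothetical sketch of a Brovey-transform fusion in Python/NumPy, assuming already co-registered arrays, and emphatically not the proprietary pipelines commercial imagery providers actually run – such a step might look as follows:

```python
# Illustrative sketch only: a simple Brovey-transform pan-sharpen.
# Array names and shapes are hypothetical stand-ins for real imagery.
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Fuse a multispectral cube (bands, H, W), already resampled to the
    panchromatic grid, with a higher-native-resolution pan band (H, W).

    Each band is scaled by the ratio of the pan intensity to the per-pixel
    mean multispectral intensity, injecting spatial detail while roughly
    preserving the spectral proportions of the original bands."""
    intensity = ms.mean(axis=0) + eps   # (H, W) mean across bands; eps avoids /0
    return ms * (pan / intensity)       # ratio broadcasts over the band axis

# Hypothetical usage with stand-in data:
ms = np.random.rand(3, 512, 512).astype(np.float32)   # e.g. R, G, B bands
pan = np.random.rand(512, 512).astype(np.float32)     # panchromatic band
sharpened = brovey_pansharpen(ms, pan)
```

The point of the sketch is not the particular formula but that even this simplest fusion step embeds interpretive choices (normalization, band weighting) that are no longer visible in the finished, annotated image – a small instance of the black boxing discussed above.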
Third, the high cross-contextual mobility of annotated satellite imagery draws more stabilizing relations into the assemblage as it is transferred into the public. This final stabilizing move connects satellite imagery to the authority of other organizations and political systems that widen the socio-material assemblage to render the satellite image credible and functional. This moment of interaction becomes manifest in NGO reports on political prison camps by Amnesty International or HRNK (Hawk, 2015; Scarlatoiu and Bermudez, 2015), when major newspapers pick up on satellite imagery analyses on the likelihood of nuclear tests in North Korea (e.g., Choe and Sanger, 2014; Fifield, 2016), but also when satellite imagery is enrolled in institutionalized political processes such as in testimony by witnesses in hearings before committees of the US Senate (e.g., Cha, 2015) and House of Representatives (e.g., Scarlatoiu, 2014) or in public exchanges in the UN (e.g., UN Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea, 2014). When satellite imagery of North Korea is cited as evidence in public news outlets, NGO reports, and official political statements, it casts further stabilizing anchors across a wide network of human and material actors, whereby established social authorities endorse the use of satellite observation as an effective means to combat North Korea’s secrecy and threat to international and human security.

Locking in a hierarchy of evidence

The preceding sections demonstrate how the security assemblage around North Korean satellite imagery is stabilized. The interpretation problem of satellite observation is not solved as such but rather crafted into complex relations within a socio-material assemblage. In an almost sequential process, the governance assemblage is stabilized during the imagery analysis, its conversion into a mobile thing, and its introduction into public political processes. In the process, the complexity, heterogeneity, and uncertainty inherent in the practice of satellite observation are forgotten and black boxed (cf. Bourne, 2016), effectively constructing satellites as a functional, effective, and uniform technology in the security governance of North Korea. Non-governmental satellite observation of North Korea as presented above actualizes one of multiple socio-material potentialities and leads NGOs and think tanks to simulate Cold War military-intelligence practices. It packs up remaining uncertainties and establishes satellite observation as a legitimate and effective technology. Knowledge derived from satellite imagery gains authority by way of enrolling practices, things, and actors that enable an allegedly impartial observation of built infrastructure – similar to a lab scientist studying cell growth with a microscope. In the process, satellites lock in a hierarchy of evidence so that “I don’t have to believe your word if you can prove it to me with an image” (interview with satellite imagery analyst, August 2014). The way security threats and human rights violations are captured in material terms engenders three political implications concerning the way North
Korea is represented, the externalization of responsibility for insecurities, and the potential for political dialogue. Already during the advent of commercial observation satellites in global politics it was reasoned that “they arise in a context of ongoing struggles for control and authority, amplifying certain voices and inhibiting others” (Litfin, 1998: 208). In this respect, the constant surveillance of North Korea via satellite imagery runs the risk of (re)producing North Korea as a hostile, foreign, and backward other (cf. Hong, 2013; Shim, 2014). That is not to say that individual revelations about human rights violations and security threats are principally misguided or even unjustified – to the contrary. However, if the increasing frequency and envisioned real-time satellite observations (Olbrich and Witjes, 2016) of North Korea provide only limited new information on the state of human security, they actualize a perhaps unintended potentiality of further delegitimizing the country. This suggests that there is a difference between the potential of observation satellites to discover something new and to repeatedly confirm the status quo. In other words, continual satellite surveillance is a balancing act between advancing human rights and security of the North Korean population and further blaming, undermining, and destabilizing the regime (cf. Hong, 2013). Meanwhile, potential contestations by North Korea that refer to political discourse to justify, explain, or relativize its actions are effectively preempted: despite the packed-up and hidden uncertainties, the satellite assemblage speaks more credibly than North Korea itself, as no “valid” counter-narrative can be offered that follows the same hierarchy of material evidence. In doing so, contemporary satellite observation of the country risks performing an antagonistic division that is reminiscent of the Cold War. Moreover, reinforcing this trend, the global view of satellites concentrates on a handful of sites that function as proxies to demonstrate the country’s poor human rights record and nuclear threat, while the rest as of now remains largely unseen and, therefore, excluded from public discourse. The satellites’ targeted and unidirectional gaze absolves others of accountability for existing insecurities, as their material origins are confined to North Korean territory. In doing so, the hierarchy of evidence risks reducing the complexity of the insecurities to material manifestations and externalizing sole responsibility to North Korea. Put differently, the continuous view from above is conducive to oversimplifications of security problems. The specific practices and relations that are in place to tackle the problematized uncertainty surrounding North Korea produce an authoritative knowledge account of the human rights and security situation at the risk of casting aside significant contextual factors. To name only a few tangible examples, this arguably bears the danger of obscuring the relevance of UN sanctions, China’s mixed track record of implementing them, or its position towards North Korean refugees, who are classified as economic migrants and repatriated to face punishment at the hands of their own government. Similarly, this applies to the deployment of roughly 29,000 US soldiers on the
Korean peninsula, annual large military maneuvers of the US and South Korea, and the role of China’s military modernization in US security strategy. Lastly, the original use of satellite surveillance in the military domain has led to intimate relations of varying degrees between state, business, and non-profit actors even after the commercialization of satellite imagery, raising concerns regarding the independence, motivations, and effects of satellite observation (Herscher, 2014; Parks, 2012; Rothe, 2015; Witjes and Olbrich, 2017). Satellites do not allow for neutral observation but rather become entangled in the security problem itself. To a certain degree, non-state actors adopt a military technology, with its ways of seeing and speaking, and buy into state-informed security practices. In doing so, non-governmental satellite observation adds to the ways in which the observers are linked to contextual security dynamics (cf. Jacobsen, 2015). Moreover, satellite imagery risks acting as a stand-in for North Korea as a dialogue partner and lays the ground for a policy strategy that discourages dialogue and instead aims for regime change (cf. Hong, 2011). This strategy becomes evident in “Hidden Gulag” (Hawk, 2003: 9) and – surprisingly bluntly – in an interview with UN official Marzuki Darusman, then Special Rapporteur on the situation of human rights in the DPRK and member of the UN Commission of Inquiry on Human Rights in North Korea. He stated that the only way to end atrocities in North Korea is “if the Kim family is effectively displaced, is effectively removed from the scene, and a new leadership comes into place” (Gladstone, 2015). In this line of thinking, satellites lock in a hierarchy of evidence in the production of security knowledge. Thus, security governance is stabilized in a way that barely needs North Korea except as a projection surface for satellite imagery of human rights violations and security threats. NGOs and other non-state actors need to weigh the consequences of being enrolled into practices that potentially discourage dialogue and render North Korea a passive enemy to be monitored and disciplined. All this is not to say that it is unimportant to monitor security developments and document human rights violations in North Korea. Constant, public satellite surveillance, however, does not imply equally frequent revelations but often re-confirms existing knowledge. At the same time, it comes with the hitherto less debated implications that North Korea is represented as an adversary, that responsibility for insecurities is externalized and that the potential for political dialogue is diminished.
Conclusion

Addressing recent calls for theoretical and empirical analyses of technology in IR and critical security studies (Burgess, 2014; Carpenter, 2016), this chapter points to socio-material approaches as a valuable entry point. More specifically, these approaches enable us to make sense of the rationales and
ways in which technologies become central to security assemblages. Acknowledging the variety of New Materialist and STS-inspired approaches, three common features are outlined. Most importantly, socio-material approaches in varying degrees highlight the agency of matter. It is argued that the strong reactions to material agency are rather surprising and perhaps misplaced given the actual content of the claim. More often than not, material agency is invoked to (re-)introduce the material affordances and constraints of technology into political analysis. In doing so, a focus on material agency highlights how technology, together with intentional human actors, coproduces political problems and how they are addressed. Such an approach usefully complements understandings of technology as an apolitical instrument or an autonomous driver of politics, which remain prominent in much IR research. Taken together, this reading opens up space for an important debate about the role of materiality and technology in global security beyond New Materialism and STS. In order to show how human and material agencies are entangled, the chapter turned to an empirical analysis of the satellite observation of North Korea by non-state actors. It demonstrated how satellite technology takes part in the problematization of North Korea as a secretive state and renders the use of commercial satellite imagery desirable and legitimate. Second, the notion of stabilization highlights how practices, things, and actors are enrolled into an assemblage to address the security problem. Once a security governance assemblage is successfully stabilized, it produces relatively durable effects. While satellite observation in specific contexts can contribute to the documentation of security threats and human rights violations, continuous space-based surveillance also engenders other effects. Remaining uncertainties inherent in the analysis of satellite imagery become materially anchored in a diverse assemblage of practices, actors and things, which locks in a hierarchy of evidence that speaks more authoritatively than North Korea itself. The constant disciplining view from above risks reifying adversarial relations, externalizes sole responsibility to North Korea and discourages political dialogue. In this sense, satellites become an important transition point when it comes to framing North Korea’s security and human rights situation, and they occupy a powerful position from which to determine a sense of urgency and (pre-)define policy options. In this reading, the agency of matter becomes manifest in the stability of the definition of security threats and how they are governed. At the same time, socio-material approaches to security highlight the unstable and non-determinist character of assemblages, which creates space for political resistance, intervention and, ultimately, human agency. While no analysis of a security assemblage can be exhaustive, the assemblage itself resembles a scattered, contingent network that locks in specific relations and practices. In turn, the assembled character of such a security arrangement implies the possibility of alternative forms of satellite observation that do not frame North Korea as an enemy but facilitate political dialogue.
While the analysis focuses on the non-governmental satellite observation of North Korea, the wider implications of these findings also illuminate other instances in which commercial satellite imagery is used to monitor human security. The crisis in Rakhine State, where Myanmar security forces are accused of ethnic cleansing, illustrates that it has become increasingly difficult to hide in a fog of war, even in the early phases of operations (Amnesty International, 2018). However, perhaps somewhat disillusioningly, the growing availability and use of commercial satellite imagery by non-governmental actors does not directly lead to responses that call out and stop atrocities. Relatedly, the goal promoted by the Satellite Sentinel Project of using satellite imagery to monitor security hotspots in Darfur in near real-time in order to deter further human rights violations has not fully come to pass. While non-governmental satellite surveillance can help to increase awareness of conflicts and human rights violations, perpetrators are unlikely to be deterred when their actions remain without political or legal consequences. In conjunction with other ICT-enabled practices, the growing availability of satellite imagery contributes to trends of remote management of security, development, and humanitarian action in the NGO sector (Kalkman, 2018). Among other things, this allows small groups of people to claim and execute global mandates but risks creating greater geographical and emotional distance and ignoring regionally specific conditions. Lastly, a focus on security assemblages constitutes an open and empirically oriented way of addressing the role of technology and its (un)intended consequences. However, this flexibility and focus on empirical detail comes with particular trade-offs. Avoiding more clearly delineated, pre-structured frameworks places responsibility on the researcher to empirically define the contours of the assemblage under study (Fine, 2002: 216). Moreover, emphasizing contingency and staying close to the empirical relations at hand comes with limitations in terms of making more general statements about technology in global security at large, or even predictive statements about the political implications of emerging technologies. Having said that, such trade-offs are located at the very intersection of New Materialism, STS and more traditional approaches to technology in global security and would benefit from insights across theoretical inclinations.
Notes

1 Among others, this includes anthropology, feminist studies, (political) philosophy, sociology, and science and technology studies (Barad, 2007; Bennett, 2010; Connolly, 2013; Coole and Frost, 2010; DeLanda, 2009; Deleuze and Guattari, 1987; Jasanoff, 2004; Latour, 2005; Law, 2009; Ong and Collier, 2005).
2 Latour invokes the term mediator, not entity, to denote that force does not simply pass through them – that would make them mere intermediaries – but they reshape that force in the process of translation. He states that “[f]or instance, fishermen, oceanographers, satellites, and scallops might have some relations with one
another, relations of such a sort that they make others do unexpected things – this is the definition of a mediator” (Latour, 2005: 106; emphasis in original).
3 For access to satellite imagery analyses of North Korea’s nuclear program and the human rights situation, see e.g. www.38north.org; www.hrnk.org; www.isis-online.org.
4 In fact, two thirds of our planet are usually covered in clouds (International Satellite Cloud Climatology Project, 2016), and satellite image captures of North Korea mostly occur sometime between 9 am and 2 pm to exploit the favorable position of the sun and avoid moisture that can influence image quality (interview with satellite imagery analyst, August 2014).
References

Acuto M and Curtis S (2014) Assemblage Thinking in International Relations. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 1–16.
Albright D, Kelleher-Vergantini S and Kim P (2014) On-Going Activity at North Korea’s Punggye-ri Test Site. Available at: http://isis-online.org/uploads/isis-reports/documents/Punggye-ri_May9_2014.pdf (accessed 31 October 2018).
Amicelle A, Aradau C and Jeandesboz J (2015) Questioning Security Devices: Performativity, Resistance, Politics. Security Dialogue 46(4): 293–306.
Amnesty International (2013) North Korea: New Satellite Images Show Continued Investment in the Infrastructure of Repression. Available at: www.amnestyusa.org/sites/default/files/asa240102013en.pdf (accessed 31 October 2018).
Amnesty International (2018) Myanmar: “We Will Destroy Everything”: Military Responsibility for Crimes against Humanity in Rakhine State. Available at: www.amnesty.org/download/Documents/ASA1686302018ENGLISH.PDF (accessed 31 October 2018).
Asal V and Beardsley K (2007) Proliferation and International Crisis Behavior. Journal of Peace Research 44(2): 139–155.
Baker J C (2001) New Users and Established Experts: Bridging the Knowledge Gap in Interpreting Commercial Satellite Imagery. In Baker J C, O’Connell K M & Williamson R A (eds.) Commercial Observation Satellites: At the Leading Edge of Global Transparency. Arlington: RAND, 533–557.
Barad K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham/London: Duke University Press.
Barry A (2013) Material Politics: Disputes along the Pipeline. Oxford: Wiley-Blackwell.
Bellanova R and Duez D (2012) A Different View on the “Making” of European Security: The EU Passenger Name Record System as a Socio-Technical Assemblage. European Foreign Affairs Review 17(2/1): 109–124.
Bennett J (2010) Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press.
Bermudez J S, Dinville A and Eley M (2015) North Korea: Imagery Analysis of Camp 16. Available at: www.hrnk.org/uploads/pdfs/ASA_HRNK_Camp16_v8_fullres_FINAL_12_15_15.pdf (accessed 31 October 2018).
Bourne M (2012) Guns Don’t Kill People, Cyborgs Do: A Latourian Provocation for Transformatory Arms Control and Disarmament. Global Change, Peace & Security 24(1): 141–163.
Bourne M (2016) Invention and Uninvention in Nuclear Weapons Politics. Critical Studies on Security 4(1): 6–23.
Bousquet A (2014) Welcome to the Machine: Rethinking Technology and Society through Assemblage Theory. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 91–97.
Breu C (2016) Why Materialisms Matter. Symploke 24(1–2): 9–26.
Bueger C (2014) Thinking Assemblages Methodologically: Some Rules of Thumb. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 58–66.
Burgess J P (2014) The Future of Security Research in the Social Sciences and Humanities. Strasbourg: European Science Foundation.
Burningham G (2016) How Satellite Imaging Will Revolutionize Everything from Stock Picking to Farming. Newsweek. Available at: http://europe.newsweek.com/why-satellite-imaging-next-big-thing-496443?rm=eu (accessed 31 October 2018).
Byman D (2013) Why Drones Work: The Case for Washington’s Weapon of Choice. Foreign Affairs 92(July/August): 32–43.
Callon M (1986) The Sociology of an Actor-Network: The Case of the Electric Vehicle. In Callon M, Law J & Rip A (eds.) Mapping the Dynamics of Science and Technology: Sociology of Science and the Real World. Basingstoke: Macmillan Press, 19–34.
Callon M and Latour B (1981) Unscrewing the Big Leviathan: How Actors Macro-Structure Reality and How Sociologists Help Them to Do So. In Knorr-Cetina K D & Cicourel A V (eds.) Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macro-Sociologies. Boston/London/Henley: Routledge & Kegan Paul, 277–303.
Carpenter C (2016) The Future of Global Security Studies. Journal of Global Security Studies 1(1): 92–94.
Cha V D (2015) Assessing the North Korea Threat and U.S. Policy: Strategic Patience or Effective Deterrence? Statement before the Senate Committee on Foreign Relations, Subcommittee on East Asia, the Pacific, and International Cybersecurity Policy. Available at: www.foreign.senate.gov/imo/media/doc/100715_Cha_Testimony.pdf (accessed 31 October 2018).
Choe S-H (2016) Rumors, Misinformation and Anonymity: The Challenges of Reporting on North Korea. New York Times. Available at: www.nytimes.com/2016/09/16/insider/reporting-on-north-korea-is-often-risky-and-controversial.html (accessed 31 October 2018).
Choe S-H and Sanger D E (2014) Increased Activity at North Korean Nuclear Site Raises Suspicions. New York Times. Available at: www.nytimes.com/2014/04/23/world/asia/north-korea-said-to-be-readying-nuclear-test.html (accessed 31 October 2018).
Connolly W E (2013) The ‘New Materialism’ and the Fragility of Things. Millennium – Journal of International Studies 41(3): 399–412.
Coole D and Frost S (eds.) (2010) New Materialisms: Ontology, Agency, and Politics. Durham/London: Duke University Press.
Cudworth E and Hobden S (2013) Of Parts and Wholes: International Relations beyond the Human. Millennium – Journal of International Studies 41(3): 430–450.
Cumings B, Abrahamian E and Maoz M (2006) Inventing the Axis of Evil: The Truth about North Korea, Iran, and Syria. New York: New Press.
DeLanda M (2009) A New Philosophy of Society: Assemblage Theory and Global Complexity. London: Continuum.
Deleuze G and Guattari F (1987) A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.
Deudney D (1993) Dividing Realism: Structural Realism versus Security Materialism on Nuclear Security and Proliferation. Security Studies 2(3–4): 5–36.
Dinville A and Bermudez J S (2016) Checks and Balances: Thermal Imagery Analysis of Yongbyon. 38 North. Available at: http://38north.org/2016/10/yongbyon102516/ (accessed 31 October 2018).
Fifield A (2016) North Korea Could Be Preparing for Fifth Nuclear Test, South Korea’s Park Warns. The Washington Post. Available at: www.washingtonpost.com/world/north-korea-could-be-preparing-for-another-nuclear-test-south-koreas-park-warns/2016/04/18/a3acd276-42f0-4da5-a2bb-2c21008f0c92_story.html (accessed 31 October 2018).
Fine B (2002) The World of Consumption: The Material and Cultural Revisited. Milton Park/New York: Routledge.
Gladstone R (2015) North Korea: Investigator Rules Out Reforms by Kim. The New York Times. Available at: www.nytimes.com/2015/02/03/world/asia/north-korea-investigator-rules-out-reforms-by-kim.html (accessed 31 October 2018).
Hansen N and Liu J (2014) Why a Nuclear Test May Not Be Imminent: Update on North Korea’s Punggye-Ri Nuclear Test Site. 38 North. Available at: http://38north.org/2014/05/punggye051314/ (accessed 31 October 2018).
Harman G (2009) Prince of Networks: Bruno Latour and Metaphysics. Melbourne: re.press.
Hassig R C and Oh K D (2009) The Hidden People of North Korea: Everyday Life in the Hermit Kingdom. Plymouth: Rowman and Littlefield.
Hawk D (2003) The Hidden Gulag – Exposing North Korea’s Prison Camps. Committee for Human Rights in North Korea. Available at: www.hrnk.org/uploads/pdfs/The_Hidden_Gulag.pdf (accessed 31 October 2018).
Hawk D (2012) The Hidden Gulag – Second Edition. Committee for Human Rights in North Korea. Available at: www.hrnk.org/uploads/pdfs/HRNK_HiddenGulag2_Web_5-18.pdf (accessed 31 October 2018).
Hawk D (2015) The Hidden Gulag IV – Gender Repression & Prisoner Disappearances. Committee for Human Rights in North Korea. Available at: www.hrnk.org/uploads/pdfs/Hawk_HiddenGulag4_FINAL.pdf (accessed 31 October 2018).
Herrera G L (2006) Technology and International Transformation: The Railroad, the Atom Bomb, and the Politics of Technological Change. Albany: SUNY Press.
Herscher A (2014) Surveillant Witnessing: Satellite Imagery and the Visual Politics of Human Rights. Public Culture 26(3): 469–500.
Hewitt G (2016) The Eyes in the Sky over North Korea. Space War. Available at: www.spacewar.com/reports/The_eyes_in_the_sky_over_North_Korea_999.html (accessed 31 October 2018).
Hoijtink M (2017) Governing in the Space of the “Seam”: Airport Security after the Liquid Bomb Plot. International Political Sociology 11(3): 308–326.
Hong A (2011) How to Free North Korea. Foreign Policy. Available at: http://foreignpolicy.com/2011/12/19/how-to-free-north-korea/ (accessed 31 October 2018).
Hong C (2013) The Mirror of North Korean Human Rights. Critical Asian Studies 45(4): 561–592.
International Satellite Cloud Climatology Project (2016) Cloud Climatology: Global Distribution and Character of Clouds. Available at: http://isccp.giss.nasa.gov/overview.html (accessed 31 October 2018).
Jacobsen K L (2015) The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity. London/New York: Routledge.
Jasanoff S (ed.) (2004) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge.
Jeandesboz J (2016) Smartening Border Security in the European Union: An Associational Inquiry. Security Dialogue 47(4): 292–309.
Kalkman J P (2018) Practices and Consequences of Using Humanitarian Technologies in Volatile Aid Settings. Journal of International Humanitarian Action 3(1): 1–12.
Lankov A (2013) The Real North Korea: Life and Politics in the Failed Stalinist Utopia. Oxford: Oxford University Press.
Latour B (1991) Technology Is Society Made Durable. In Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London/New York: Routledge, 103–131.
Latour B (1999) Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge: Harvard University Press.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Law J (1991) Introduction: Monsters, Machines and Sociotechnical Relations. In Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London/New York: Routledge, 1–23.
Law J (2009) Actor Network Theory and Material Semiotics. In Turner B S (ed.) The New Blackwell Companion to Social Theory. Oxford: Blackwell Publishing, 141–158.
Law J and Singleton V (2014) ANT, Multiplicity and Policy. Critical Policy Studies 8(4): 379–396.
Leander A (2013) Technological Agency in the Co-Constitution of Legal Expertise and the US Drone Program. Leiden Journal of International Law 26(4): 811–831.
Leese M (2015) “We Were Taken by Surprise”: Body Scanners, Technology Adjustment, and the Eradication of Failure. Critical Studies on Security 3(3): 269–282.
Lintner B (2005) Great Leader, Dear Leader: Demystifying North Korea under the Kim Clan. Seattle: University of Washington Press.
Litfin K T (1998) Satellites and Sovereign Knowledge: Remote Sensing of the Global Environment. In Litfin K T (ed.) The Greening of Sovereignty in World Politics. Cambridge: MIT Press, 193–221.
Liu J (2014) No Sign of Preparations for an Impending Nuclear Test at North Korea’s Punggye-Ri. 38 North. Available at: http://38north.org/2014/12/punggye121014/ (accessed 31 October 2018).
Liu J (2016) Is North Korea Preparing for a Fifth Nuclear Test? 38 North. Available at: http://38north.org/2016/02/punggye021616/ (accessed 31 October 2018).
Mayer M (2012) Chaotic Climate Change and Security. International Political Sociology 6(2): 165–185.
McCarthy D R (2013) Technology and “The International” Or: How I Learned to Stop Worrying and Love Determinism. Millennium – Journal of International Studies 41(3): 470–490.
OHCHR (2015) Summary Prepared by the Office of the United Nations High Commissioner for Human Rights in Accordance with Paragraph 15 (b) of the Annex to Human Rights Council Resolution 5/1 and Paragraph 5 of the Annex to Council Resolution 16/21. Geneva.
Olbrich P and Witjes N (2016) Sociotechnical Imaginaries of Big Data: Commercial Satellite Imagery and Its Promise of Speed and Transparency. In Bunnik A, Cawley A, Mulqueen M & Zwitter A (eds.) Big Data Challenges: Society, Security, Innovation and Ethics. London: Palgrave Macmillan, 115–126.
Ong A and Collier S J (eds.) (2005) Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems. Malden: Blackwell Publishing.
Park H S (2012) North Korea Demystified. Amherst: Cambria.
Parks L (2012) Zeroing in: Overhead Imagery, Infrastructure Ruins, and Datalands in Afghanistan and Iraq. In Packer J & Crofts Wiley S B (eds.) Communication Matters: Materialist Approaches to Media, Mobility and Networks. Abingdon: Routledge, 78–92.
Permanent Representative of the Democratic People’s Republic of Korea to the United Nations Office at Geneva (2015) Letter Dated 5 February 2015 from the Permanent Representative of the Democratic People’s Republic of Korea to the United Nations Office at Geneva Addressed to the President of the Human Rights Council. Geneva.
Pouliot V (2010) The Materials of Practice: Nuclear Warheads, Rhetorical Commonplaces and Committee Meetings in Russian–Atlantic Relations. Cooperation and Conflict 45(3): 294–311.
Rosenau J N (1990) Turbulence in World Politics: A Theory of Change and Continuity. Princeton: Princeton University Press.
Rothe D (2015) Von weitem sieht man besser: Satellitensensoren und andere Akteure der Versicherheitlichung. Zeitschrift für Internationale Beziehungen 22(2): 97–124.
Scarlatoiu G (2014) The Shocking Truth about North Korean Tyranny. Hearing of the U.S. House of Representatives Committee on Foreign Affairs, Subcommittee on Asia and the Pacific. Available at: http://docs.house.gov/meetings/FA/FA05/20140326/101981/HHRG-113-FA05-Transcript-20140326.pdf (accessed 31 October 2018).
Scarlatoiu G and Bermudez J S (2015) Unusual Activity at the Kanggon Military Training Area in North Korea: Evidence of Execution by Anti-Aircraft Machine Guns? HRNK Insider. Available at: www.hrnk.org/uploads/pdfs/HRNKInsider_Greg_Joe_4_29_15.pdf (accessed 31 October 2018).
Scarlatoiu G and Rios B (2014) On Human Rights Day, Allsource Analysis (ASA) and HRNK Partner to Monitor North Korean Political Prisons. Committee for Human Rights in North Korea. Available at: www.hrnk.org/events/announcements-view.php?id=20 (accessed 31 October 2018).
Schouten P (2013) The Materiality of State Failure: Social Contract Theory, Infrastructure and Governmental Power in Congo. Millennium – Journal of International Studies 41(3): 553–574.
Schouten P (2014a) Security as Controversy: Reassembling Security at Amsterdam Airport. Security Dialogue 45(1): 23–42.
Schouten P (2014b) Security in Action: How John Dewey Can Help Us Follow the Production of Security Assemblages. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 83–90.
Shim D (2014) Remote Sensing Place: Satellite Images as Visual Spatial Imaginaries. Geoforum 51(January): 152–160.
Srnicek N (2014) Cognitive Assemblages and the Production of Knowledge. In Acuto M & Curtis S (eds.) Reassembling International Theory: Assemblage Thinking and International Relations. Basingstoke/New York: Palgrave Macmillan, 40–47.
UN Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea (2014) Report of the Detailed Findings of the Commission of Inquiry on Human Rights in the DPRK. Available at: www.ohchr.org/Documents/HRBodies/HRCouncil/CoIDPRK/Report/A.HRC.25.CRP.1_ENG.doc (accessed 31 October 2018).
Valkenburg G and van der Ploeg I (2015) Materialities between Security and Privacy: A Constructivist Account of Airport Security Scanners. Security Dialogue 46(4): 326–344.
Walters W (2014) Drone Strikes, Dingpolitik and Beyond: Furthering the Debate on Materiality and Security. Security Dialogue 45(2): 101–118.
Waltz K N (1979) Theory of International Politics. Reading: Addison-Wesley.
Witjes N and Olbrich P (2017) A Fragile Transparency: Satellite Imagery Analysis, Non-State Actors, and Visual Representations of Security. Science and Public Policy 44(4): 524–534.
5 Vision, visuality, and agency in the US drone program
Alex Edney-Browne
Vision and visuality1 play a crucial role in International Relations (IR). IR’s “visual turn” has demonstrated that aesthetic practices matter because of the “type of insights and understandings [of world politics] they facilitate,” and because “politics itself always has an aesthetic” – that is, political actors understand and communicate with and through aesthetics, whether they do so consciously or not (Bleiker, 2017: 262). Politics is visual and visuality is political, as aesthetic practices are integral to how “we – as political and cultural collectives – speak, hear, visualise and feel about ourselves and others”; these practices frame “what is thinkable and doable” and are thus “political at their very core” (Bleiker, 2017: 262). Despite the visual turn, with increasingly rich work produced on visuality and world politics particularly in the last fifteen years (see Bleiker, 2018), the political significance of “visual” technologies as weapons and surveillance tools has not provoked much contention amongst IR academics.2 Several scholars have published on these technologies, but few have debated the question of who or what has agency in these human-technology apparatuses, and whether avenues for counter-hegemonic resistance exist and can be exploited to prevent violence and surveillance. This chapter takes up the question of vision, visuality, and agency in the US Air Force drone program specifically, challenging the assertion that vision itself is “becoming weapon” through developments in military technology (Bousquet, 2017). It does so by highlighting the many technological, human, and human-technological flaws in military technological apparatuses such as the drone program. The chapter contends that if we are to do justice to Bousquet, Grove, and Shah’s (2017) important invocation for IR scholars to become “weapons experts,” this requires doing the sometimes tedious work of reading technical reports about the weapons used in war and engaging interdisciplinarily with existing and emerging theoretical scholarship on technologies and technological apparatuses. In the case of so-called “visual” weaponry, such as US Air Force drones, it is thus necessary to research the technical capacities of the visual sensors employed, as well as the experiences of the “viewing subjects” – the people looking at and interpreting the visual field. This chapter follows
these lines of inquiry, demonstrating that vision cannot be fully weaponized, even within institutions as powerful as the US military. The drone program’s visual technologies are not omnipotent or omniscient, but highly fallible because of the limitations of humans, of technology, and of the human-technology relationship. Moreover, US Air Force drone personnel (the viewing subjects) possess the agency to interpret visual information in myriad ways – both hegemonic (within a “weaponized” visuality) and counter-hegemonic. It is important to locate these moments or sites containing potential for counter-hegemonic resistance. While critical IR scholars overwhelmingly condemn drone warfare (and violence in the so-called war on terror more broadly), it is less clear what we think should be done about it. Turning our antipathy for drone violence into action first requires identifying cracks and fissures that can be exploited in this seemingly all-powerful technological apparatus. The chapter will first demonstrate the impossibility of governing vision. Vision is certainly vulnerable to dominant and hegemonic cultural and socio-political influence, but it cannot be fully controlled. The second section further develops this idea, arguing that drone vision is highly fallible (because of its human users, the technology, and the human-technology interaction therein) and is open to interpretation. I argue this by drawing upon publicly available testimonies of drone personnel and a recent technical report produced by CorpWatch – “Drone Inc.: Marketing the Illusion of Precision Killing” (Chatterjee and Stork, 2017) – on the hardware and software that constitute the technological apparatus of the drone program. The third and final section of this chapter warns against an overly ocularcentric account of weaponry. It considers the role of invisibility and imagination in the drone program, demonstrating the limitations of giving too much attention to what can be seen at the expense of what cannot: that which is invisible or imagined. It argues that even the US military’s attempts to overcome present flaws in the drone program through technological innovation will be unsuccessful, as radical invisibilities will persist.
The “call to arms”: critical IR scholars as weapons experts

It is useful, before getting to the drone program’s visual technologies, to first explain the academic and political importance of technological expertise in IR.3 In a recent Critical Studies on Security special issue, Antoine Bousquet, Jairus Grove, and Nisha Shah (2017) argue that critical IR scholars researching and writing on war need to develop expertise about the weapons so crucial to war’s operation. They contend that the material constitution of weapons, the conditions in which they emerge, their principles of operation and their evolutionary trajectories are not “merely […] concerns best left to technical specialists or weapons enthusiasts,” but ought to be subjects of inquiry for all critical IR scholars (Bousquet et al., 2017: 4). Expanding IR’s research agenda to include these subjects of inquiry can “unlock a deeper understanding” of the
role and socio-political significance of weapons in war (Bousquet et al., 2017: 4). It is their hope that increased attention to the production, circulation and technical machinations of weapons will undercut techno-fetishization, “break[ing] the deadlock between techno-fetishism and normative-polemical rejections of weapons” that “together function as obstacles to a greater understanding of the significance of weapons in the making of worlds of violence and war” (Bousquet et al., 2017: 2). This is a thought-provoking and welcome invocation to critical IR scholars. There is an unfortunate tendency in critical IR scholarship on war technologies to be, while theoretically very rich, thin on the technical details of those technologies. This can lead to the circulation of inaccurate and misleading accounts of the capacities of war technologies in academic scholarship. Furthermore, becoming “weapons experts” is not just about promoting scholarly accuracy; it also encourages contestation of dominant narratives of technological omnipotence, omniscience, and innovation. In other words, critical IR scholars’ technical knowledge of the failures and limitations of militaries’ weaponry could undercut hegemonic socio-technical imaginaries and discourage techno-fetishization. This opens up new avenues for political resistance; our ability to speak the language of weapons enthusiasts (while also not losing our philosophical/theoretical insights) makes our arguments more convincing to academic and non-academic readers/listeners alike – including those within government and military institutions. The invocation – or “call to arms,” as they put it – to improve critical IR scholars’ technical expertise of weaponry is thus convincing and of political importance. However, we need to be cautious in these attempts not to get caught up in narratives of technological omnipotence, omniscience, and innovation ourselves. It is perhaps harder to walk the line between techno-fetishization and an in-depth engagement with technology than Bousquet, Grove and Shah have acknowledged. Carol Cohn (1987: 703–7), conducting ethnographic research on US nuclear strategic analysts, similarly found how difficult it is not to have “fun” when learning to speak the “sexy” and “masterful” language of nuclear strategy. Cohn (1987: 704) writes that one does not need to be a perverse person to have fun speaking what she calls “techno-strategic” language, but that the effect is perverse: weapons that cause immense human suffering are made easy to talk about through euphemism and abstraction. It can therefore work in the military’s favor when academics learn to speak the esoteric language of military technology; it makes violence and human misery sound palatable even amongst its critics. Most of the technical resources available on weaponry are produced by branches of the armed forces or by arms manufacturers. In their capacity as “weapons experts,” critical IR scholars will work closely with these materials. However, it is imperative that the de-humanizing language employed is not replicated by academics, and that techno-fetishistic embellishments about the technology’s current (and future) capabilities are identified and critiqued rather than reproduced.
Bousquet’s solo contribution to the Critical Studies on Security special issue, somewhat paradoxically given the issue’s stated intention, comes very close to techno-fetishizing visual weaponry, and in doing so, I argue, closes (rather than opens up) avenues for politically resistant thoughts and actions. Bousquet (2017: 63) contends that, in modern warfare, “it is less the weapon that has come to serve as a prosthetic extension of the eye than perception itself which has been caught up in an unrelenting process of becoming weapon.”4 Drawing upon Paul Virilio (1989), Bousquet (2017: 63) argues that human vision itself is becoming militarized; the military has mobilized vision “within the deadly perceptual logistics of the war machine” through four targeting processes: aiming, ranging, tracking, and guiding. These processes train the eye to be violent: to see in ways that aid the infliction of lethal force.

In “ranging” (the act of judging distance and artillery trajectories), for instance, Bousquet (2017: 65) argues “we see a harnessing of the phenomenological unit of visual perception into a calculative assemblage of geospatialisation.” The subject’s eye judges the distance to the target and the trajectory of the projectile, and pre-emptively corrects the firing of the weapon so as to increase the likelihood of hitting the target. In the act of “tracking” (following a mobile target), soldiers are trained in the “practice of ‘leading’ the target by firing ahead of its present position” (Bousquet, 2017: 67). Riflemen – and, before them, archers – become so habituated to this process, and need to perform it at such speed, that it becomes intuitive; they cannot help but see violently. Operators of targeted weapons have their vision trained by the weapons they use, until their sight becomes undetachable from targeting. Citing Zabet Patterson (2009), who argues that targeting technologies are “specifically dedicated to augmenting, informing and enflaming the soldier’s process of seeing,” Bousquet (2017: 69) contends that “the eye itself [is] disciplined into a visual regime of calculability and control.”
Vision, visuality, and agency

The idea that vision is becoming “weaponized,” or that military visual technologies are, or soon will be, overwhelmingly powerful, is worthy of serious engagement and debate, and it is certainly not an argument advanced by Bousquet alone (Chamayou, 2015: 38–9; Wilcox, 2015: 146–7). Several scholars have written on the power of the “scopic regime” of targeted killing and of high-technology warfare more broadly (Coward, 2014; Grayson, 2012; Grayson and Mawdsley, 2018; Gregory, 2013; Maurer, 2017). Borrowing from Allen Feldman (1997) – who himself borrowed the term from film scholar Christian Metz (1982, 1999 [1974]) – these scholars have used the concept of “scopic regimes” to refer to the socio-technical assemblage of seeing in modern warfare, and have approached sight not as a solely biological or cognitive process, but as one also significantly shaped by socio-cultural influences. Feldman (1997: 33) states that the eye is “a sensory organ that can be socially appropriated to
channel and materialise normative power in everyday life” – one’s experience of sight is shaped by the scopic regime. Feldman’s distinction between sight and scopic regimes is very similar to the one Hal Foster (1988: ix) makes between vision and visuality (and the one taken up in this chapter): vision refers to the “mechanisms” and “datum” of sight (“how we see”), while visuality is vision’s “historical techniques” (“how we are able, allowed, or made to see”).

Drone warfare’s scopic regime, according to scholars like Grayson (2012: 123), is “one shaped by epistemological and aesthetic realism.” It “operates under the assumption that vision – through technological enhancement – can become an infallible sense which captures the physical world that exists independently of any objective perceptions that we may have of it” (Grayson, 2012: 123). The drone personnel who watch drone surveillance imagery and use it to identify, track, and kill targets are thus allegedly convinced of the objectivity of this imagery. In these accounts, the technology of the drone assemblage is attributed such agency that “the human body” is “re-educated by the machine to act according to a new paradigm of visuality” (Patterson, 2009: 42). Drone personnel are conceived of as molded by their weaponry into a visuality of omnipotence and omniscience (Bousquet, 2017: 69) – the power asymmetry of which can even “produce a form of pleasure that can be addictive for the one with the privilege of viewing” (Grayson, 2012: 124).

Grayson (2012) does implicitly acknowledge that more than one scopic regime exists, noting the “specificity” of targeted killing’s scopic regime. However, until recently (Grayson, 2017; Grayson and Mawdsley, 2018), he did not consider the possibility that multiple scopic regimes co-exist and compete within drone warfare. Even then, this recent admission – that individuals’ viewing experiences within the drone program are affected by external influences (outside the dominant scopic regime) – is limited to a consideration of hegemonic discourses that reinforce epistemological and aesthetic realism: “official statements by state agents [Michael Hayden], media reports and popular culture artefacts like Eye in the Sky” (Grayson and Mawdsley, 2018: 12). These “accompanying discourses,” he, together with Mawdsley, argues, focus on drones’ “visual capabilities, ability to facilitate precise kinetic activities, and unambiguous field of view” (Grayson and Mawdsley, 2018: 12). As yet, critical IR scholarship has not engaged with drone personnel’s exposure to counter-hegemonic discourses, with how such exposure could produce different scopic regimes within the drone apparatus, or with the potential (if any) of this exposure for political resistance.

Long before IR came to this question, several disciplines debated the complex relationship between vision, visuality, and agency. Philosophers, psychologists, anthropologists, artists, and art historians have rarely posited a straightforward relationship between the biological or cognitive capacities of sight (vision) and the perception of an external reality (visuality). From at least the early 19th century, scholars and artists have interrogated the biological limitations of human sight and the subjectivity of the viewer in perceiving the so-called real world. These early 19th century developments
challenged the hegemonic position held by Cartesian perspectivalism. Cartesian perspectivalism assumed that the world could be understood objectively through sight (Jay, 1998: 4). It was heavily influenced by the findings of the camera obscura: a technological apparatus that purported to geometrically model the biological function of the eye and provide evidence of the objectivity and universality of sight regardless of the viewing subject. Cartesian perspectivalism was “in league with a scientific world view” that saw external reality “as situated in a mathematically regular spatio-temporal order” (Jay, 1998: 9). Cartesian perspectivalism is precisely the visuality that IR academics today allude to when they remark upon drone surveillance’s omniscience and verisimilitude (Grayson and Mawdsley, 2018).

As early as the 1820s and 1830s, however, Cartesian perspectivalism came under scrutiny from philosophers, physiologists, psychologists, and artists. There was “a profound shift,” Crary (1998: 31) writes, “in the way an observer was described, figured, and posited in science, philosophy and in new techniques and practices of vision.” Many early 19th century physicians and physiologists were finding that human sight was, for myriad reasons, fallible and susceptible to a multiplicity of internal and external influences. Blind spots, fatigue, reaction times, after-images, and color blindness, among other cognitive eccentricities and so-called weaknesses, were studied with increasing regularity. Cultural movements similarly challenged the myth of Cartesian perspectivalism. Long before postmodernism critiqued the objectivity and truthfulness of modernist depiction, the Baroque movement provided viewers with a “dazzling, disorienting, ecstatic” visual experience that encouraged the viewer to reflexively confront the subjectivity of their perception (Jay, 1998: 17). Baroque art had a “strongly tactile or haptic quality,” purposefully undercutting the “absolute ocularcentrism” of Cartesian perspectivalism and engaging the viewer’s bodily sensations (Jay, 1998: 17).

It is clear from this scholarship that the question of who or what has agency, or the most agency, in the act of sight has long been contested. If one wanted to create a technology able to “re-educate” human vision, to mold the viewing subject into a particular visuality, it would be necessary to anticipate and stave off myriad (often unknown and unpredictable) cognitive and socio-cultural variables that all influence an individual’s sight. Even so, as W.J.T. Mitchell (2002: 175) notes, there remains “an unfortunate tendency to slide back into reductive treatments of visual images as all-powerful forces.” These reductive treatments do not offer a convincing rebuttal of existing scholarship that has consistently shown the viewing subject’s interpretation of visual images to be notoriously subjective for myriad biological, psychological, and socio-cultural reasons. The individual viewing subject, while certainly open to external socio-cultural pressure to see within the dominant or hegemonic visuality, is not fated to do so. There are myriad factors producing an individual’s viewing experience, including conflicts between those very socio-cultural influences. It is not that critical IR scholars have entirely missed the
complexity of the relationship between vision, visuality, and agency. Frank Möller (2007: 180, 186) states that “images are intractable and ambiguous”; they are “by nature insusceptible to simple interpretations.” Moreover, Grayson (2017: 328) does acknowledge this to an extent, noting that the problem of the viewing subject “centres on what kind of viewing capacities, understanding and limitations are produced in and through the viewer.” He recognizes that the “interpretive ambiguity” of the visual field is conditioned by both the viewing subject’s unique interpretation and attempts to overcome this ambiguity through a hegemonic “harnessing […] [of] cultural logics to lean towards some preferred interpretive possibility” (Grayson, 2017: 328). However, as mentioned above, he gives greater consideration to the hegemonic scopic regime and mentions only those external socio-cultural influences (like Eye in the Sky) that reproduce that hegemony. It is important that critical IR engages with the complex dynamic between vision, visuality, and agency, examining both hegemonic and counter-hegemonic potential.

Given that centuries of scholarship recognize the complex relationship between vision and visuality, IR scholars who wish to argue that recent and emerging military visual technologies instill a particular scopic regime within (seemingly non-agentic) viewers must show what exactly has changed in the viewing dynamic: what is it about these newer technologies that makes them more agentic than prior technologies? Answering this question requires an in-depth engagement with debates in Science and Technology Studies (STS) – a more rigorous engagement than this chapter embarks on – over whether technology can possess agency at all. Technological entities do not have intentions, nor do they act according to their own free will. We can only speak of technological agency in relation to human agency; it is only within human-technology interaction that technology can be said to possess agency at all – in enabling or constraining human actions (Sayes, 2014: 143). Humans, however, are agentic, animated, and can express themselves outside of the human-technology interaction (Vandenberghe, 2002: 53–4). Unlike their technological counterparts, humans bring a “whole gamut of individually inflected and socially conditioned skills and attitudes” to human-technology interactions (Drucker, 2011: 6), and, crucially, they are able to reflect upon themselves and their Being-in-the-world (Vandenberghe, 2002). Humans have the agency to interpret their interactions with technology, and this interpretation and analysis is shaped by many factors external to the immediate human-technology interaction.

In the case of human-technology interaction in the US drone program, this means that drone personnel are unlikely to have their sight so easily re-educated by the visual technologies they use. If they do find themselves seeing within a weaponized visuality, they are capable of identifying and reflecting on it. The next section draws on technical reports and drone personnel’s testimonies to highlight the myriad flaws of drones’ visual sensors. Drone personnel’s awareness of these flaws significantly affects their experiences of using their technologies. Moreover, the ambiguous surveillance imagery produced can be
read in various ways according to the human user, allowing for both hegemonic and counter-hegemonic interpretations.
Drone vision and visualities: fallibility and interpretation

Fallibility

Drone vision overcomes some of the problems with human sight, but it is by no means the all-seeing apparatus commonly depicted by government and military officials, and in the aforementioned critical IR scholarship. The drone apparatus collects and relays numerous forms of visual information to human drone crews. Its “visual” hardware – the “eyes” of the drone – comprises “a suite of cameras that can transmit video and heat signatures” and “radar antennas that can map the precise dimensions of large objects and ground terrain” (Chatterjee and Stork, 2017: 8). More specifically, these visual sensors include two video cameras – a color camera for the pilot to steer the plane and a camera with a zoom lens directed downwards to track suspects below – an infrared camera, and a laser system, all housed within a sensor ball under the drone’s nose. Each of these technologies has its problems.

The video camera that is used to track targets has been touted as having such a high resolution that it can read vehicle license plates from two miles away (Chatterjee and Stork, 2017: 10). This is, however, a serious embellishment of the camera’s capabilities under most circumstances. Pratap Chatterjee and Christian Stork (2017: 11) make this point powerfully by using the Video National Imagery Interpretability Rating Scale (V-NIIRS) to rank the drone’s video camera quality. The V-NIIRS is the United States’ nationwide standard used to judge the resolution (i.e., clarity) of visual imagery. Chatterjee and Stork (2017: 11) explain what these V-NIIRS ratings mean in the context of drone surveillance as follows:

An image with a V-NIIRS rating of 6 has the potential to find an individual who is isolated, but not one in a crowd. A V-NIIRS-7 allows an observer to track an individual wandering through a crowded market; V-NIIRS-7.5 can capture simple hand movements like picking up a mobile phone. V-NIIRS-8 is required to positively identify a man from a woman or child. An image rated at V-NIIRS-8.5 is needed to track a person firing an assault rifle, but still would not be able to identify someone using a small pistol.
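Read as a specification, Chatterjee and Stork’s summary amounts to an ordered lookup from rating to achievable discrimination task. The minimal Python sketch below is purely illustrative: the thresholds are transcribed from the passage just quoted, and the function and names are hypothetical rather than any official tooling.

```python
# Illustrative only: thresholds transcribed from Chatterjee and Stork's
# (2017: 11) summary above; the function and names are hypothetical.
V_NIIRS_TASKS = [
    (6.0, "find an isolated individual (but not one in a crowd)"),
    (7.0, "track an individual through a crowded market"),
    (7.5, "capture simple hand movements, e.g. picking up a mobile phone"),
    (8.0, "positively identify a man from a woman or child"),
    (8.5, "track a person firing an assault rifle (a small pistol stays unresolved)"),
]

def supported_tasks(rating: float) -> list[str]:
    """Return the discrimination tasks a feed at this V-NIIRS rating supports."""
    return [task for threshold, task in V_NIIRS_TASKS if rating >= threshold]

print(supported_tasks(5.0))  # [] - a V-NIIRS-5 feed supports none of these
print(supported_tasks(7.5))  # the first three tasks only
```

The point of the encoding is its emptiness at the bottom of the scale: a rating of 5 returns no tasks at all, which matters for the cameras discussed next.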
The RS-170 Versatron cameras in the sensor balls of most Predators until very recently had, they explain, only a V-NIIRS rating of 5: enough to “watch over troops in a vehicle, but definitely not identify them, even by their uniforms” (Chatterjee and Stork, 2017: 11). These cameras transmit pre-digital TV-standard quality (512x480 pixels), which, they write, “is good enough when filming close up, but from a height of 10,000 feet, even with image intensification technology, the average person is reduced to a splotch” (Chatterjee and Stork, 2017: 11).

The Versatron camera does have a powerful 16–160mm zoom lens, so it is true that the sensor operator can zoom in to read the details of a license plate. However, doing so radically restricts the operator’s field of view, creating what is referred to as a “soda straw” effect wherein the operator loses their visual situational awareness (Gregory, 2011: 193).
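The soda straw effect can be put in rough numbers with a pinhole-camera approximation: the width of ground visible at nadir scales with altitude and inversely with focal length. The sketch below is illustrative only; the sensor width is an assumed figure, while the 16–160mm zoom range and the roughly 10,000-feet altitude come from the text above.

```python
# Pinhole-camera approximation of the 'soda straw': ground footprint at
# nadir = altitude * sensor_width / focal_length. Sensor width is an
# assumption; the zoom range and altitude are taken from the text.

ALTITUDE_M = 3048.0       # roughly 10,000 feet
SENSOR_WIDTH_MM = 6.4     # assumption: a typical small video sensor

def ground_footprint_m(focal_length_mm: float) -> float:
    """Approximate width of ground visible at nadir, in metres."""
    return ALTITUDE_M * SENSOR_WIDTH_MM / focal_length_mm

for f_mm in (16, 160):
    print(f"{f_mm:>3} mm lens -> ~{ground_footprint_m(f_mm):.0f} m of ground")
# 16 mm: ~1219 m wide; 160 mm: ~122 m wide. Zooming in 10x shrinks the
# visible area a hundredfold - situational awareness traded for detail.
```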
More importantly, the image resolution (whether zoomed in or not) varies depending on the available bandwidth, i.e., the amount of data that can be transmitted in real time to satellites and relayed to Air Force personnel based in the US. Bandwidth availability depends on the capacity of the satellites and how much congestion there is in the network due to other US Air Force drones also transmitting huge amounts of data, in addition to “obstructions” to the transmissions, such as “rain, dust, lightning and mountains” (Chatterjee and Stork, 2017: 26).
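A back-of-envelope calculation shows why resolution is hostage to bandwidth. In the sketch below, only the 512x480 frame size comes from the text; the colour depth, frame rate, and per-feed link capacity are assumptions for illustration.

```python
# Back-of-envelope check on why resolution depends on bandwidth.
# Only the 512x480 frame size is from the text (Chatterjee and Stork,
# 2017: 11); colour depth, frame rate, and link capacity are assumptions.

WIDTH, HEIGHT = 512, 480   # pre-digital TV-standard frame
BITS_PER_PIXEL = 12        # assumed colour depth
FPS = 30                   # assumed frame rate
LINK_MBPS = 3.0            # assumed per-feed satellite allocation

raw_mbps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e6
print(f"raw: {raw_mbps:.0f} Mbit/s -> ~{raw_mbps / LINK_MBPS:.0f}:1 "
      "compression needed to fit the link")
# ~88 Mbit/s raw needs roughly 30:1 compression; any congestion or
# weather-related loss of bandwidth forces harsher compression and a
# blurrier feed, whatever the lens can nominally resolve.
```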
Newer Reaper drones carry different sensor balls (either the MX-20 or MTS-B) and their imagery approaches “the quality to identify men and women,” but not to positively identify a weapon (Chatterjee and Stork, 2017: 12). Once again, however, transmission of video footage at this definition is still unlikely given the myriad pressures on bandwidth. Despite these improvements, then, the image quality remains less than omniscient for reasons of resolution and transmission. It certainly cannot be touted as clear enough to positively identify a known target based on identifiable features or to confidently claim that the target is carrying a weapon (as opposed to a similarly shaped object).

The use of thermal imaging cameras (infrared sensors) in tandem with video cameras supposedly compensates for these shortcomings. In other words, the thermal imaging camera’s strengths are meant to stand in for the video camera’s weaknesses, and vice versa. For example, when the video camera’s line of sight is obstructed by trees or roofs, the thermal imaging camera should, theoretically, pick up on “heat sources such as warm bodies and weapons” underneath those obstructions, thereby seeing through opaque objects and smoke, haze and light fog (Chatterjee and Stork, 2017: 16). Similarly, thermal imaging cameras are often used at night time when the video camera’s vision is restricted to well-lit areas.

This sounds convincing in theory, but it does not play out in practice, as is evident in the deaths of Warren Weinstein and Giovanni Lo Porto in a US drone attack in 2015. Weinstein and Lo Porto, American and Italian aid workers respectively, were held hostage in the basement of an Al-Qaeda compound in Pakistan. The CIA had collected over 400 hours of video over several weeks prior to launching the strike, finding no evidence that civilians were in the compound. The footage showed that “only four individuals were present, all of whom they believed to be militants” (Chatterjee and Stork, 2017: 15). The thermal imaging cameras had not picked up on the body heat of the two aid workers held captive in the basement, as they are not powerful enough to penetrate the density of earth or concrete. The CIA did not know that two additional people were in the building until their post-strike damage assessment surveillance revealed two unaccounted-for bodies being dragged out from the rubble (Miller and Jaffe, 2016). This is just one well-publicized case, as the civilian victims were US and Italian citizens. However, these technological flaws have caused the deaths of many Afghan, Iraqi, and Pakistani civilians whose names and faces go unpublished, and lives ungrieved, in the Western media.5

Thermal imaging cameras are also “easily thrown off by hot days” and “the profusion of heat sources in urban areas” (Chatterjee and Stork, 2017: 16). Heat signatures can be misattributed or “lost” entirely. Operators use visual shortcuts in an attempt to quickly discriminate between combatants and civilians, but these are by no means trustworthy. For instance, guided by the theory of black body radiation, the US Air Force assumes that women absorb more body heat and produce darker heat signatures, particularly in Muslim-majority countries where fully covering one’s body is common for women. However, a man can easily insulate just as much body heat if he is wrapped in a blanket, and a person who has recently exercised will have a heat signature that mimics the effects of black body radiation. Inanimate heat sources can cause just as much confusion. A person lighting a cigarette could look as though they are firing a gun. Heat signatures can also disappear: if a surveilled person reduces their body heat enough (by diving into a cold river, for instance), thermal imaging cameras no longer register their warmth.6
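The black-body shortcut described above can be made concrete with the Stefan–Boltzmann law, on which the heuristic implicitly relies. In the sketch below, the temperatures and emissivity are illustrative assumptions, not sensor specifications or Air Force figures.

```python
# A Stefan-Boltzmann sketch of why heat signatures mislead. Radiated
# power per unit area = emissivity * sigma * T^4; the temperatures and
# emissivity here are illustrative assumptions, not sensor data.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiance_w_m2(temp_k: float, emissivity: float = 0.98) -> float:
    """Total power radiated per square metre of surface."""
    return emissivity * SIGMA * temp_k ** 4

exposed_skin = radiance_w_m2(306.0)  # ~33 C exposed skin
blanketed = radiance_w_m2(295.0)     # assumed outer surface of a blanket
print(f"contrast ratio: {exposed_skin / blanketed:.2f}")
# ~1.16: a blanketed man radiates much like the 'darker' signature the
# heuristic attributes to covered women, so the shortcut misfires.
```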
The drone apparatus’s visual sensors are thus highly fallible because of video camera resolution, data transmission bandwidth, and the ambiguity (and, sometimes, invisibility) of heat signatures. Drone vision is also fallible because human sight has many limitations, and the human-technology interaction is often far from seamless. As feminist geographer Alison Williams (2011: 386) puts it, it is still the operators’ eyes “that are ultimately responsible for providing the final visual recognition of what the Reaper is ‘seeing.’” Increasingly, algorithms carry out the initial work of identifying what is understood to be suspicious behavior, but these algorithms (as of the time of writing) still alert human analysts, who carry out further surveillance of the suspect before targeting decisions are made. Williams argues that the purported “continual watching” of drone surveillance is undone by the “human inability to remain unblinking” (2011: 386). Human blinking disrupts the continual stream of visual imagery offered by the technology. The human-technology relationship is not seamless, but prone to friction. Blinking and saccades (rapid movements of the eyes between moments of fixation) have been shown to reduce one’s recognition of changes in the visual field, even major changes (O’Regan et al., 2000; Volkmann et al., 1982).

However, perhaps a bigger impairment, not least because it increases the frequency of blinking, is the impact of fatigue on perception and cognition. It is well-documented (Chappelle et al., 2011, 2012; Otto and Webber, 2013) that the US Air Force suffers a shortage of drone personnel, that existing staff consequently work long hours (the majority work 50+ hours a week; six days on, two days off) and are subjected to shift changes every 30 to 90 days (between day, “mid,” and night shifts). Unsurprisingly, this means that under-slept drone personnel are responsible for viewing hours of drone surveillance imagery and making visual judgments that have potentially fatal consequences for people in targeted areas. Conceivably, the consequences of fatigue on visual recognition are exacerbated for drone personnel, who have to “visually discriminate and synthesize various images and complex data on several electronic screens” (Ouma et al., 2011: 6). Given blinking, saccades, fatigued vision, and visual multi-tasking, hours of live video imagery from drone cameras go either partially unwatched (because a viewer cannot take in the whole visual field) or entirely unwatched. Technological and human flaws together (co-)produce a drone vision that is highly fallible and selective – far from omniscient.
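Even the simplest of these human limits is quantifiable. As a rough, assumption-laden sketch (the blink rate and duration below are typical values assumed for illustration, not figures from the studies cited above):

```python
# Rough arithmetic on blinking alone, before fatigue is counted. Blink
# rate and duration are typical values assumed for illustration, not
# figures from the studies cited above.

BLINKS_PER_MIN = 17     # assumed spontaneous blink rate
BLINK_DURATION_S = 0.3  # assumed eyelid-closure time per blink

closed_min_per_hour = BLINKS_PER_MIN * 60 * BLINK_DURATION_S / 60
print(f"eyes closed for ~{closed_min_per_hour:.1f} min per watched hour")
# ~5 minutes of every surveilled hour goes unseen through blinking alone -
# before saccades, screen-switching, and 50-hour weeks are factored in.
```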
Interpretation

The US Air Force’s hegemonic or dominant visuality reinforces American military power in the war on terror. However, like all visualities, it is open to interpretation and thus contestation. Before getting to these counter-hegemonic visualities, it is necessary to first outline the operation of the dominant and hegemonic visuality.

The US drone program is first and foremost a surveillance and assassination apparatus that the US is highly reliant upon – and, increasingly, more of its allies are too – in the so-called war on terror. Fueled by racism and Islamophobia, counter-insurgency’s dominant visuality reads behavior as suspicious by virtue of being “different” to Anglo-Christian norms, therein seeing potential threats where they do not exist (e.g., Amoore, 2007; Anderson, 2011; Kundnani, 2014; Mirzoeff, 2009; Satia, 2014). US drone program veterans have testified to an institutional culture of Islamophobic and racist language, stating that drone operators will often “refer to children as ‘fun-size terrorists’ and liken killing them to ‘cutting the grass before it grows too long’” (Hussain, 2015: n.p.). Drone Warrior, a recent memoir by a former CIA drone pilot, includes such racist and Orientalist remarks as: “In the years I’ve spent hunting and watching in the cesspools of the Middle East, I’ve noticed that people do funny things” (Velicovich and Stewart, 2017: 27).7

The only publicly available transcript of a US drone attack – successfully obtained through a Freedom of Information Act request and published in the LA Times – also confirms this culture of racism and Islamophobia. The drone pilot disparagingly refers to men’s shalwar kameez (traditional clothing) as “man dresses.” Ambiguous information is repeatedly characterized as “shady” by various members of the crew (whether children are out at night time, whether the headlights of the cars they are tracking are on or off, etc.). When the civilians they are tracking stop their cars to get out to pray, this is noted as evidence that they are Taliban members about to perform a violent act:
01.48 (SENSOR): […] This is definitely it, this is their force. Praying? I mean seriously, that’s what they do.
01.48 (MC): They’re gonna do something nefarious.
(“Transcripts of US drone attack,” LA Times 2015)

It goes unacknowledged by the drone crew that Afghanistan is a Muslim-majority country where praying before sunrise (the Fajr prayer) is very widely practiced, normal behavior.

The dominant and hegemonic visuality of drone warfare also involves the weaponization of sight, exactly as Bousquet (2017) explains. Drone personnel are trained to see in ways that increase the likelihood of a fatal strike, thereby “correcting” (almost without conscious effort) the idiosyncrasies of drone vision. For instance, much like the archers and riflemen who lead targets by shooting just ahead of their present position, drone personnel are aware of the time lag in video imagery transmission (sometimes as much as several seconds, particularly if the signal passes through several relay stations), and, as such, similarly strike where they expect their human target will be.
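The “leading” described here reduces to elementary kinematics: aim where the target will be, not where the lagged feed shows it. A toy sketch follows; the vehicle speed is an assumption, while the several-second lag figure comes from the text.

```python
# Leading a moving target under transmission lag, as a toy kinematic
# sketch. The speed is an assumption; the several-second lag figure is
# from the text above.

def lead_point_m(position_m: float, speed_ms: float, lag_s: float) -> float:
    """Predicted position along a straight path once the lag has elapsed."""
    return position_m + speed_ms * lag_s

# A vehicle doing 20 m/s seen through a 4-second-old feed is already
# ~80 m beyond where it appears on screen.
print(lead_point_m(0.0, 20.0, 4.0))  # 80.0
```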
It is also worthwhile to consider how the aforementioned fallibilities in drone vision might actually increase drone personnel’s likelihood of seeing within this dominant, hegemonic visuality. Fatigue, for instance, may prompt personnel to visualize, uncritically, in the ways they have been socio-culturally trained in the US Air Force. Fatigue has been shown to “induce the need for cognitive closure”; people who are low on energy tend to “over-utilize early cues (e.g.: early appearing information) or highly accessible schemata (e.g.: stereotypes)” because these “afford immediate judgments and obviate the necessity for further energy expenditures” (Webster et al., 1996: 190). Simply put, approaching existing information with skepticism, and seeking out additional information before taking action, requires more cognitive energy than the fatigued person has to hand. The less cognitively taxing option is to act upon early information and to ignore feelings of doubt. In drone warfare, over-utilizing early cues or highly accessible schemata has fatal consequences for civilians on the ground, as drone personnel may downplay or disregard evidence that the people targeted are civilians – instead stereotyping targets and jumping to conclusions. Technological and human flaws in vision in these, and similar, instances can therefore work to reify (rather than undermine) the dominant and hegemonic visuality.

This is not, however, the only visuality of drone warfare, as visuality is also largely contingent upon the viewing subject. Drone personnel are strongly affected by cultural conditioning during US Air Force basic training and again during their induction into the drone program. These cultural norms are reproduced and further legitimized on a day-to-day basis by institutional protocol and the language and behaviors of their peers and superiors. This affects how they view and interpret drone surveillance imagery. However, drone personnel are less susceptible to this conditioning if their identities sit uncomfortably with these hegemonic norms, or if they have had exposure to competing norms from their lives outside the US Air Force.

Judith Butler (2007) argues a similar point in her discussion of how norms shape the frames of visual representation. In her work on the politics of grief, Butler (2007: 956) suggests that norms decide whose lives are considered worthy of our grief and whose are not; these norms “enter into the frames through which discourse and visual representation proceed.” She makes an important clarification, however, that norms and frames do not determine our response (Butler, 2007: 956). This, she argues, “would make our responses into behaviorist effects of a monstrously powerful visual culture” (Butler, 2007: 956). Instead, the ways in which norms enter into frames “are vigorously contested precisely in an effort to regulate affect, outrage and response” (Butler, 2007: 956). The question then becomes: who or what has the most agency in this contest? Who or what is it that ultimately regulates affect, outrage, and response? This vigorous contestation highlights the multiplicity of agents in producing scopic regimes. Likewise, in the drone program, the viewing subject’s affective and emotional responses, and their complex and often contradictory socio-cultural conditioning, alter their interpretation of drone surveillance imagery. Drone personnel who are exposed to non-militaristic, non-violent, anti-racist, anti-Islamophobic ideas, or whose personal identities in some way do not sit comfortably with US Air Force norms, are all the more likely to vigorously contest their experiences of drone vision.

Cultural studies scholarship on the “male gaze” and the “postcolonial” or “imperial gaze” demonstrates that viewers’ socio-cultural conditioning to see within a dominant and hegemonic visuality is, indeed, very powerful (Kaplan, 1997; Mulvey, 1975; Poole, 1997; Said, 2007 [1978]). Visual imagery is so often constructed, mediated, and viewed in social/communal settings, in ways that strongly encourage a masculinist, sexist, and/or Orientalist – and thus an objectifying and de-humanizing – interpretation by viewing subjects. This is no different in the US Air Force, where the top-down aerial view (Adey et al., 2011; Kaplan, 2006), the representation of people as pixelated images or heat signatures, and the use of racist, sexist and Islamophobic language in the viewing setting encourage such an interpretation. However, it is also important not to ignore valuable cultural studies scholarship on counter-visualities and resistant viewing practices. Stuart Hall’s (1980) theory of “negotiated” viewing, for instance, posits that viewing subjects (particularly those outside of dominant and hegemonic social groups) will often adopt a “negotiated” position with a mediated text, wherein they identify and challenge its hegemonic construction. Some viewers – particularly those whose identities or experiences fall outside institutional/societal norms – are likely to interpret imagery in ways that challenge dominant or hegemonic ideas. Resistant interpretations of surveillance imagery are all the more likely
given each Air Combat Patrol (a mission involving three to four drones) has over 185 people working on it (Gregory, 2011: 194–5). Indeed, military drones are better thought of as hyper-manned rather than unmanned. While not all of these people have access to surveillance feeds, many do – pilots, sensor operators, mission coordinators, commanders, full-motion video and signals intelligence analysts, military lawyers, technicians, and many others linked in via the Combined Air Operations Centre.

Moreover, a viewing subject’s awareness of the fallibility of drone vision can undermine the drone program’s dominant and hegemonic visuality. Drone personnel who are troubled by the quality of the resolution, the ambiguity of heat signatures, or the time lag on transmission, or who are conscious of their own fatigue, are more likely to engage in a doubtful viewing of drone surveillance imagery. Such a doubtful viewing is evident in the personal testimony of former sensor operator Heather Linebaugh (2013: n.p.):

The feed is so pixelated, what if it’s a shovel, and not a weapon? I felt this confusion constantly as did my fellow UAV analysts. We always wondered if we killed the right people, if we endangered the wrong people, if we destroyed an innocent civilian’s life all because of a bad image or angle.

A recent New York Times long-form piece (Press, 2018) recounts former drone analyst Christopher Aaron’s similar experiences. Aaron (a pseudonym) left the drone program due to the immense psychological and physiological strain he was under. Like Linebaugh, he did not trust the technology:

He recalled days when the feed was ‘too grainy or choppy’ to make out exactly who was struck. He remembered joking with his peers that ‘we sometimes didn’t know if we were looking at children or chickens.’ (Press, 2018: n.p.)

The omniscience that drone technology is reputed for is therefore not experienced by many of its human users. Once drone personnel realize these technological flaws (and it does not take long), contesting the dominant visuality becomes standard practice. Ambiguous imagery is open to myriad interpretations by analysts, many of whom feel the weight of responsibility for making lethal decisions and will therefore worry about the “what if?” What if it is a shovel and not a weapon (Linebaugh, 2013)? What if it is a child and not an animal (Press, 2018)? What if it is a “man trudging down a road with a walking stick” and not “an insurgent carrying a weapon” (Press, 2018)?
Invisibilities and imagination

Invisibilities

It is also necessary to consider technologies and human experiences beyond those typically classified as solely visual. Much drone warfare scholarship is ocular-centric, taking drone vision as its subject of inquiry. This is likely because of our societal privileging of sight over and above the other senses. Less attention is given to how the drone cockpit sounds, smells, or feels.8 Perhaps this is also due to the communicability of visual surveillance technologies. Drones’ visual sensors are easier to understand because most of us recognize them from similar visual technologies in our lives, whereas less is known about radar algorithms or International Mobile Subscriber Identity (IMSI) catchers, for instance. We need to be cautious, however, about reinforcing this ocular-centrism in scholarship, as it precludes consideration of other equally significant technological features of the drone apparatus, and of human experiences beyond the visual (including the auditory, olfactory, and haptic). Indeed, part of being a critical IR “weapons expert” entails studying and explaining technologies that are not yet popularly understood. Moreover, dedicating our attention to what can be seen distracts us from what cannot. This section will consider the operation and effects of “invisibility” (van Veeren, 2018) and imagination in the US drone program, positing that these concepts are central to any inquiry into vision, visuality, and agency within technological apparatuses.

Drones are constantly coming up against the invisible: that which cannot be seen or known. Elspeth van Veeren (2018: 199) argues that “invisibility and visibility are always intertwined; working together rather than against one another.” She contends that visual prosthetics, while technologically enhancing human vision in some regards, will inevitably create invisibilities in that very process. Photography, for example, “captures moments, allowing more viewers to see greater detail and slower, but loses out movement and context” (van Veeren, 2018: 198). Moreover, as Ryan Bishop (2011: 275) reminds us, there are two types of invisibility: the contingent and the radical. Contingent invisibilities are of the kind van Veeren explains: they are “potentially visible,” as they can be made visible through technological development (Bishop, 2011: 275). Radical invisibilities, on the other hand, are forever “unknown unknowns” (to borrow from Donald Rumsfeld) – “things we do not know we don’t know” and, crucially, will never know. Bishop (2011: 275) contends that these invisibilities can “never be rendered visible” regardless of technological interventions.

The drone program does not overcome the challenge of invisibility. Its recent technological developments create new invisibilities as they render previous ones visible. The soda straw effect of the drone’s zoom camera, mentioned briefly above, concurrently produces visibility and invisibility. If a drone crew zooms in to attempt positive identification of a target, for
instance, their situational awareness will be temporarily lost. They may be able to read the license plate of the target’s vehicle (if the camera resolution and transmission speed are good enough), but, in doing so, will be blinded to where the vehicle is headed. Similarly, the US Air Force’s shared (and limited) bandwidth for transmission means that if one Air Combat Patrol has a higher-definition and faster video feed, that bandwidth is effectively “stolen” from another Air Combat Patrol, whose video feed quality decreases. In other words, while one mission’s drone crews will be faced with fewer invisibilities, they will produce more invisibilities for their colleagues on another mission.

Information overload in the drone program is another instance in which visibilities and invisibilities are co-produced. Drone sensors collect vast amounts of surveillance imagery and signals intelligence about people in targeted countries. Each day, the US Air Force collects “1000 hours of video, 1000 hours of high-altitude spy photos and hundreds of hours of ‘signals intelligence’ – mostly cell phone calls” (Shanker and Richtel, 2011: n.p.). Ostensibly, the United States knows more about its adversaries than in any conflict prior to the war on terror. However, much of the information it collects sits unseen by analysts, so remains invisible. Sensor data “accumulate faster than human hands can collect it and faster than human minds can comprehend it” (Andrejevic and Burdon, 2015: 26).

The US Air Force touts future technological innovations (such as algorithms and big data analytics) as able to overcome its information overload problem by significantly increasing the capacity to analyze surveillance imagery and signals intelligence. It is important, however, that critical IR scholars do not accept, or worse, reproduce, the military’s technological fantasies at face value. While academics cannot predict the future either, it is necessary to keep powerful institutions like the US military in check by undertaking an informed evaluation of the future technologies they tout as able to achieve omniscience and omnipotence. The technologies the US Air Force is currently working on to overcome limitations to drone vision’s omniscience are also fallible, as they will simultaneously produce new contingent invisibilities and cannot overcome the problem of radical invisibilities.

The US military is working on technological “solutions” to some of the above-mentioned invisibilities: the Gorgon Stare to overcome the soda straw effect, and algorithms and big data analytics to comb through sensor data and identify “suspicious” and “operationally relevant” people and patterns. The Gorgon Stare is a wide-area airborne surveillance system (WAAS) installed on the US Air Force’s MQ-9 Reapers. The Gorgon Stare houses nine cameras (video/electro-optical and thermal) and is designed to maintain situational awareness, as zooming in with one camera does not hinder the other eight cameras from providing a wide-area view. The first increment of Gorgon Stare was plagued by low image quality, black triangular “blind spots” (at the edges where the different camera
feeds were mosaicked together) and bandwidth problems, and all data was stored on board and downloaded for post-flight (rather than live) analysis (Welsh, 2011). Gorgon Stare’s second increment has entailed the integration of the ARGUS-IS (Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System), which is said to provide 65 different high-resolution video streams, cover 50 square kilometers, and allow live streaming and analysis. However, this is equivalent to “600 gigabytes of data per second or 6000 terabytes of video data per day” (Lippincott, 2016: 21). There is only enough available bandwidth to live-transmit some (not all) of the visual information captured, so much of it needs to be downloaded and analyzed post-flight – therein adding to the mountain of data the US Air Force already has stored. The technological innovation designed to overcome one set of invisibilities, those produced by poor situational awareness, therefore compounds another: information hidden away in storage, as there are too few analysts to sift through it.
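The storage arithmetic is stark even on generous assumptions. The sketch below simply takes the sensor-output figure quoted above at face value; the downlink allocation is a hypothetical assumption for illustration.

```python
# Storage arithmetic using the ARGUS-IS output figure quoted above
# (Lippincott, 2016: 21); the downlink allocation is an assumption.

SENSOR_OUTPUT_GB_S = 600.0  # gigabytes per second, as quoted
LINK_GB_S = 0.05            # assumed usable downlink (~400 Mbit/s)

live_share = LINK_GB_S / SENSOR_OUTPUT_GB_S
backlog_tb_per_hour = (SENSOR_OUTPUT_GB_S - LINK_GB_S) * 3600 / 1000
print(f"live-transmittable share: {live_share:.4%}")
print(f"banked for post-flight analysis: ~{backlog_tb_per_hour:,.0f} TB/hour")
# ~0.0083% live; ~2,160 TB stored per hour - each new visibility arrives
# pre-packaged as an invisibility awaiting analysts who do not exist.
```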
Search algorithms, video analysis algorithms, and big data analytics are said to overcome the information overload problem. Yet algorithms still require people to program them, and those programmers think within the existing paradigm of what is deemed “operationally relevant” information. Algorithmic data analysis therefore keeps coming up against radical invisibilities, as it takes place within the rigid confines of programming by humans who “don’t know what we do not know.” Moreover, while big data analysis seems illuminating in that it “offer[s] new perspectives on shifting patterns within groups,” it simultaneously “pushes out outliers and uncertainties” (van Veeren, 2018: 198). By shifting the focus to commonly occurring (“trending”), macro-level phenomena, big data analytics pushes rare or micro-level, but potentially significant, phenomena into the ignored, and thus invisible, margins.

Alexander Galloway (2011: 91) argues that this is an unavoidable problem in the communication of big data; he calls this the “dilemma of unrepresentability lurking within information aesthetics.” When big data is visualized, it fails to communicate all of its findings because it must adopt “one form at the expense of all others.” In other words, the representation of data in one way (by focusing on certain trends, for example) precludes its representation in other ways, therein detracting from other trends. Galloway (2011: 91) argues that, most often, “the augmentation of functional (algorithmic) efficiency goes hand in hand with a decline in symbolic efficiency.” The data visualization, “even as it flaunts its own highly precise, virtuosic level of detail,” simultaneously “proves that there is another story happening behind and beyond the visible” (Galloway, 2011: 91). Algorithms and big data analytics are therein confronted with radical invisibilities: how do they know what they should look for? How should they interpret arbitrary or ambiguous information? Technological developments do not mean the US Air Force is any closer to achieving omniscience.
Imagination

Invisibilities within drone warfare require drone personnel to be imaginative, and this has both hegemonic and counter-hegemonic potential. When drone technology fails to provide relevant information, drone personnel need to fill in the gaps. Decisions are thus made not on what is perceived or otherwise sensorially experienced, but on what is expected to happen according to how the human operator has forecast it. The color camera on the drone’s nose that pilots use for steering, for instance, offers only a partial view of the sky surrounding the drone’s physical body (Williams, 2011: 386). This view is significantly more restrictive than human peripheral vision. As one UK RAF drone pilot puts it:

Aside from our instruments, though, our only way of ‘seeing’ is through a fixed camera lens in the nose of the aircraft that provides a view of about 30 percent of the sky, so you have almost no peripheral vision or awareness. You really have to think yourself into the [flight-deck] and … it requires a lot of imagination. (Loveless, 2010: 196)

The pilot’s imagination therefore has to compensate for an approximately 70 percent reduction in the visual field (Loveless, 2010: 196). This is significant because, just as I have argued above with regard to visual interpretation, an individual’s imagination is open to shaping by external influences (institutions, colleagues, superiors, social norms, etc.), but it cannot be fully controlled. Moreover, these influences can conflict with each other: they can be hegemonic and counter-hegemonic; conformist and resistant. The ambiguity of a heat signature or a low-resolution image means that drone personnel must use their imaginations to decide what action is required of them. Imagination can lead a drone sensor operator to project malevolent intent onto a group of civilian teenage boys, but, conversely, it can also lead to imagining the childhood experiences or future ambitions of these teenage boys, the impact their deaths would have on their families, and the likelihood that they are actually civilians. It is here that we must acknowledge the capacity for counter-hegemonic thoughts and actions in the drone program. This is similar to Appadurai’s (2000: 6) contention that imagination has a “split character”: “on the one hand it is in and through the imagination that modern citizens are disciplined and controlled […] But it is also the faculty through which collective patterns of dissent and new designs for collective life emerge.” The aforementioned “what if?” (“what if it’s a shovel and not a weapon?”) that Heather Linebaugh and her colleagues asked themselves is one example of how imagination can function as dissent in the drone program.

Imagination is evident in the testimonies provided by drone personnel, wherein they provide fabricated details about the personalities and the lives of the people they have targeted. Anthropologist Hugh Gusterson refers to
this as “narrativization”: the act of drone personnel “creating mental stories that help make sense of the people they watch” (2016: 65–6). This involves “mak[ing] interpretive leaps, fill[ing] informational gaps and provid[ing] framing moral judgment” (Gusterson, 2016: 66). Two examples I have considered previously (Edney-Browne, 2017: 27) include an active-duty drone pilot referred to as “Mike,” who recounts watching “an old man startled by a barking dog” (quoted in Hurwitz, 2013: n.p.). How does Mike know the old man was “startled,” or that the dog was “barking”? The drone’s video cameras are not, as stated above, of high enough resolution to determine facial expressions. Unless the startled reaction involved considerable body movement, Mike must have imagined it and projected it onto the old man. Likewise, the dog’s “bark” would have presumably been inaudible. If the old man was “startled,” then perhaps it was merely by the dog’s presence or appearance and not by its bark, yet Mike’s imagination attributes it to the bark.

In a Democracy Now! (2013) interview, former drone sensor operator Brandon Bryant describes his experience of surveilling a group of three men through a drone’s thermal imaging camera. He states that the “two individuals in front were having a heated discussion” and the “guy at the back was kind of watching the sky” (Democracy Now!, 2013: n.p.). It is not possible to confidently determine whether a person is “watching the sky” from a heat signature (perhaps this is what Bryant’s “kind of” acknowledges). He likely assumes this, however, because most of us have experienced being the uncomfortable third party to a private interaction; Bryant can easily imagine how this person would behave and therefore interprets the image in keeping with these expectations.

The role of invisibility and imagination raises important questions about technology, agency, and world politics. In the drone program, human users may be strongly directed towards viewing surveillance imagery in ways that reinforce American hegemony. However, users’ imaginations significantly shape how they interpret the information provided by their technologies. The inability of the US Air Force’s technological interventions to overcome the problem of invisibility means that human imagination will always fill in the gaps. One’s private imagination cannot be fully controlled by external agents – regardless of the power of these institutions and technologies – so there are always avenues for resistant and counter-hegemonic imaginings.

The difficulty, however, is in acting upon these imaginings. Imagination that leads to recognition and empathy (or at least an abhorrence of inflicting violence), or to conceiving an alternative and more peaceful foreign policy, has little political significance unless it is shared and acted upon beyond the individual level. However, it is likely that US Air Force drone personnel who partake in resistant imaginings are living an Orwellian nightmare wherein their freedom of expression is strongly policed. Active-duty drone personnel cannot share these kinds of imaginings with colleagues or superiors (without risking interpersonal or professional backlash) or with people outside the institution (because of classification restrictions). Drone program
veterans who have become whistleblowers are subject to the threat of legal charges under the Espionage Act. The difficulty, then, comes in mobilizing counter-hegemonic imaginations to destabilize the hegemonic socio-technical imaginary of the drone. As Sheila Jasanoff (2015: 327) puts it, “one person’s vision does not make an imaginary any more than one swallow calls a summer into being.” An imaginary develops over time and must be believed in, and shared, by a large group of people. “Heterodox imaginations,” she continues, “are by no means guaranteed to succeed, especially when the dominant imaginary itself is strongly rooted in culture and history” (Jasanoff, 2015: 330). Dismantling the dominant socio-technical imaginary of the drone is thus no easy task. The US Air Force is a globally powerful institution that occupies a privileged place in the West’s culture and history. It readily makes promises about the omnipotence and omniscience of its existing and future technologies that citizens, many of whom trust the US military and lack technological expertise, are unlikely to question.

Critical IR scholars therefore have the important task of disempowering dominant socio-technical imaginaries, by developing expertise about the fallibility and limitations of military technology and by making alternative imaginations seem possible to wider society. They can also provide safe (privacy-assured) opportunities for drone program veterans to speak to academic researchers, so that their experiences and imaginings reach a wider audience. On a similar note, the existing activism of drone veterans is yet to prompt academic research. Instead, it seems critical IR is steadfast in representing the drone program as an all-powerful, rather than fragile, technological apparatus.
Conclusion

The call for critical IR scholars to become weapons experts is a worthwhile endeavor only if doing so multiplies, rather than precludes, possibilities for counter-hegemonic resistance. We should feel deeply unsettled, as Carol Cohn (1987) did, if we find ourselves having “fun” while writing on technologies that maim, kill, and inflict psycho-social suffering. Moreover, we must not take military institutions’ (frequently embellished) descriptions of their technologies at face value, as doing so can embolden troubling socio-technical imaginaries. Learning to speak the language of weapons experts has counter-hegemonic potential, as it can increase critical IR’s ability to challenge military institutions by identifying and highlighting the many flaws in their technologies. However, unless done with caution, we also risk producing techno-fetishistic and inaccessible research that draws attention away from human suffering in war. These techno-fetishistic accounts can also discourage political resistance – paralyzing readers by portraying military technological apparatuses as omnipotent rather than fragile.

This chapter has demonstrated how these tensions play out in critical IR scholarship on drone warfare. This scholarship raises interesting questions
regarding vision, visuality, and agency in the drone program. Several academics (Coward, 2014; Grayson, 2012; Grayson and Mawdsley, 2018; Gregory, 2013; Maurer, 2017) have written on the power of targeted killing’s “scopic regime,” arguing that drones’ visual sensors prompt drone personnel to see within the visuality of epistemological and aesthetic realism. Seeing within this hegemonic visuality convinces viewers of the drone program’s alleged omniscience and accuracy. Moreover, Bousquet (2017) contends, viewers’ sight becomes “weaponized” by military visual technology; viewers are trained to see in ways that will increase the likelihood of a fatal strike through the processes of aiming, ranging, tracking, and guiding. These arguments may truthfully describe the experiences of many drone personnel. I argue, however, that drone personnel have the agency to see outside of this hegemonic visuality. It is important to acknowledge that counter-visualities operate within the drone program that can unsettle hegemonic power. Drone technology is plagued with flaws, of which drone personnel are often aware; they are not all convinced of drones’ alleged omniscience. Furthermore, the visual field is always open to interpretation, and drone personnel (particularly those whose identities or experiences sit uncomfortably with US Air Force norms) can interpret surveillance imagery in ways that challenge the dominant scopic regime, prompting them to ask the crucial “what if?” – “What if that person is a civilian?” Scholarship that draws attention to these viewing experiences helps to expose the drone program as one that is highly fallible and imprecise, leading to the injuries and deaths of thousands of civilians.

I also discourage ocular-centrism in drone warfare scholarship, as dedicating too much attention to what can be seen distracts us from what cannot: that which is invisible and imagined. I contend that no amount of technological innovation will overcome the problem of invisibility and that the drone program will continue to be fallible and imprecise. New contingent invisibilities will be produced as old ones are made visible, and radical invisibilities will persist. Critical IR scholars who ethically object to drone warfare ought to focus their attention on identifying these fragilities in the drone apparatus, as this will help to inform direct actions aimed at preventing future violence and surveillance.
Notes

1 Vision refers to the act of sight: the biological and cognitive capacities and limitations of perception. Visuality, on the other hand, refers to the cultural and socio-political influences that direct people to see within particular “visual/scopic regimes” or “visual economies.”
2 The term “visual technologies” is a misnomer, as any technology engages several senses. W.J.T. Mitchell (2002: 172) powerfully articulates this point, writing about so-called “visual” media. For the purposes of this chapter, however, “visual technologies” is used as shorthand to refer to technologies that have the primary purpose of capturing and documenting imagery, even if the practice of such capture and documentation is not experienced solely or even primarily as a “visual” phenomenon by the user of the technology.
3 Regardless of where one stands on whether tools (even the most rudimentary) are always already technology, it is clear that weaponry has become increasingly technologically mediated and automated over the 20th and 21st centuries, so the terms “weapons expertise” and “technological expertise” can be used interchangeably. Weapons expertise means technological expertise; scholars working on drone warfare should seek to be knowledgeable about drone technology, those working on cyber-warfare about the internet, and so on.
4 See also his most recent book, The Eye of War: Military Perception from the Telescope to the Drone (Bousquet, 2018), published after this chapter was written.
5 The US also carries out drone attacks in Syria, Yemen, Somalia, and Libya.
6 Information about the misattribution and loss of heat signatures was provided to the author in her interviews with former US Air Force drone personnel. These interviews are the subject of forthcoming publications.
7 Despite (or, more likely, because of) its Orientalism, the memoir received widespread favorable publicity upon its release in the United States, and is currently being turned into a Hollywood blockbuster by director Michael Bay (see Edney-Browne and Ling, 2017).
8 For rich scholarship on the auditory, haptic, olfactory, and otherwise embodied experiences of war, see J. Martin Daughtry’s (2015) Listening to War: Sound, Music, Trauma and Survival in Wartime Iraq and Kevin McSorley’s (2013) War and the Body: Militarisation, Practice and Experience.
References
Adey P, Whitehead M and Williams A J (2011) Introduction: Air-Target: Distance, Reach and the Politics of Verticality. Theory, Culture & Society 28(7–8): 173–187.
Amoore L (2007) Vigilant Visualities: The Watchful Politics of the War on Terror. Security Dialogue 38(2): 215–232.
Anderson B (2011) Facing the Future Enemy: US Counterinsurgency Doctrine and the Pre-Insurgent. Theory, Culture & Society 28(7–8): 216–240.
Andrejevic M and Burdon M (2015) Defining the Sensor Society. Television & New Media 16(1): 19–36.
Appadurai A (2000) Grassroots Globalization and the Research Imagination. Public Culture 12(1): 1–19.
Bishop R (2011) Project 'Transparent Earth' and the Autoscopy of Aerial Targeting: The Visual Geopolitics of the Underground. Theory, Culture & Society 28(7–8): 270–286.
Bleiker R (2017) In Search of Thinking Space: Reflections on the Aesthetic Turn in International Political Theory. Millennium 45(2): 258–264.
Bleiker R (ed.) (2018) Visual Global Politics. London/New York: Routledge.
Bousquet A (2017) Lethal Visions: The Eye as Function of the Weapon. Critical Studies on Security 5(1): 62–80.
Bousquet A (2018) The Eye of War: Military Perception from the Telescope to the Drone. Minneapolis: University of Minnesota Press.
Bousquet A, Grove J and Shah N (2017) Becoming Weapon: An Opening Call to Arms. Critical Studies on Security 5(1): 1–8.
Butler J (2007) Torture and the Ethics of Photography. Environment and Planning D: Society and Space 25(6): 951–966.
Chamayou G (2015) A Theory of the Drone. New York: The New Press.
Chappelle W, McDonald K, Thompson B and Swearengen J (2012) Prevalence of High Emotional Distress and Symptoms of Post-Traumatic Stress Disorder in US Air Force Active Duty Remotely Piloted Aircraft Operators. Air Force Research Laboratory Report. School of Aerospace Medicine Wright Patterson.
Chappelle W, Salinas A and McDonald K (2011) Psychological Health Screening of Remotely Piloted Aircraft (RPA) Operators and Supporting Units. USAF School of Medicine Department of Neuropsychiatry Report.
Chatterjee P and Stork C (2017) Drone Inc.: Marketing the Illusion of Precision Killing. Allen T (ed.). San Francisco: CorpWatch.
Cohn C (1987) Sex and Death in the Rational World of Defense Intellectuals. Signs: Journal of Women in Culture and Society 12(4): 687–718.
Coward M (2014) Networks, Nodes and De-Territorialised Battlespace: The Scopic Regime of Rapid Dominance. In Adey P, Whitehead M & Williams A J (eds.) From Above: War, Violence and Verticality. London: C. Hurst & Co Publishers, 95–117.
Crary J (1998) Modernising Vision. In Foster H (ed.) Vision and Visuality. New York: The New Press, 29–44.
Daughtry J M (2015) Listening to War: Sound, Music, Trauma, and Survival in Wartime Iraq. Oxford: Oxford University Press.
Democracy Now! (2013) A Drone Warrior's Torment: Ex-Air Force Pilot Brandon Bryant on His Trauma from Remote Killing. Democracy Now! 25 October. Available at www.democracynow.org/2013/10/25/a_drone_warriors_torment_ex_air (accessed 31 October 2018).
Drucker J (2011) Humanities Approaches to Interface Theory. Culture Machine 12: 1–20.
Edney-Browne A (2017) Embodiment and Affect in a Digital Age: Understanding Mental Illness among Military Drone Personnel. Krisis Journal for Contemporary Philosophy 1: 18–32.
Edney-Browne A and Ling L (2017) Don't Believe the Dangerous Myths of 'Drone Warrior'. Los Angeles Times, 16 July. Available at www.latimes.com/opinion/op-ed/la-oe-browne-ling-drones-memoir-brett-velicovich-20170716-story.html (accessed 31 October 2018).
Feldman A (1997) Violence and Vision: The Prosthetics and Aesthetics of Terror. Public Culture 10(1): 24–60.
Foster H (ed.) (1988) Vision and Visuality. New York: The New Press.
Galloway A R (2011) Are Some Things Unrepresentable? Theory, Culture & Society 28(7–8): 85–102.
Grayson K (2012) Six Theses on Targeted Killing. Politics 32(2): 120–128.
Grayson K (2017) The Problem of the Viewing Subject. Global Discourse: An Interdisciplinary Journal of Current Affairs and Applied Contemporary Thought 7(2–3): 327–329.
Grayson K and Mawdsley J (2018) Scopic Regimes and the Visual Turn in International Relations: Seeing World Politics through the Drone. European Journal of International Relations. Online First: 10.1177/1354066118781955.
Gregory D (2011) From a View to a Kill: Drones and Late Modern War. Theory, Culture & Society 28(7–8): 188–215.
Gregory D (2013) Dis/Ordering the Orient: Scopic Regimes and Modern War. In Barkawi T & Stanski K (eds.) Orientalism and War. London/New York: Routledge, 128–138.
Gusterson H (2016) Drone: Remote Control Warfare. Cambridge, MA: MIT Press.
Hall S (1980) Encoding/Decoding. In Hall S, Hobson D, Lowe A & Willis P (eds.) Culture, Media, Language: Working Papers in Cultural Studies. London: Free Association Books, 128–138.
Hurwitz E S (2013) Drone Pilots: "Overpaid, Underworked, and Bored". Mother Jones, 18 June. Available at www.motherjones.com/politics/2013/06/drone-pilotsreaper-photo-essay/ (accessed 31 October 2018).
Hussain M (2015) Former Drone Operators Say They Were "Horrified" by Cruelty of Assassination Program. The Intercept, 19 November. Available at https://theintercept.com/2015/11/19/former-drone-operators-say-they-were-horrified-by-cruelty-of-assassination-program/ (accessed 31 October 2018).
Jasanoff S (2015) Imagined and Invented Worlds. In Jasanoff S & Kim S-H (eds.) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago: University of Chicago Press, 321–342.
Jay M (1998) Scopic Regimes of Modernity. In Foster H (ed.) Vision and Visuality. New York: The New Press, 1–23.
Kaplan A E (1997) Looking for the Other: Feminism, Film and the Imperial Gaze. London: Routledge.
Kaplan C (2006) Mobility and War: The Cosmic View of US 'Air Power'. Environment and Planning A 38(2): 395–407.
Kundnani A (2014) The Muslims Are Coming! Islamophobia, Extremism, and the Domestic War on Terror. London: Verso Books.
Linebaugh H (2013) I Worked on the US Drone Program. The Public Should Know What Really Goes On. The Guardian, 29 December. Available at www.theguardian.com/commentisfree/2013/dec/29/drones-us-military (accessed 31 October 2018).
Lippincott D (2016) UAV Data Imaging Solutions Push Limits of Embedded Technologies. Journal of Military Electronics & Computing (April): 18–21.
Loveless A (2010) Blue Sky Warriors: The RAF in Afghanistan in Their Own Words. Yeovil: Haynes Publishing.
Maurer K (2017) Visual Power: The Scopic Regime of Military Drone Operations. Media, War & Conflict 10(2): 141–151.
McSorley K (2013) War and the Body: Militarisation, Practice and Experience. London/New York: Routledge.
Metz C (1982) The Imaginary Signifier: Psychoanalysis and the Cinema. Bloomington: Indiana University Press.
Metz C (1999 [1974]) Some Points in the Semiotics of the Cinema. In Braudy L & Cohen M (eds.) Film Theory and Criticism: Introductory Readings. Oxford: Oxford University Press, 65–70.
Miller G and Jaffe G (2016) U.S. Agrees to Pay Nearly $3 Million to Family of Italian Killed in CIA Strike. The Washington Post, 16 September. Available at www.washingtonpost.com/world/national-security/us-agrees-to-pay-nearly-3-million-to-family-of-italian-killed-in-cia-strike/2016/09/16/5c213af6-7c1a-11e6-bd86-b7bbd53d2b5d_story.html (accessed 31 October 2018).
Mirzoeff N (2009) War Is Culture: Global Counterinsurgency, Visuality, and the Petraeus Doctrine. PMLA 124(5): 1737–1746.
Mitchell W J T (2002) Showing Seeing: A Critique of Visual Culture. Journal of Visual Culture 1(2): 165–181.
Möller F (2007) Photographic Interventions in Post-9/11 Security Policy. Security Dialogue 38(2): 179–196.
Mulvey L (1975) Visual Pleasure and Narrative Cinema. Screen 16(3): 6–18.
O'Regan K J, Deubel H, Clark J J and Rensink R A (2000) Picture Changes during Blinks: Looking without Seeing and Seeing without Looking. Visual Cognition 7(1–3): 191–211.
Otto J L and Webber B J (2013) Mental Health Diagnoses and Counseling among Pilots of Remotely Piloted Aircraft in the United States Air Force. Medical Surveillance Monthly Report.
Ouma J A, Chappelle W L and Salinas A (2011) Facets of Occupational Burnout among US Air Force Active Duty and National Guard/Reserve MQ-1 Predator and MQ-9 Reaper Operators. Wright-Patterson AFB: Air Force Research Laboratory. Air Force Research Laboratory Report.
Patterson Z (2009) From the Gun Controller to the Mandala: The Cybernetic Cinema of John and James Whitney. Grey Room 36: 36–57.
Poole D (1997) Vision, Race, and Modernity: A Visual Economy of the Andean Image World. Princeton: Princeton University Press.
Press E (2018) The Wounds of the Drone Warrior. The New York Times, 13 June. Available at www.nytimes.com/2018/06/13/magazine/veterans-ptsd-drone-warriorwounds.html (accessed 31 October 2018).
Said E (2007 [1978]) Orientalism. London: Penguin Classics.
Satia P (2014) Drones: A History From the British Middle East. Humanity: An International Journal of Human Rights, Humanitarianism, and Development 5(1): 1–31.
Sayes E (2014) Actor–Network Theory and Methodology: Just What Does It Mean to Say that Nonhumans Have Agency? Social Studies of Science 44(1): 134–149.
Shanker T and Richtel M (2011) In New Military, Data Overload Can Be Deadly. The New York Times, 16 January. Available at www.nytimes.com/2011/01/17/technology/17brain.html (accessed 31 October 2018).
van Veeren E (2018) Invisibilities. In Bleiker R (ed.) Visual Global Politics. London/New York: Routledge, 196–200.
Vandenberghe F (2002) Reconstructing Humants: A Humanist Critique of Actant-Network Theory. Theory, Culture & Society 19(5–6): 51–67.
Velicovich B and Stewart C S (2017) Drone Warrior: An Elite Soldier's Inside Account of the Hunt for America's Most Dangerous Enemies. New York: HarperCollins Publishers.
Virilio P (1989) War and Cinema: The Logistics of Perception. London: Verso.
Volkmann F C, Riggs L A, Ellicott A G and Moore R K (1982) Measurements of Visual Suppression During Opening, Closing and Blinking of the Eyes. Vision Research 22(8): 991–996.
Webster D M, Richter L and Kruglanski A W (1996) On Leaping to Conclusions When Feeling Tired: Mental Fatigue Effects on Impressional Primacy. Journal of Experimental Social Psychology 32(2): 181–195.
Welsh W (2011) Gorgon Stare Test Uncovers Major Glitches. Defense Systems, 24 January. Available at https://defensesystems.com/articles/2011/01/24/gorgon-stare-testshows-serious-glitches.aspx (accessed 31 October 2018).
Wilcox L B (2015) Bodies of Violence: Theorizing Embodied Subjects in International Relations. Oxford: Oxford University Press.
Williams A J (2011) Enabling Persistent Presence? Performing the Embodied Geopolitics of the Unmanned Aerial Vehicle Assemblage. Political Geography 30(7): 381–390.
6
What does technology do? Blockchains, co-production, and extensions of liberal market governance in Anglo-American finance
Malcolm Campbell-Verduyn
At the height of global market turmoil in 2008 a mysterious white paper authored by an individual or group of individuals using the pseudonym Satoshi Nakamoto was circulated on a mailing list for cryptographers. This technical paper proposed the design of an electronic cash system called Bitcoin (Nakamoto, 2008). In a nod to earlier systems of global exchange centered around precious metals, like the gold standard of the early twentieth century, Bitcoin was widely likened by its advocates to "digital gold" (Champagne, 2014; Popper, 2015). Materializing in 2009, Bitcoin failed over its initial decade to displace national currencies, and the American dollar in particular, a central component of the global financial system. What instead occurred is that blockchain, the technology underlying the first so-called cryptocurrency, achieved widespread prominence for providing a novel manner of undertaking, recording, and publishing digital transactions. National, regional, and international governmental, non-governmental, and professional organizations all jumped on the "blockchain bandwagon" in proposing as well as undertaking experiments with this emergent technology. After being touted at meetings of the global elite at Davos and elsewhere, however, the blockchain bubble began to deflate as wild optimism regarding this technology progressively shifted to more profound skepticism over its actual uses and usefulness. Fundamental questions initially overlooked or merely glossed over began to be more carefully considered, including: What actually are blockchains? What can and should this technology be used for? And what, if any, lessons do initial applications of this technology to Bitcoin and its now thousands of competitors provide for the flurry of wider blockchain applications continually being touted? Answers to these and further questions have been advanced in a small but rapidly growing interdisciplinary blockchain literature with major book-length contributions provided by computer scientists, economists, and legal scholars (Narayanan et al., 2016; Ammous, 2018; De Filippi and Wright, 2018). Scholarly discussion of this technology has, however, taken on a largely technical, economistic, and legalistic flavor. Analysis has tended to remain focused on narrower issues of hackability, monetary characteristics,
and potential integration of blockchain applications within existing laws and regulations. To these as well as wider interdisciplinary debates on emergent technologies, International Relations (IR) scholarship can, and should, add normatively grounded analyses that build on and further engage Science and Technology Studies (STS). To broaden analysis of this particular set of technologies whilst contributing to the central conceptual themes of agency emphasized in this volume, this chapter combines insights from IR and STS in situating blockchains and their applications within evolving patterns of authority in global governance. Generally referring to the legitimate exercise of power (Krieger, 1977), authority is considered in IR to be central to developing both formal regulations, rules and hard laws, as well as to more informal modalities of ordering through norms, social codes of "proper" conduct, and so-called soft laws (Lake, 2010; Weiss and Wilkinson, 2014). This chapter harnesses insights from the Social Construction of Technology (SCOT) to trace how authority is co-produced between technologies and their users, highlighting the wider implications that processes of socio-technical change pose for both establishing as well as resisting the legitimation of power in global governance. The central argument advanced is that blockchains legitimate the power of their human users, who in turn legitimate the power of blockchains, in an evolving process of co-production that has provided enabling conditions for extending liberal governance modalities since the 2007–8 global financial crisis. Tracing and exposing processes in which authority in global governance is co-produced by humans and technologies highlights possibilities for resisting and challenging market-based forms of governance that not only persisted but became increasingly embedded in transnational activities despite their profound contestation at the height of what was the most severe period of volatility since the Great Depression. These arguments are elaborated across five sections. The first section identifies technologies in general and blockchains in particular as neglected factors in interdisciplinary efforts to understand post-2008 extensions of liberal modalities in both formal and informal governance. The second section reviews how an emerging segment of the blockchain literature has helpfully highlighted the roles of this set of technologies in legitimating the power of their users at two key registers of the international: the global economy, particularly global finance, and its governance, specifically in its Anglo-American financial "heartland" (Gowan, 2009). The third section then integrates IR insights with SCOT perspectives to conceptualize the parallel agency exercised by technologies and humans in co-producing authority. The fourth section draws on publicly available primary documentation as well as reporting by leading financial and technology media1 to trace how a range of state and non-state actors have accorded blockchains input, output, and throughput legitimacy. A fifth and final section identifies opportunities for exercising agency in resisting and challenging market-based governance in processes of co-constitution as a value-added that IR can provide
to wider debates on technology, authority, and governance through further engagement with STS.
Financial governance, innovation, and the post-crisis persistence of liberal governance
Finance is often referred to as the heart of contemporary global capitalism. Credit and insurance provision, transaction settlement, and other financial services underpin commercial exchange within and across national borders. Global financial governance is intended to ensure that these market processes occur in stable, efficient, and socially useful manners. As with other areas of socio-economic activity, such governance involves both formal regulations as well as more informal norms and social codes of conduct. Although far from dichotomous, the former tends to be state-based and provided by national, regional, and international organizations while the latter are more market-based and provided by firms, professional and industry associations (McKeen-Edwards and Porter, 2013). During the short-lived Bretton Woods period between the Second World War and the early 1970s, global financial governance involved more "hands on" state-based regulation and stricter international controls over finance. By contrast, in both the preceding and subsequent periods, global financial governance has been more market-based as states were more, yet not entirely, "hands off" in the governance of global finance (Sassen, 2008; Germain, 2010; Knafo, 2013). Financial innovations, or so-called "finnovations," have contributed both to the global spread of financial activities as well as to the decline of Bretton Woods controls and the re-appearance of increasingly severe market volatilities since the 1970s. The information and communication technologies associated with the Third Industrial Revolution supported the re-emergence of global financial markets and the decline of the stricter regulations of the post-war era (Helleiner, 1994). These technologies have also been closely linked to reoccurring financial crises (Campbell-Verduyn et al., forthcoming). Most recently, in the 2007–8 global financial crisis, the key finnovations were the so-called asset-backed securities (ABS) that packaged bank loans into tradeable securities, as well as credit default swaps (CDS) and other tradable types of derivatives that promised to spread risks and enhance access to credit. These finnovations were actively encouraged in the liberal UK and US regulatory regimes overseeing the leading global financial centers based in the City of London and Manhattan, respectively (Johnson and Kwak, 2010; Nesvetailova, 2010). Yet ABS and CDS ended up concentrating risks into several systemically important and "too big to fail" financial institutions whose insolvencies might have led to the complete collapse of the global financial system were it not for the large government bailouts of financial service firms in 2008 (Woll, 2014). Although there were various causes of the 2007–8 crisis, the market-based governance of financial innovation was identified as central by several post-crisis inquiry commissions (Financial
Services Authority, 2009; The Warwick Commission, 2009; United Nations, 2009). As International Political Economy (IPE) scholars Leonard Seabrooke and Eleni Tsingou (2010: 313) put it, the most recent global financial crisis stemmed "from an over-supply of financial innovation and undersupply of regulation in OECD countries." Despite being identified as a key contributor to the most recent global financial crisis, liberal modalities of market-based governance have been extended to novel areas of activity in the wake of 2008. In seeking to understand the post-2008 "resilience" (Schmidt and Thatcher, 2013) and the "non-death of neoliberalism" (Crouch, 2011), scholars from IPE and cognate disciplines have pointed to the persistent material power of key market actors, such as the large banks, as well as to the ideational power of entrepreneurial principles in supporting the "natural" extension of market-based governance (Lall, 2012; Rethel and Sinclair, 2012; Watson, 2018). Further studies specify how such forms of power play out at key international forums and professional organizations as well as in processes of "regulatory capture" (Underhill, 2015; Campbell-Verduyn, 2017; Garsten and Sörbom, 2018). Largely overlooked in these and other analyses have been the roles of material applications of expert knowledge, or technologies, and their users in the governance of global finance in the decade following the 2008 crisis. The following sections address this gap, first in tracing the techno-agency of blockchains before turning to the agency exercised by technology users in providing the enabling conditions for extensions of liberal governance since 2008.
Blockchains and the production of market authority in the post-crisis period
A central goal of the still unidentified authors of the technical blueprint for Bitcoin was to bypass the "financial institutions serving as trusted third parties to process electronic payments" (Nakamoto, 2008: 1). Interdisciplinary studies of blockchains have increasingly recognized how the objective of moving beyond existing governance actors and processes was fundamentally underpinned by libertarian ideology (Karlstrøm, 2014) and "extreme individualism" (Atzori, 2017: 55). These arguments echo wider insights into how the libertarian "Californian Ideology" of many information and communication technologies (ICTs) positions their users in individualistic and exclusionary social relations. Golumbia (2015: 119–20), for example, argues that blockchains advance "a 'program' for recruiting uninformed citizens into a neoliberal and (nominally) antigovernment political discourse." DuPont and Maurer (2015: n.p.) meanwhile characterize as "gamification" the incentives blockchains provide their users for resolving complex mathematical puzzles and verifying peer-to-peer transactions, namely the reward of monetary-like crypto-tokens. Building on the precious metals metaphor invoked by Bitcoin advocates, DuPont and Maurer (2015: n.p.) compare the incentive process to "old-time physical world mining: you're all digging, digging, but sometimes, eureka!, you strike gold. That is
what keeps you digging." Undertaking, verifying, and broadcasting transactions thereby positions blockchain users as dispersed, atomized, self-interested homo economicus. Blockchain users are made to rely on individual rather than collective interests in accruing monetary-like crypto-tokens. Once verified, however, transactions are broadcast on a permanent, immutable ledger that cannot be altered by any one user, since the "community" must come to a technical consensus in order to alter the ledger of transactions. Yet if crypto-tokens are stolen or lost there is no recourse to a higher common authority available. Blockchain users are left in what Atzori (2017: 55) compares to a Hobbesian "state of nature, in which the law of might – or the laws of the market – prevails." Tracing how the particular ideas and ideologies underlying blockchain applications position their users in individualistic social relations helpfully injects further nuance into understanding the enabling conditions for post-2008 extensions of liberal market modalities. Blockchains produce users that are "consumer-citizens" (Swan, 2015) in granting them choice of where, how, and if ever, to form ad hoc formal governance. This "Do-It-Yourself governance" (Atzori, 2017: 46) is widely praised by blockchain supporters for enabling the development of novel forms of organization and management that "can be tailored to the needs of individuals" (Swan, 2015: 44). A leading example of such market-based governance is BitNation, a blockchain firm offering a range of voluntary state-like services, such as registries for marriages, death certificates, and land titles.2 In Anglo-American finance, several competing consortium blockchains have formed among banks, insurers, and other corporations that allow their members to opt in and out of such "club governance" at will (Tsingou, 2015; see Stafford and Murphy, 2016; Hackett, 2017; Ralph, 2017).3 The firms involved in industry experiments have developed what have alternatively been labeled "permissioned," "private," or "consortium" blockchains in which one or multiple actors are empowered to decide who is able or unable to participate in shared networks. Beyond highlighting the continually evolving nature and definition of this set of technologies, these trials and experiments with network closure and centralization of power legitimate the types of market-based governance provided by firms rather than governments. The big banks that were centrally implicated in the 2008 financial crisis and continuing technical failures4 gain authority as innovative actors in functioning as gatekeepers to evolving blockchain-based networks. Pioneering the application of this set of emergent technologies enables these and other financial services firms to attract talented personnel that Maurer (2016: 86) has indicated are less keen on fulfilling the "drudgery" of back office operations than pursuing individual "self-development and self-improvement." Such industry blockchain experimentation legitimates the persistent power of its users, most prominently including the big banks, as well as the loose and club-like market-based industry governance they have advanced.
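The mechanisms invoked above can be illustrated schematically. The following toy Python sketch conveys the general principles of proof-of-work "mining" and hash-chained immutability rather than Bitcoin's actual implementation; the difficulty setting, reward figure, and participant names are illustrative assumptions. It shows why the "digging" is individually incentivized and why no lone user can quietly rewrite the shared ledger.

import hashlib
import json

DIFFICULTY = 4     # leading zeros required of a valid block hash; illustrative only
MINING_REWARD = 1  # crypto-token paid to the successful "digger"; illustrative only

def block_hash(block):
    # Hash the block's canonical JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(transactions, previous_hash):
    # "Dig" by trying nonces until the block hash meets the difficulty target.
    nonce = 0
    while True:
        block = {"transactions": transactions,
                 "previous_hash": previous_hash,
                 "nonce": nonce}
        if block_hash(block).startswith("0" * DIFFICULTY):
            return block  # eureka! this miner alone earns MINING_REWARD
        nonce += 1

def chain_is_valid(chain):
    # Any retroactive edit breaks the hash links and the difficulty check.
    for i, block in enumerate(chain):
        if not block_hash(block).startswith("0" * DIFFICULTY):
            return False
        if i > 0 and block["previous_hash"] != block_hash(chain[i - 1]):
            return False
    return True

genesis = mine([{"from": "network", "to": "miner_a", "amount": MINING_REWARD}], "0" * 64)
block_2 = mine([{"from": "miner_a", "to": "user_b", "amount": 1}], block_hash(genesis))
ledger = [genesis, block_2]
print(chain_is_valid(ledger))                   # True
genesis["transactions"][0]["amount"] = 1000000  # a lone user edits history
print(chain_is_valid(ledger))                   # False: the rest of the network would reject it

Permissioned or consortium variants retain this underlying data structure but add a gatekeeping layer in which only pre-approved members may submit or validate blocks; it is precisely this design choice that re-centralizes power in the hands of the firms operating such networks.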
The persistent authority of banks and other firms is most starkly reflected in the consistently hands-off, formal Anglo-American regulation of financial activities enabled by blockchain in the decade since the 2008 crisis. By international standards, British and American regulation of blockchain-based finance has remained liberal (Nian and Chuen, 2015; Kai and Zhang, 2017; Yeoh, 2017). Despite exceptions in prominent cases of illicit activity, such as blatant money laundering schemes as well as frauds (Campbell-Verduyn and Huetten, 2018), a largely "wait and see" approach has been maintained (Herian, 2018: 169). The British Financial Conduct Authority (FCA) and other UK officials have limited themselves to "actively monitoring developments" with blockchains (Macknight, 2016). A "regulatory sandbox" was established that relaxes British law in allowing for controlled and monitored experiments by firms trialing financial applications of blockchains and other emergent technologies over a defined period of time (Financial Conduct Authority, 2015). American regulators, meanwhile, have largely sought to "avoid undue restrictions," "do no harm," and rely on "bottom-up" governance based on the "nature of the technology," as a commissioner of the Commodity Futures Trading Commission put it (Giancarlo, 2016). Again, with exceptions in some of the most glaring cases (e.g., Michaels and Loder, 2018), active monitoring and a hands-off approach have characterized formal regulatory activity in even the riskiest of blockchain activities, such as the trading of Bitcoin derivatives that threaten to enhance volatility by directly linking the cryptocurrency and mainstream financial systems (Jopson and Wigglesworth, 2017; Campbell-Verduyn and Goguen, 2018). Emphasizing the power of blockchains to produce their users in liberal market relations usefully draws attention to some of the neglected conditions enabling extensions of hands-off liberal governance in the wake of the global financial crisis. Yet, foregrounding the agency of technologies, on its own, can underemphasize the parallel forms of power exercised by human users in legitimating evolving applications of technologies. How have blockchain users in turn sought to justify such extensions of market-based forms of governance to risky technology-enabled activities? Anglo-American regulators are well aware of the risks inherent in activities enabled by this novel technology. For example, the network of regulators coordinated by the White House National Economic Council warned in 2016 that blockchain applications were likely to foster further financial instabilities (Jopson, 2016). A year later, the British FCA began urging "caution" to firms and individuals utilizing blockchain technologies (Bobeldijk, 2017). These warnings have been more widely echoed by national, supra-national, and sub-national regulators throughout OECD countries (e.g., European Central Bank, 2016; Eckert and Zschäpitz, 2017; Ontario Securities Commission, 2017), as well as by a range of international and intergovernmental organizations (He et al., 2016: 23, 32; Campbell-Verduyn, 2018). In drawing together insights from socio-legal studies, IR analysis of technology, as well as Science and Technology Studies, the following sections first conceptualize and then
empirically trace the governance implications arising from the manners in which technologies and their users have exercised agency in co-producing authority in global governance.
Co-producing authority in global governance
The forms of techno-agency emphasized in this volume and detailed above echo a longer tradition of socio-legal inquiry illustrating how the ideational and material properties of technologies shape the activities of their human users. Technologies have long been regarded as exercising the types of productive power typically associated with human actors in global governance (Barnett and Duvall, 2005). The specific features and designs of technologies have, for instance, been recognized as establishing "patterns of power and authority in a given setting" (Winner, 1980: 134). The specific norms and values underpinning technologies are regarded as shaping and constraining the interests and identities of their human users (Hutchby, 2001; Wellman et al., 2003). Rather than a variable that may or may not influence the power and legitimacy of human actors, technologies are thus "political phenomena in their own right," not unlike "legislative acts or political foundings that establish a framework for public order" (Winner, 1986: 22). Such insights have informed a growing number of IR studies over the past decade. Reid (2009: 611), for instance, notes how technologies "constitute humans with specific tendencies and habits." The specific arrangements and architectures of technologies are considered "arrangements of power" (DeNardis, 2014: 7). Singh (2013), for instance, emphasizes the "meta-power" of technologies to transform the meanings informing both the interests and ideas of human actors. Elsewhere, technologies are considered as materializing "social designs" in "normative hardening" processes (Kaufmann, 2016: 81) that trigger "agentic properties: they 'do' things" (Davidshofer et al., 2017: 220). Contributors to a volume exploring technology and global politics consider technology a "deeply political phenomenon" (Fritsch, 2014: 117) that is "interwoven" with human action in ways that are "not just objective and neutral" (Mayer et al., 2014a: 19). The contributions of technologies to producing key global governance actors and processes are detailed in security studies emphasizing the integration of socio-technical "devices"5 for border control, population management, and state security (Wesseling et al., 2012; Leander, 2013; Leese, 2016). IPE studies meanwhile emphasize how specific technical standards, codes, and models underpinning technologies actively shape the goals upon which state and non-state actors base their behavior. Technologies are shown to possess the power to "produce vast patterns of regularized activity" (Porter, 2003: 522), including when their centralized or decentralized characteristics give rise to decentralized or centralized modalities of governance. Regulatory efforts to combat the money-laundering potential of cryptocurrencies, for instance, are more successful when emulating the decentralized and experimental nature of blockchain technologies themselves (Campbell-Verduyn, 2018).
While providing useful correctives to the tendency to over-emphasize human agency in IR, such techno-agency should not be considered separately from, or at the expense of, the parallel power exercised by humans. Emphasis on techno-agency must be contextualized within the related forms of agency exercised by humans operating both individually and collectively within firms, professional organizations, states, and social classes. This parallel emphasis can contribute to wider efforts in IR and the social sciences more generally to develop a "middle zone" (Mayer et al., 2014b: 2) between social determinism and technological determinism. A particularly useful manner of avoiding such extremes is by linking key IR concepts of governance, legitimacy, and power to insights from what has broadly been defined as the Social Construction of Technology (SCOT). Like constructivist approaches in IR, SCOT includes both middle-ground and radical variants. These, however, overlap in a shared stress on how meaning accorded to technologies "emerges as the result of negotiations between technological constraints and social groups" (Manjikian, 2018: 29). One manner in which this negotiation occurs and can be traced is through everyday discourses wherein human actors, whose identities and interests are produced by technologies, in turn produce the legitimacy of technologies. As Manjikian (2018: 32–3) puts it, SCOT stresses how "[a]n object's meaning is enacted or constituted through a number of factors, including the language which may be mobilized to describe the object." The idiom of co-production, as developed in the work of Jasanoff (2004a, 2004b), points to the specific values and ideas underpinning human discourses producing non-human authority. Jasanoff (2004b: 4) argues that highlighting these is helpful in "revealing unsuspected dimensions of ethics, values, lawfulness and power within the epistemic, material and social formations that constitute science and technology." Tracing co-productions of authority focuses attention on the values underpinning discourses of "how people recognize […] and assign meaning" to technologies in manners that are essential for the "stabilization of new objects or phenomena" (Jasanoff, 2004b: 5; see also Jacobsen and Monsees, this volume). SCOT insights and tracing processes in which authority is co-produced provide insights into how the discourses of individual humans and human organizations legitimate novel activities enabled by powerful technologies. A focus on such "discursive negotiation processes between social actors" (Fritsch, 2014: 124) is not unfamiliar in IR and IPE, where studies have, for instance, examined how legitimacy is imbued through the attachment of normatively positive meanings to technology applications and the activities they facilitate (Youngs, 2007; Mayer et al., 2014a: 17). Yet there is a tendency in each of these disciplines to focus on established technologies whose meaning is largely settled and more or less taken for granted. For instance, despite some early consideration of the most notable technology of the past decades,6 the Internet only began to receive more sustained analysis in IR once online connectivity became widespread in OECD countries (Mueller,
2010; Choucri, 2012; McCarthy, 2015). This lag, more recently apparent in the dearth of current IR analysis of the growing range of everyday objects and transnational activities making up the so-called Internet of Things, has more widely led to an under-specification of the processes through which novel technologies are both legitimated and delegitimated as well as contested as their initial applications emerge. Emergent technologies have been defined as novel forms of knowledge whose material application and practical integration into established activities remain largely, if not wholly, unsettled (Einsiedel, 2009). In SCOT terms, objects like blockchains have not yet been subjected to "technological closure" in which their social meaning is regarded as "settled" (Manjikian, 2018: 29). As emergent technologies attract growing attention beyond their initial communities of developers, wider social understanding of what they are and what they do tends to be framed in highly simplified everyday discourses (Skolnikoff, 1993: 168; Rotolo et al., 2015). The emergence of these framings is useful to trace in understanding the more general meaning, beyond the technical specifications provided by developer communities, of whether or not technologies contribute to common social concerns. As with other types of legitimation considered in IR and the wider social sciences, the human agency exercised in legitimating non-human objects is fundamentally normative. Social scientists have long recognized that the manners in which power is made to correspond with the values dominant in particular places, and at specific times, are far from an automatic or merely neutral process (Grafstein, 1981; Bourricaud, 1987; Beetham, 1991). Legitimate exercises of power, whether human or non-human, are always grounded in normative discourses emphasizing the congruence with wider norms and values dominant in a time and place. Tracing co-productions of authority through insights from SCOT "alerts us to the fact that power is produced as much through the elision of marginalized alternatives as through the positive adoption of dominant viewpoints" (Jasanoff, 2004a: 280). Such marginalization and countervailing efforts to ensure congruence with dominant norms are never completely fixed. Rather, they remain continually dependent on articulating the normatively positive or negative implications of technologies. In doing so, SCOT and those studies that focus on the co-productions of authority highlight the uneven processes in which meanings are accorded to technology by certain users. The power to legitimate technologies is exercised by both elite and everyday individuals and organizations alike. Indeed, as studies of blockchain applications note, the "technology does not determine social relations unidirectionally; rather, power can be seen as co-produced through an articulation of diverse human and non-human actors" (Rodima-Taylor and Grimes, 2019). Yet, the discursive framings of the social elites tend to bear most directly on, and impact, formal governance in highly technical domains, like global finance, that have long lacked wider democratic input (Porter, 2001; Baker, 2009). How specific applications of technologies are understood by the elite actors most actively involved in
technocratic policy processes impacts the hands-on or hands-off nature of formal regulation (Peters, 2005; Birkland, 2006; Breznitz, 2012). As emphasized in global public policy literatures that have been concerned with how a "moral of the story" translates into policy outcomes (Jones and McBeth, 2010: 341), the discursive framings of technologies, their applications, and their implications can crucially inform initial stages of what is referred to as "rule emergence" (Roger and Dauvergne, 2016). Whether novel technology-enabled activities are perceived as requiring formal state-based governance, informal market-based governance, or some combination of the two fundamentally depends on much narrower forms of legitimation that occur within tightly knit policy networks. Tracing co-productions of authority through a SCOT-inspired approach therefore helps reveal the parallel power exercised by certain human and non-human actors shaping one another's legitimacy, as well as the particular implications posed for the nature of governance. While varied and multifaceted, "co-production occurs neither at random nor contingently, but along certain well documented pathways" (Jasanoff, 2004b: 6). On the one hand, human actors exercise varying power in producing the powerful technologies as legitimate contributors to society. On the other hand, technologies produce the legitimacy of their human users. These parallel dynamics are depicted in a necessarily simplified manner in Figure 6.1. Taken together, they inform patterns of informal and formal governance from the initial design and creation of technologies to their on-going and wider applications. The dynamics of co-productions are therefore useful for tracing particular patterns of authority in varied times and places. Despite the globalized nature of the so-called digital age, human and techno-agency are not simply exercised "out there" in unidentifiable locations. As Saskia Sassen (2008: 414; emph. add.) has argued, patterns of authority are "profoundly rooted in local specifics and often derive much of their meaning from nondigital domains." In other words, co-productions of authority at any one time and place are difficult to generalize as universal experiences.
[Figure 6.1 Co-producing authority in global governance. The diagram depicts a feedback loop: technologies (emergent and established) exercise the power to shape and constrain technology users (state and non-state actors), while those users in turn legitimate technologies.]
With the important caveat that the particularities of circumstances are unlikely to be reproduced in precisely the same manners elsewhere, the following section turns to co-productions of authority, and the implications arising for governance, in Anglo-American finance after the 2008 global financial crisis.
Blockchain users producing a socially desirable technology
Illustrating the agency exercised by a range of human actors in parallel with techno-agency through a SCOT-influenced approach provides further insights into conditions that have enabled extensions of liberal governance since 2008. In the first half-decade following the global financial crisis, discourses stemming from a wide range of influential national and international organizations, media outlets, as well as individual commentators initially stressed the socially undesirable applications of blockchains to cryptocurrencies (e.g., Cyber Intelligence Section and Criminal Intelligence Section, 2012; Engle, 2015; Fernholz, 2015; Irwin and Milad, 2016). The potential for Bitcoin and competing cryptocurrencies to be used for speculation, tax avoidance, and trade in illicit goods and services led to simplified characterizations of this intricate set of finnovations as a "threat to the modern liberal state" (Soltas, 2013: n.p.) or, more simply yet, "evil" (Krugman, 2013: n.p.). Narratives surrounding the initial application of blockchains delegitimated the social contributions of this technology. Blockchain-based financial activities in turn became susceptible to calls for states either to apply existing regulations or to develop formal manners of advancing a hands-on approach (e.g., Plassaras, 2013). Although such measures emerged in states like Russia and China, stricter formal governance, including bans on cryptocurrencies, was ultimately avoided in the two jurisdictions hosting the world's largest financial centers, London and New York City, and dominating formal international financial governance organizations like the International Monetary Fund (IMF). This section traces the productive power exercised by a range of elite and non-elite human actors in legitimating blockchains in manners that have provided justifications for extensions of market-based governance to the activities enabled by this set of emergent technologies in Anglo-American finance and beyond. As the previous section emphasized, discourses surrounding emergent technologies are never fixed. They can shift in unexpected manners that pose profound implications for the authority of activities that their unfolding applications enable. A "vocabulary vortex" surrounding blockchains has been identified as fundamentally muddling "what it is, its features, or its flaws" (Walch, 2017: 14). Such discourse, Herian (2018: 170) argues, "recounts a broad matrix of socio-economic and political issues in a constant state of flux." Representations of blockchain applications as socially acceptable became widespread about a half-decade after the 2008 global financial crisis. Participants at the annual Davos World Economic Forum gathering of elite individuals and organizations (Kaminska and Tett, 2016; Schwab, 2017) as well as leading media outlets (e.g., The Economist, 2015) began framing the potential contributions of blockchains in much more socially positive
manners. Blockchains increasingly came to be considered by a range of commentators as magic bullets for resolving many of the complex problems facing the world, from corruption and abuse in global supply chains to establishing identities and ownership rights in conflict zones (United Nations High Commissioner for Refugees, 2018). Blockchain solutionism grew to such dizzying heights that even typically cheerleading consultancies considered the technology to have reached peak hype nearly a decade after the publication of Satoshi's white paper (Panetta, 2017). Problems persistently plaguing the initial set of blockchain applications, such as the potential for cryptocurrencies to facilitate money laundering and terrorism financing, became regarded as resolvable not by bans on the technology, but rather by further experiments with the technology. The Managing Director of the IMF, for example, argued that blockchain could be used to "fight fire with fire" (Lagarde, 2018: n.p.) in addressing many of the problems that earlier applications of the technology were regarded as enhancing. The framing of blockchain as an increasingly desirable material application of expert knowledge was perhaps most clearly pronounced in the first book dedicated to the technology. This volume enthusiastically exclaimed that the emergent technology could provide "the most efficient and equitable models for administering all transnational public goods, particularly due to their participative, democratic, and distributed nature" (Swan, 2015: 31; emph. add.). A threefold discursive stress on 1) inclusive participation, 2) transparent operation, and 3) efficient transnational decision-making outcomes enabled by wider blockchain applications was advanced and echoed by a variety of state and non-state actors. The productive power exercised by this dispersed range of human actors in situating blockchains as acceptable and legitimate contributors to liberal capitalism can be traced by considering each of the major conceptions of legitimacy typically stressed in IR: input, throughput, and output (Risse and Kleine, 2007; Schmidt, 2013). In the first instance, narratives stressing the ability of blockchain applications to involve a wide range of participants in transnational decision-making contributed to the input legitimacy of this emergent technology. The open-source code of blockchains, along with the transaction verification processes in which all users vote on secure and irreversible exchanges, was emphasized in widespread praise of the technology by financiers and media analysts, as well as by liberal and libertarian political parties (Sparkes, 2014; Wild, 2015; Del Castillo, 2016). Stock market operators like the NASDAQ (DeMarinis et al., 2017) stressed the ability of the technology to ensure the accuracy of vote counting in tamper-proof electronic elections in which a wide range of individuals around the world can participate (Tapscott, 2016a). Applications of blockchains enabling forms of "liquid democracy," in which the delegation of votes can be revised in real-time electronic referenda,7 received positive attention in discourses emphasizing how these and other emerging applications of the technology could allow for
"potentially more equality, justice, and freedom available to organizations and their participants" (Swan, 2015: 30). In a second instance, the transparent and accountable operation of transnational decision-making, or throughput legitimacy, was equally emphasized in simplified discursive framings of blockchains and their applications by a range of human actors in Anglo-American finance and beyond. Central bankers extolled the potential of the technology to be applied in providing clearer overviews of the economy (e.g., Ali et al., 2014). A leading global financial association praised blockchains for their potential to provide "a comprehensive, secure, precise, irreversible, and permanent financial audit trail" that may, for example, "improve transparency and oversight" in financial markets (Institute of International Finance, 2015: 4). Blockchains received further praise for their potential to enhance transparency in corporate governance. The layering of blockchain-based 'smart' contracts8 on top of one another, forming so-called decentralized autonomous organizations (DAOs) or decentralized autonomous corporations (DACs), received positive attention for potentially allowing individual users dispersed worldwide to "look up and confirm the activities of transnational organizations on the blockchain" (Swan, 2015: 31). These pre-programmed automated entities also received praise from prominent technologists for the potential to provide "perfect financial transparency" and ensure that a "company's finances are visible on the blockchain to anyone" (Tapscott, 2016b: n.p.; see also Tapscott and Tapscott, 2016). The prospect of organizational management beyond the day-to-day control of humans was framed as normatively positive for helping to overcome the "information asymmetry between management and stakeholders" underpinning what was considered to be otherwise opaque decision-making (Tapscott, 2016b: n.p.). The ability of wider blockchain applications to enhance the transparency of a range of processes beyond finance was further emphasized by media analyses extolling how the technology could "bring greater transparency to the assertions people make about their educational records and make it easier for students to selectively share their scores with educational tech companies for customized tutoring or support" (Casey and Forde, 2016: n.p.). The chief UK scientific adviser similarly emphasized how government applications of blockchains could "ensure the integrity of government records and services" and thereby "redefine the relationship between government and the citizen in terms of data sharing, transparency and trust" (Sir Mark Walport cited in Cookson, 2016; see also Walport, 2016). As elaborated further below, "censorship resistant" blockchain-enabled electronic voting (BEV) received official praise from the likes of the European Parliamentary Research Service (Boucher, 2016).9 Together, these discursive framings further enhanced the throughput legitimacy of applications of a technology that legal scholars argue "derives legitimacy and authority from promises of inter alia radical transparency that are tantalizing" (Herian, 2018: 167).
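The "smart" contracts and DAOs invoked in these framings are, at base, small programs whose rules and state changes are recorded on a shared ledger. The following minimal Python sketch is a stylized illustration of that logic rather than actual contract code for any existing platform; the escrow scenario, party names, and amounts are assumptions made for illustration. It conveys the kind of openly auditable decision trail that underlies claims of "perfect financial transparency."

from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    # Toy "smart" contract: funds are released only once both parties approve,
    # and every state change is appended to a log any observer can replay.
    buyer: str
    seller: str
    amount: int
    approvals: set = field(default_factory=set)
    released: bool = False
    audit_log: list = field(default_factory=list)

    def approve(self, party):
        if party not in (self.buyer, self.seller):
            raise ValueError("only contract parties may approve")
        self.approvals.add(party)
        self.audit_log.append(party + " approved release")
        # The pre-programmed rule, not a manager, decides when execution occurs.
        if self.approvals == {self.buyer, self.seller} and not self.released:
            self.released = True
            self.audit_log.append(f"{self.amount} tokens released to {self.seller}")

contract = EscrowContract(buyer="alice", seller="bob", amount=10)
contract.approve("alice")
contract.approve("bob")
print(contract.audit_log)
# ['alice approved release', 'bob approved release', '10 tokens released to bob']

A DAO extends this principle by composing many such pre-programmed rules for membership, voting, and disbursement, so that organizational decision-making is, at least in principle, visible to anyone inspecting the ledger; the politics lie in who writes, and who can change, the rules.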
In a third instance, the simplified discourses of a range of human actors further enhanced the output legitimacy of blockchains by emphasizing the contributions of the emergent technology to greater efficiency of outcomes in transnational decision-making. Non-governmental organizations (NGOs) like the Bill and Melinda Gates Foundation (Higgins, 2015), government ministries like the UK Treasury (Her Majesty's Treasury, 2015: 6), and international organizations like the IMF (He et al., 2016: 6) all framed the potential contributions of blockchain applications in normatively positive manners. Remittances were frequently cited as the most prominent example of how applications of blockchains could empower individuals over centralized financial institutions by enabling migrants to transfer cryptocurrencies around the world without incurring the high fees charged by money transfer firms (Ammous, 2015: 44; Rodima-Taylor and Grimes, 2019). The ability to cheaply undertake such transfers of value was regarded as productively undermining the legitimacy of the billion-dollar profits earned yearly by centralized firms like Western Union while enhancing the legitimacy of more decentralized non-profit organizations facilitating remittances through blockchain-based networks, like the San Francisco-based Stellar Development Foundation (Kramer, 2013). The rapid growth of Stellar10 and the positive media attention generated by its partnerships with IBM in experiments seeking to overcome key pathologies associated with Bitcoin (Del Castillo, 2018; Wieczner, 2018) completed a discursive shift that legitimated even cryptocurrencies, whose social outcomes had originally been regarded as entirely illegitimate. The human agency exercised in producing and re-producing blockchain applications as legitimate contributors to the ordering, organization, and management of liberal capitalism was not merely a strategic affair. Even the financial market actors whose authority was repeatedly slated to be challenged by applications of blockchains exercised agency in stressing the efficiency gains offered by applications of these technologies. For instance, the longstanding power of finance professionals was widely slated to decline as applications of blockchains "significantly reduce the reliance on auditors for testing financial transactions" (Spoke, 2015: n.p.); disrupt the informational basis and interpretive work of insurers (Mainelli and von Gunten, 2014); and reduce the need for the lengthy and costly interpretations of ambiguous contractual language provided by legal professionals (DuPont and Maurer, 2015). Nevertheless, executives from professional services firms like Ernst & Young and PwC likened blockchain technologies to "the glue that is going to drive a productivity revolution across the globe on par with what Henry Ford did with the automobile" (cited in Eyers, 2015: n.p.) while praising the "broader macroeconomic efficiency" offered by blockchains (PwC, 2015: 5). Similarly, stock exchanges, considered to "no longer be technically necessary" as they "can be replaced by one or more blockchain-based, decentralized exchanges" (Wright and De Filippi, 2015: 27), praised blockchains for their ability to replace paper stock certificates with more
efficient digital records of share ownership (Arnold and Bullock, 2015) in ways that could replace the "labor-intensive process where even the most straightforward trades may require weeks to finalize thanks in part to the fact that paper certificates are still being used" (Institute of International Finance, 2015: 11). Other actors slated for redundancy in blockchain-based financial systems, like banks, applauded the wide array of efficiencies that applications of these technologies might provide (Committee on Payments and Market Infrastructures, 2015: 7). That the very actors whose authority was slated to be disrupted by blockchain applications contributed to the output legitimacy of the technology was not lost on the leading association of global banks, which noted how "no other industry is dedicating as much money researching blockchain as the one that Bitcoin was created to circumvent – the finance industry" (Institute of International Finance, 2015: 6). Financial sector investment in blockchains, estimated to have reached the $1 billion mark in 2016 (Greenwich Associates, 2016), was justified as contributing to the general rather than particular benefit of improving the "efficiency of cross border payments and the currency exchange market" (Institute of International Finance, 2015: 3). Such justifications were further echoed by state actors like the former British Economic Secretary, who applauded blockchains for their potential to make a range of transactions more efficient and secure (cited in Baldwin, 2015), as well as by the Secretary General of the International Organization of Securities Commissions, who praised the ability of blockchains to more efficiently collect financial trading data and reduce the collateral held against cross-border financial trades (cited in Webb, 2015). The potential efficiency gains in settling and clearing a wide range of financial activities through blockchains also received praise from central banks (e.g., Barrdear and Kumhof, 2016) and professionals (Maurer, 2016). Whether ultimately self-serving and strategically advancing particular interests or not, the agency exercised by these actors in discursively framing blockchain applications in normatively positive manners further legitimated the hands-off approach of Anglo-American financial regulators to a technology seen as contributing to common interests. A SCOT-influenced approach highlights how the agency exercised by human blockchain users in making discursive linkages between this emergent technology and common values of enhanced transparency, efficiency, and democracy in governance processes provided the enabling conditions for extensions of controversial liberal forms of market-based regulation in the UK and US following the 2008 global financial crisis. Official emphasis on avoiding "undue restrictions" and relying on "bottom-up" governance based on the "nature of the technology" (Giancarlo, 2016) can be understood as stemming both from narratives justifying emerging blockchain applications as legitimate contributions to a liberal social order and from the production of blockchain users in liberal market relations. Tracing the parallel agency exercised by human and non-human actors in
producing authority in Anglo-American activities enabled by blockchains helps to understand how liberal market modalities were extended in the wake of a crisis attributed to precisely such forms of governance. The following and final section of this chapter summarizes and suggests additional pathways for harnessing SCOT and the co-production framework to understand the growing skepticism and resistance to blockchains and market-based governance.
Conclusions: IR and normative analysis of technology, authority, and global governance

IR is increasingly foregrounding technologies and their ramifications for continuity and change across, within, and at national borders. Despite these encouraging trends, it remains far from clear whether this welcome enhanced attention can overcome two longstanding shortcomings afflicting IR analysis of technology. First, as this chapter has noted, IR scholarship tends to consider technologies and their wider implications well after they have become established in transnational activities and once their authority has largely become backgrounded and assumed. The case of the Internet and its on-going extension into everyday objects and practices was briefly invoked, but the relative dearth of IR analysis of blockchains further exemplifies this point. A second and related shortcoming of IR analysis is the wider waxing and waning of interest in technologies. Early expectations that technologies would become central to IR analysis (Ogburn, 1949) materialized into a Cold War stress on nuclear technologies that did provide “some sense of the importance of technological advancements” (Mayer and Acuto, 2015: 664). Yet, in the period of remarkable technological change between 1990 and 2007, technologies were foregrounded in less than one per cent of articles published in leading IR journals (Mayer et al., 2014a: 14). That technologies have remained of “passing interest” (Palan, 1997: 18) in fields like IPE and largely “exogenous to the concerns of IR theorists” (McCarthy, 2015: 2) serves as an important reminder that just as “technology itself changes rapidly and in unpredictable ways” (Cutler et al., 1999: 8) so, too, do trends in scholarly research.

This chapter illustrated one way of sustaining and extending IR analysis in a manner that injects explicitly normative insights into what tend to be technical, legalistic, and economistic discussions of emergent technologies. Harnessing insights from SCOT, the chapter traced processes through which both humans and non-humans exercise agency in co-producing authority in global governance. The power of blockchain technologies and that of their human users was shown to have been legitimated in manners providing justifications and enabling conditions for extensions of liberal governance modalities since the 2008 global financial crisis. In a first step, the manners in which blockchains produce their users in individualistic relations were traced by drawing on insights from interdisciplinary studies of blockchains.
The agency exercised by various individuals and human organizations in discursively linking the activities enabled by blockchains to key liberal values in the UK and US was then illustrated. Together, the agency of technologies and humans was shown to have provided enabling conditions for extensions of liberal governance in the wake of a crisis that had seemed to fundamentally contest the longstanding dominance of markets and market-based actors in formal financial regulation as well as the everyday organization and oversight of transnational financial activities.

Tracing human agency and techno-agency in co-producing authority in global governance also highlights possibilities for resisting and challenging market-based forms of governance. As this chapter has emphasized, authority is far from automatically generated; it involves the agency of both human and non-human actors, exercised in often unpredictable manners. Resisting and contesting authority may occur as subtly as the initial legitimation of power. For instance, in merely questioning technology, or its supporting industry, media commentators may contribute to unsettling the taken-for-granted assumption that certain or all applications are more widely beneficial (e.g., Waters et al., 2015; Brooks, 2017). IR can further contribute to on-going debates over blockchains and emergent technology more generally by tracing not only how discursive correspondence between key properties of emergent technologies and wider social values is established, but also how such connections are regarded as failing to materialize in practice. Blockchain applications in Anglo-American finance, for instance, that stress contributions to liberal norms of transparency, democratization, and efficiency have increasingly been contrasted with the growing hierarchies and “digital divides” between users and coders, as well as the opaque decision-making in transnational blockchain-based activities (e.g., Hütten, 2019; Scott, 2016; Gerard, 2017). Yet, with some exceptions (Farrell, 2016; Scott, 2016; Shubber, 2016; Kaminska, 2017; Carstens, 2018: 8; Irrera and McCrank, 2018), such challenges and critiques are only slowly permeating into and informing the wider legitimacy accorded to this set of technologies in official and media discussions. Moving beyond the technical, economistic, and legalistic nature of much existing analysis, IR can inject more explicitly normative considerations of the manners in which de-legitimation also occurs, as well as the implications for market-based governance. For example, if the legitimacy of blockchains and other emergent technologies is contested, what implications might arise for formal and informal regulation? In engaging insights from SCOT and other STS perspectives, IR can explore these questions and productively enhance existing accounts of “legitimacy bubbles” and international “crises of legitimacy” (Reus-Smit, 2007). Resistance and challenges to the legitimation of power might also be more widely situated within the various cycles that have sought to model the impacts and trajectories of technologies (e.g., Akaev and Pantin, 2014; Thompson, 1990).

Another potentially fruitful line of enquiry that IR scholarship can take is investigating the varying roles of technologies and their users in similar or
different governance modalities in contexts beyond Anglo-American finance. Given that the ideas underlying technologies and their correspondence with common societal concerns remain “profoundly rooted in local specifics” (Sassen, 2008: 414), further studies might examine the simplified narratives accorded to technologies in societies where alternatives to liberal values are already more widely prioritized. In research on blockchains this might involve studies exploring the particular manners through which the technology is becoming increasingly integrated into the financial governance activities of East Asian governments, particularly South Korea and Japan (Huillet, 2018; Milano, 2018). Tracing the parallel agency exercised by technologies and their users in a much wider range of socio-economic contexts beyond Anglo-American finance is likely to add further nuance to the more general dynamics underpinning co-productions of authority highlighted in this chapter.

In sum, there exist several further pathways for sustaining and expanding the recent enhanced emphasis on technologies in IR. By exploring how technology and its users both extend and undermine the legitimacy of one another’s power, as well as the implications arising for patterns of governance, IR can inject further nuance into technical, legalistic, and economistic debates whilst avoiding the perils of either social or techno-determinism. Further engagement with insights from STS, as this chapter has shown, can enhance understanding of formal and informal patterns of ordering, organization, and management in activities enabled by novel technologies within, at, and beyond the borders of individual nation-states.
Acknowledgements

Drafts of this chapter were presented at the 2016 annual meeting of the Midwest Political Science Association, as well as at the 2017 meetings of the International Studies Association and the Pan-European Conference on International Relations. Julian Gruin, Tony Porter, Mark Salter, JP Singh, David Wolfe, and the editors of this volume provided careful comments and helpful suggestions that greatly improved this chapter. Any remaining errors or omissions are solely the responsibility of the author.
Notes
1 Including UK-based CoinDesk, The Economist, Financial Times as well as US-based Bloomberg, Forbes, Fortune, New York Times, and Wired.
2 Bitnation relies on the Pangea blockchain in building what it calls “The Internet of Sovereignty,” defined as a “Decentralized Opt-In Jurisdiction where Citizens can conduct peer-to-peer arbitration and create Nations” (Tempelhof et al., 2017).
3 See https://b3i.tech/about-us.html, https://entethalliance.org/members/ and www.r3.com/about/ (accessed 31 October 2018).
4 Such as malfunctioning ATMs that have left customers without access to their funds. For a comparison of bank technical problems with those of other industries see http://spectrum.ieee.org/static/the-staggering-impact-of-it-systems-gone-wrong (accessed 31 October 2018).
5 See for instance special issues edited by Amicelle et al. (2015) in Security Dialogue, as well as Firchow et al. (2017) in International Studies Perspectives.
6 Such as Choucri (2000), Deibert (2000), Keohane and Nye (1998).
7 Agora Voting was one system used by the Spanish party Podemos in 2014 primary elections (Frediani, 2014). Competitors include http://ethelo.org/, also https://heliosvoting.org/, and https://votem.com/ (accessed 31 October 2018).
8 A blockchain-based “smart” contract is coded to self-enact action(s) to be undertaken upon the fulfilment of pre-specified conditions. For instance, a financial payment is processed once a good or service is verified to have been received; or a dividend is issued when profit levels verifiably reach a pre-specified sum.
9 See for example www.expanse.tech/ as well as http://votewatcher.com/ (accessed 31 October 2018).
10 Despite remaining dwarfed by Bitcoin, Stellar has consistently been amongst the top five cryptocurrencies by market capitalization, see https://coinmarketcap.com/ (accessed 31 October 2018).
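The self-enacting logic that note 8 describes can be sketched in a few lines of Python-style code. This is a minimal, hypothetical illustration only: the names (EscrowContract, settle) are invented for this sketch, and an actual smart contract would be written in a platform language such as Solidity and enforced by a blockchain’s consensus rules rather than by a single program.

    class EscrowContract:
        """Toy model of note 8: an action self-enacts once a
        pre-specified condition is verifiably fulfilled."""

        def __init__(self, buyer: str, seller: str, amount: float):
            self.buyer = buyer        # party whose funds are locked in
            self.seller = seller      # party paid upon fulfilment
            self.amount = amount      # payment fixed at contract creation
            self.delivered = False    # the pre-specified condition
            self.settled = False

        def confirm_delivery(self, attestation: bool) -> None:
            # On an actual blockchain an external "oracle" would attest
            # that the good or service was received; here it is a flag.
            self.delivered = attestation

        def settle(self) -> str:
            # The self-enacting step: payment is processed if and only
            # if the condition holds, without intermediary discretion.
            if self.delivered and not self.settled:
                self.settled = True
                return f"transfer {self.amount} from {self.buyer} to {self.seller}"
            return "condition not met; no transfer"

    contract = EscrowContract("buyer", "seller", 100.0)
    contract.confirm_delivery(True)
    print(contract.settle())  # transfer 100.0 from buyer to seller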
References
Akaev A and Pantin V (2014) Technological Innovations and Future Shifts in International Politics. International Studies Quarterly 58(4): 867–872.
Ali R, Barrdear J, Clews R and Southgate J (2014) Innovations in Payment Technologies and the Emergence of Digital Currencies. Bank of England Quarterly Bulletin 2014 Q3. Available at https://ssrn.com/abstract=2499397 (accessed 31 October 2018).
Amicelle A, Aradau C and Jeandesboz J (2015) Questioning Security Devices: Performativity, Resistance, Politics. Security Dialogue 46(4): 293–306.
Ammous S (2015) Economics Beyond Financial Intermediation: Digital Currencies’ Possibilities for Growth, Poverty Alleviation, and International Development. Journal of Private Enterprise 30(3): 19–50.
Ammous S (2018) The Bitcoin Standard: The Decentralized Alternative to Central Banking. Hoboken, New Jersey: John Wiley & Sons.
Arnold M and Bullock N (2015) Nasdaq Claims to Break Ground with Blockchain-Based Share Sale. Financial Times, 30 December. Available at www.ft.com/content/eab49cc4-af18-11e5-b955-1a1d298b6250 (accessed 31 October 2018).
Atzori M (2017) Blockchain Technology and Decentralized Governance: Is the State Still Necessary? Journal of Governance and Regulation 6(1): 45–62.
Baker A (2009) Deliberative Equality and the Transgovernmental Politics of the Global Financial Architecture. Global Governance 15(2): 195–218.
Baldwin H (2015) UK to Lead on Big Data Research. Her Majesty’s Treasury, 14 October. Available at www.gov.uk/government/speeches/uk-to-lead-on-big-data-research-says-harriett-baldwin (accessed 31 October 2018).
Barnett M and Duvall R (2005) Power in Global Governance. Cambridge: Cambridge University Press.
Barrdear J and Kumhof M (2016) The Macroeconomics of Central Bank Issued Digital Currencies. Bank of England Working Paper No. 605. Available at www.bankofengland.co.uk/-/media/boe/files/working-paper/2016/the-macroeconomics-of-central-bank-issued-digital-currencies.pdf?la=en&hash=341B602838707E5D6FC26884588C912A721B1DC1 (accessed 31 October 2018).
Beetham D (1991) The Legitimation of Power. Basingstoke: Palgrave Macmillan.
Birkland T A (2006) Agenda Setting in Public Policy. In Fischer F & Miller G J (eds.) Handbook of Public Policy Analysis. London/New York: Routledge, 89–104.
Bobeldijk Y (2017) UK Regulator: Public Must Beware the Risks of Bitcoin. Financial News London, 19 June. Available at www.fnlondon.com/articles/uk-regulator-public-must-beware-the-risks-of-bitcoin-20170619 (accessed 31 October 2018).
Boucher P (2016) What if Blockchain Technology Revolutionised Voting? European Parliamentary Research Service, PE 581.918. Available at www.europarl.europa.eu/RegData/etudes/ATAG/2016/581918/EPRS_ATA(2016)581918_EN.pdf (accessed 31 October 2018).
Bourricaud F (1987) Legitimacy and Legitimization. Current Sociology 35(2): 57–67.
Breznitz D (2012) Ideas, Structure, State Action and Economic Growth: Rethinking the Irish Miracle. Review of International Political Economy 19(1): 87–113.
Brooks D (2017) How Evil Is Tech? The New York Times, 20 November. Available at www.nytimes.com/2017/11/20/opinion/how-evil-is-tech.html (accessed 31 October 2018).
Campbell-Verduyn M (2017) Capturing the Moment? Crisis, Market Accountability, and the Limits of Legitimation. New Political Science 39(3): 350–368.
Campbell-Verduyn M (2018) Bitcoin, Crypto-Coins, and Global Anti-Money Laundering Governance. Crime, Law and Social Change 69(2): 283–305.
Campbell-Verduyn M and Goguen M (2018) Blockchains, Trust and Action Nets: Extending the Pathologies of Financial Globalization. Global Networks, online first: https://doi.org/10.1111/glob.12214.
Campbell-Verduyn M, Goguen M and Porter T (forthcoming) Finding Fault Lines in the Long Chains of Financial Information. Review of International Political Economy.
Campbell-Verduyn M and Huetten M (2018) Better Living through Technology? Scandal and Responsibility in Blockchain-Based Finance. Paper presented at European Workshops in International Studies, Groningen, The Netherlands, 8 June.
Carstens A (2018) Money in the Digital Age: What Role for Central Banks? Bank for International Settlements, Lecture at the House of Finance, Goethe University Frankfurt, 6 February. Available at www.bis.org/speeches/sp180206.htm (accessed 31 October 2018).
Casey M and Forde B (2016) How the Blockchain Will Enable Self-Service Government. Wired, 5 January. Available at www.wired.co.uk/article/blockchain-is-the-new-signature (accessed 31 October 2018).
Champagne P (2014) The Book of Satoshi: The Collected Writings of Bitcoin Creator Satoshi Nakamoto. United States of America: E53 Publishing.
Choucri N (2000) Introduction: CyberPolitics in International Relations. International Political Science Review 21(3): 243–263.
Choucri N (2012) Cyberpolitics in International Relations. Cambridge: MIT Press.
Committee on Payments and Market Infrastructures (2015) Digital Currencies. Bank for International Settlements, November 2015. Available at www.bis.org/cpmi/publ/d137.pdf (accessed 31 October 2018).
Cookson R (2016) NHS Urged to Adopt Bitcoin Database Technology. Financial Times, 19 January. Available at www.ft.com/content/c4bad1ec-bea3-11e5-846f-79b0e3d20eaf (accessed 31 October 2018).
Crouch C (2011) The Strange Non-Death of Neoliberalism. Cambridge: Polity Press.
Cutler C A, Haufler V and Porter T (1999) Private Authority and International Affairs. In Cutler C A, Haufler V & Porter T (eds.) Private Authority and International Affairs. New York: SUNY Press, 3–28.
Cyber Intelligence Section and Criminal Intelligence Section (2012) Bitcoin Virtual Currency: Unique Features Present Distinct Challenges for Deterring Illicit Activity. Federal Bureau of Investigation, 24 April. Available at www.wired.com/images_blogs/threatlevel/2012/05/Bitcoin-FBI.pdf (accessed 31 October 2018).
Davidshofer S, Jeandesboz J and Ragazzi F (2017) Technology and Security Practices: Situating the Technological Imperative. In Basaran T, Bigo D, Guittet E-P & Walker R B J (eds.) International Political Sociology: Transversal Lines. Milton Park/New York: Routledge, 205–227.
Deibert R J (2000) International Plug’n Play? Citizen Activism, the Internet, and Global Public Policy. International Studies Perspectives 1(3): 255–272.
Del Castillo M (2016) Libertarian Party of Texas to Store Election Results on Three Blockchains. CoinDesk, 8 April. Available at www.coindesk.com/libertarian-party-texas-logs-votes-presidential-electors-blockchain/ (accessed 31 October 2018).
Del Castillo M (2018) IBM to Use Stellar for Its First Crypto-Token on a Public Blockchain. Forbes, 15 May. Available at www.forbes.com/sites/michaeldelcastillo/2018/05/15/ibm-to-use-stellar-for-its-first-crypto-token-on-a-public-blockchain/#6b72e4502001 (accessed 31 October 2018).
DeMarinis R, Uustalu H and Voss F (2017) Is Blockchain the Answer to E-Voting? Nasdaq Believes So. NASDAQ, 23 January. Available at https://business.nasdaq.com/marketinsite/2017/Is-Blockchain-the-Answer-to-E-voting-Nasdaq-Believes-So.html (accessed 31 October 2018).
DeNardis L (2014) The Global War for Internet Governance. New Haven: Yale University Press.
De Filippi P and Wright A (2018) Blockchain and the Law: The Rule of Code. Cambridge: Harvard University Press.
DuPont Q and Maurer B (2015) Ledgers and Law in the Blockchain. Kings Review, 23 June. Available at http://kingsreview.co.uk/articles/ledgers-and-law-in-the-blockchain/ (accessed 31 October 2018).
Eckert D and Zschäpitz H (2017) Bundesbank Warnt Vor Internet-Währung Bitcoin. Welt, 7 May. Available at www.welt.de/finanzen/article164309456/Bundesbank-warnt-vor-Internet-Waehrung-Bitcoin.html (accessed 31 October 2018).
The Economist (2015) The Trust Machine. The Economist, 31 October. Available at www.economist.com/leaders/2015/10/31/the-trust-machine (accessed 31 October 2018).
Einsiedel E F (2009) Making Sense of Emerging Technologies. In Einsiedel E F (ed.) Emerging Technologies: From Hindsight to Foresight. Vancouver: UBC Press, 3–10.
Engle E (2015) Is Bitcoin Rat Poison: Cryptocurrency, Crime, and Counterfeiting (CCC). Journal of High Technology Law 16(2): 340–393.
European Central Bank (2016) Opinion of the European Central Bank, 7 September. Available at www.ecb.europa.eu/ecb/legal/pdf/en_con_2016_43_f_sign.pdf (accessed 31 October 2018).
Eyers J (2015) Why the Blockchain Will Propel a Services Revolution. Australian Financial Review, 14 December. Available at www.afr.com/technology/why-the-blockchain-will-propel-a-services-revolution-20151212-glm6xf (accessed 31 October 2018).
Farrell H (2016) Bitcoin Is Losing the Midas Touch. Financial Times, 9 March. Available at www.ft.com/content/12e155dc-e5e4-11e5-a09b-1f8b0d268c39 (accessed 31 October 2018).
Fernholz T (2015) Terrorism Finance Trackers Worry ISIS Already Using Bitcoin. Defense One 13.
Financial Conduct Authority (2015) Regulatory Sandbox. London, November.
Financial Times, 12 April. Available at www.ft.com/content/8090cc80-fff6-11e5-99cb-83242733f755 (accessed 31 October 2018).
Financial Services Authority (2009) The Turner Review: A Regulatory Response to the Global Banking Crisis. Available at www.fsa.gov.uk/pubs/other/turner_review.pdf (accessed 31 October 2018).
Firchow P, Martin-Shields C, Omer A and Mac Ginty R (2017) PeaceTech: The Liminal Spaces of Digital Technology in Peacebuilding. International Studies Perspectives 18(1): 4–42.
Frediani C (2014) How Tech-Savvy Podemos Became One of Spain’s Most Popular Parties in 100 Days. Techpresident, 11 August 2014. Available at http://techpresident.com/news/wegov/25235/how-tech-savvy-podemos-became-one-spain%E2%80%99s-most-popular-parties-100-days (accessed 31 October 2018).
Fritsch S (2014) Conceptualizing the Ambivalent Role of Technology in International Relations: Between Systemic Change and Continuity. In Mayer M, Carpes M & Knoblich R (eds.) The Global Politics of Science and Technology - Vol. 1: Concepts from International Relations and Other Disciplines. Dordrecht: Springer, 115–138.
Garsten C and Sörbom A (2018) Discreet Power: How the World Economic Forum Shapes Market Agendas. Stanford: Stanford University Press.
Gerard D (2017) Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts. David Gerard.
Germain R (2010) Global Politics and Financial Governance. Basingstoke: Palgrave Macmillan.
Giancarlo C J (2016) Comment: With Blockchain, Regulators Should First Do No Harm.
Golumbia D (2015) Bitcoin as Politics: Distributed Right-Wing Extremism. In Lovink G, Tkacz N & de Vries P (eds.) MoneyLab Reader: An Intervention in Digital Economy. Amsterdam: Institute of Network Cultures, 117–131.
Gowan P (2009) Crisis in the Heartland. New Left Review 55(2): 5–29.
Grafstein R (1981) The Failure of Weber’s Conception of Legitimacy: Its Causes and Implications. The Journal of Politics 43(2): 456–472.
Greenwich Associates (2016) Wall Street Blockchain Investments Top $1 Billion Annually. 23 June. Available at www.greenwich.com/press-release/wall-street-blockchain-investments-top-1billion-annually-0 (accessed 31 October 2018).
Hackett R (2017) Big Business Giants from Microsoft to J.P. Morgan Are Getting behind Ethereum. Fortune, 28 February. Available at http://fortune.com/2017/02/28/ethereum-jpmorgan-microsoft-alliance/ (accessed 31 October 2018).
He D, Habermeier K, Leckow R, Haksar V, Almeida Y, Kashima M, Kyriakos-Saad N, Oura H, Sedik T S, Stetsenko N and Verdugo-Yepes C (2016) Virtual Currencies and Beyond: Initial Considerations. IMF Staff Discussion Note, SDN/16/03. Available at www.imf.org/external/pubs/ft/sdn/2016/sdn1603.pdf (accessed 31 October 2018).
Helleiner E (1994) States and the Reemergence of Global Finance: From Bretton Woods to the 1990s. Cornell: Cornell University Press.
Her Majesty’s Treasury (2015) Digital Currencies: Call for Information. 18 March. Available at www.gov.uk/government/consultations/digital-currencies-call-for-information (accessed 31 October 2018).
Herian R (2018) Taking Blockchain Seriously. Law and Critique 29(2): 163–171.
Higgins S (2015) Gates Foundation’s Kosta Peric on Blockchain Tech and the Unbanked. Coindesk, 18 July. Available at www.coindesk.com/gates-foundation-blockchain-financial-inclusion/ (accessed 31 October 2018).
Huillet M (2018) South Korea Legitimizes Blockchain Industry with Major New Classification Standards. Cointelegraph, 5 July. Available at https://cointelegraph.com/news/south-korea-legitimizes-blockchain-industry-with-major-new-classification-standards (accessed 31 October 2018).
Hutchby I (2001) Technologies, Texts and Affordances. Sociology 35(2): 441–456.
Hütten M (2019) The Soft Spot of Hard Code: Blockchain Technology, Network Governance, and Pitfalls of Technological Utopianism. Global Networks, early view: https://doi.org/10.1111/glob.12217.
Institute of International Finance (2015) Banking on the Blockchain: Reengineering the Financial Architecture, 16 November. Available at www.iif.com/system/files/blockchain_report_-_november_2015_-_final_0.pdf (accessed 31 October 2018).
Irrera A and McCrank J (2018) Wall Street Rethinks Blockchain Projects as Euphoria Meets Reality. Reuters, 27 March. Available at www.reuters.com/article/us-banks-fintech-blockchain/wall-street-rethinks-blockchain-projects-as-euphoria-meets-reality-idUSKBN1H32GO (accessed 31 October 2018).
Irwin A S M and Milad G (2016) The Use of Crypto-Currencies in Funding Violent Jihad. Journal of Money Laundering Control 19(4): 407–425.
Jasanoff S (2004a) Afterword. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 274–282.
Jasanoff S (2004b) The Idiom of Co-Production. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 1–12.
Johnson S and Kwak J (2010) 13 Bankers: The Wall Street Takeover and the Next Financial Meltdown. New York: Vintage Books.
Jones M D and McBeth M K (2010) A Narrative Policy Framework: Clear Enough to Be Wrong? Policy Studies Journal 38(2): 329–353.
Jopson B (2016) Regulators Say Bitcoin Poses ‘Financial Stability Risks’. Financial Times, 21 June. Available at www.ft.com/content/e0880cf6-3800-11e6-9a05-82a9b15a8ee7 (accessed 31 October 2018).
Jopson B and Wigglesworth R (2017) US Regulators Vow to Be on Guard for Bitcoin Risks. Financial Times, 14 December. Available at www.ft.com/content/64e6e7c8-e116-11e7-a8a4-0a1e63a52f9c (accessed 31 October 2018).
Kai J and Zhang F (2017) Between Liberalization and Prohibition: Prudent Enthusiasm and the Governance of Bitcoin/Blockchain Technology. In Campbell-Verduyn M (ed.) Bitcoin and Beyond: Cryptocurrencies, Blockchains, and Global Governance. London/New York: Routledge, 88–108.
Kaminska I (2017) Attack of the 50-Foot Blockchain, a Sceptic’s Guide to Crypto. Financial Times, 27 July. Available at https://ftalphaville.ft.com/2017/07/27/2191972/attack-of-the-50-foot-blockchain-a-sceptics-guide-to-crypto/ (accessed 31 October 2018).
Kaminska I and Tett G (2016) Blockchain Debate Eclipses Basel III at Davos. Financial Times, 21 January. Available at www.ft.com/content/156c4096-c055-11e5-9fdb-87b8d15baec2 (accessed 31 October 2018).
Karlstrøm H (2014) Do Libertarians Dream of Electric Coins? The Material Embeddedness of Bitcoin. Distinktion: Scandinavian Journal of Social Theory 15(1): 23–36.
Kaufmann S (2016) Security Through Technology? Logic, Ambivalence and Paradoxes of Technologised Security. European Journal for Security Research 1(1): 77–95.
Keohane R O and Nye J S Jr (1998) Power and Interdependence in the Information Age. Foreign Affairs 77: 81–94.
Knafo S (2013) The Making of Modern Finance: Liberal Governance and the Gold Standard. London/New York: Routledge.
Kramer H (2013) Western Union: Moving Money to Make Money. Forbes, 10 May. Available at www.forbes.com/sites/hilarykramer/2013/05/10/wu-stock-report/#227383947771 (accessed 31 October 2018).
Krieger L (1977) The Idea of Authority in the West. The American Historical Review 82(2): 249–270.
Krugman P (2013) Bitcoin Is Evil. The New York Times, 28 December. Available at https://krugman.blogs.nytimes.com/2013/12/28/bitcoin-is-evil/ (accessed 31 October 2018).
Lagarde C (2018) Addressing the Dark Side of the Crypto World. IMF Blog, 13 March. Available at https://blogs.imf.org/2018/03/13/addressing-the-dark-side-of-the-crypto-world/ (accessed 31 October 2018).
Lake D A (2010) Rightful Rules: Authority, Order, and the Foundations of Global Governance. International Studies Quarterly 54(3): 587–613.
Lall R (2012) From Failure to Failure: The Politics of International Banking Regulation. Review of International Political Economy 19(4): 609–638.
Leander A (2013) Technological Agency in the Co-Constitution of Legal Expertise and the US Drone Program. Leiden Journal of International Law 26(4): 811–831.
Leese M (2016) Exploring the Security/Facilitation Nexus: Foucault at the ‘Smart’ Border. Global Society 30(3): 412–429.
Macknight J (2016) Blockchain: Less Talk, More Action. The Banker, 1 April. Available at www.thebanker.com/Transactions-Technology/Trading/Blockchain-less-talk-more-action?ct=true (accessed 31 October 2018).
Mainelli M and von Gunten C (2014) Chain of a Lifetime: How Blockchain Technology Might Transform Personal Insurance. Long Finance. Available at http://archive.longfinance.net/images/Chain_Of_A_Lifetime_December2014.pdf (accessed 31 October 2018).
Manjikian M (2018) Social Construction of Technology: How Objects Acquire Meaning in Society. In McCarthy D R (ed.) Technology and World Politics: An Introduction. Milton Park/New York: Routledge, 25–41.
Maurer B (2016) Re-Risking in Realtime. On Possible Futures for Finance after the Blockchain. BEHEMOTH - A Journal on Civilisation 9(2): 82–96.
Mayer M and Acuto M (2015) The Global Governance of Large Technical Systems. Millennium - Journal of International Studies 43(2): 660–683.
Mayer M, Carpes M and Knoblich R (2014a) The Global Politics of Science and Technology: An Introduction. In Mayer M, Carpes M & Knoblich R (eds.) The Global Politics of Science and Technology - Vol. 1: Concepts from International Relations and Other Disciplines. Dordrecht: Springer, 1–35.
Mayer M, Carpes M and Knoblich R (2014b) A Toolbox for Studying the Global Politics of Science and Technology. In Mayer M, Carpes M & Knoblich R (eds.) The Global Politics of Science and Technology - Vol. 2: Perspectives, Cases and Methods. Dordrecht: Springer, 1–17.
McCarthy D R (2015) Power, Information Technology, and International Relations Theory: The Power and Politics of US Foreign Policy and the Internet. Basingstoke: Palgrave Macmillan.
McKeen-Edwards H and Porter T (2013) Transnational Financial Associations and the Governance of Global Finance: Assembling Wealth and Power. London/New York: Routledge.
Michaels D and Loder A (2018) SEC Pours Cold Water on Prospect of Bitcoin ETFs. The Wall Street Journal, 19 January. Available at www.wsj.com/articles/sec-rejects-idea-of-bitcoin-etfs-1516323558 (accessed 31 October 2018).
Milano A (2018) Korea’s Financial Regulator Wants to Use the Blockchain for Stock Trading. Coindesk, 2 August. Available at www.coindesk.com/koreas-financial-regulator-wants-to-use-the-blockchain-for-stock-trading/ (accessed 31 October 2018).
Mueller M L (2010) Networks and States: The Global Politics of Internet Governance. Cambridge: MIT Press.
Nakamoto S (2008) Bitcoin: A Peer-to-Peer Electronic Cash System. Bitcoin. Available at https://bitcoin.org/bitcoin.pdf (accessed 31 October 2018).
Narayanan A, Bonneau J, Felten E, Miller A and Goldfeder S (2016) Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton: Princeton University Press.
Nesvetailova A (2010) Financial Alchemy in Crisis: The Great Liquidity Illusion. London: Pluto.
Nian L P and Chuen D L K (2015) A Light Touch of Regulation for Virtual Currencies. In Chuen D L K (ed.) Handbook of Digital Currency: Bitcoin, Innovation, Financial Instruments, and Big Data. Amsterdam: Elsevier, 309–326.
Ogburn W F (ed.) (1949) Technology and International Relations. Chicago: University of Chicago Press.
Ontario Securities Commission (2017) OSC Highlights Potential Securities Law Requirements for Businesses Using Distributed Ledger Technologies, 8 March. Available at www.osc.gov.on.ca/en/NewsEvents_nr_20170308_osc-highlights-potential-securities-law-requirements.htm (accessed 31 October 2018).
Palan R (1997) Technological Metaphors and Theories of International Relations. In Farrands C, Talalay M & Tooze R (eds.) Technology, Culture and Competitiveness. London/New York: Routledge, 13–26.
Panetta K (2017) Top Trends in the Gartner Hype Cycle for Emerging Technologies. Gartner, 15 April.
Peters G B (2005) The Problem of Policy Problems. Journal of Comparative Policy Analysis 7(4): 349–370.
Plassaras N A (2013) Regulating Digital Currencies: Bringing Bitcoin within the Reach of the IMF. Chicago Journal of International Law 14(1): 377–407.
Popper N (2015) Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money. New York: HarperCollins Publishers.
Porter T (2001) The Democratic Deficit in the Institutional Arrangements for Regulating Global Finance. Global Governance 7(4): 427–439.
Porter T (2003) Technical Collaboration and Political Conflict in the Emerging Regime for International Financial Regulation. Review of International Political Economy 10(3): 520–551.
PwC (2015) Money Is No Object: Understanding the Evolving Cryptocurrency Market. Available at www.pwc.com/us/en/financial-services/publications/assets/pwc-cryptocurrency-evolution.pdf (accessed 31 October 2018).
Ralph O (2017) Insurers Battle for Blockchain. Financial Times, 27 November. Available at https://amp-ft-com.cdn.ampproject.org/c/s/amp.ft.com/content/95027f26-d437-11e7-8c9a-d9c0a5c8d5c9 (accessed 31 October 2018).
Reid J (2009) Politicizing Connectivity: Beyond the Biopolitics of Information Technology in International Relations. Cambridge Review of International Affairs 22(4): 607–623.
Rethel L and Sinclair T J (2012) The Problem with Banks. London: Zed Books.
Reus-Smit C (2007) International Crises of Legitimacy. International Politics 44(2–3): 157–174.
Risse T and Kleine M (2007) Assessing the Legitimacy of the EU’s Treaty Revision Methods. Journal of Common Market Studies 45(1): 69–80.
Rodima-Taylor D and Grimes W (2019) Virtualizing Diasporas: Blockchain Technologies in the New Transnational Space. Global Networks, early view: https://doi.org/10.1111/glob.12221.
Roger C and Dauvergne P (2016) The Rise of Transnational Governance as a Field of Study. International Studies Review 18(3): 415–437.
Rotolo D, Hicks D and Martin B R (2015) What Is an Emerging Technology? Research Policy 44(10): 1827–1843.
Sassen S (2008) Territory, Authority, Rights: From Medieval to Global Assemblages. Princeton: Princeton University Press.
Schmidt V A (2013) Democracy and Legitimacy in the European Union Revisited: Input, Output and ‘Throughput’. Political Studies 61(1): 2–22.
Schmidt V A and Thatcher M (2013) Resilient Liberalism in Europe’s Political Economy. Cambridge: Cambridge University Press.
Schwab K (2017) The Fourth Industrial Revolution. Geneva: World Economic Forum.
Scott B (2016) How Can Cryptocurrency and Blockchain Technology Play a Role in Building Social and Solidarity Finance? United Nations Research Institute for Social Development Working Paper, No. 2016-1.
Seabrooke L and Tsingou E (2010) Responding to the Global Credit Crisis: The Politics of Financial Reform. The British Journal of Politics & International Relations 12(2): 313–323.
Shubber K (2016) Banks Find Blockchain Hard to Put into Practice. Financial Times, 12 September. Available at www.ft.com/content/0288caea-7382-11e6-bf48-b372cdb1043a (accessed 31 October 2018).
Singh J P (2013) Information Technologies, Meta-Power, and Transformations in Global Politics. International Studies Review 15(1): 5–29.
Skolnikoff E B (1993) The Elusive Transformation: Science, Technology, and the Evolution of International Politics. Princeton: Princeton University Press.
Soltas E (2013) Bitcoin Really Is an Existential Threat to the Modern Liberal State. Bloomberg, 5 April. Available at www.bloomberg.com/view/articles/2013-04-05/bitcoin-really-is-an-existential-threat-to-the-modern-liberal-state (accessed 31 October 2018).
Sparkes M (2014) The Coming Digital Anarchy. The Telegraph, 9 June. Available at www.telegraph.co.uk/technology/news/10881213/The-coming-digital-anarchy.html (accessed 31 October 2018).
Spoke M (2015) How Blockchain Tech Will Change Auditing for Good. Coindesk, 11 July. Available at www.coindesk.com/blockchains-and-the-future-of-audit/ (accessed 31 October 2018).
Stafford P and Murphy H (2016) Has the Blockchain Hype Finally Peaked? Financial Times, 29 November. Available at www.ft.com/content/5e48f9ec-b651-11e6-ba85-95d1533d9a62 (accessed 31 October 2018).
Swan M (2015) Blockchain: Blueprint for a New Economy. Sebastopol: O’Reilly Media.
Tapscott A (2016a) Blockchain Democracy: Government of the People, by the People, for the People. Forbes, 16 August. Available at www.forbes.com/sites/alextapscott/2016/08/16/blockchain-democracy-government-of-the-people-by-the-people-for-the-people/#37afd8524434 (accessed 31 October 2018).
Tapscott A and Tapscott D (2016) Blockchain Revolution: How the Technology behind Bitcoin Is Changing Money, Business, and the World. London: Penguin Books.
Tapscott D (2016b) Blockchain Revolution: Is the Future of Business a Company without Workers, Managers, or a CEO? Quartz, 31 May. Available at https://qz.com/695499/is-the-future-of-business-a-company-without-workers-managers-or-a-ceo/ (accessed 31 October 2018).
Tempelhof S, Teissonniere E, Tempelhof J and Edwards D (2017) Pangea Jurisdiction and Pangea Arbitration Token: The Internet of Sovereignty. Bitnation. Available at https://tse.bitnation.co/documents/ (accessed 31 October 2018).
Thompson W R (1990) Long Waves, Technological Innovation, and Relative Decline. International Organization 44(2): 201–233.
Tsingou E (2015) Club Governance and the Making of Global Financial Rules. Review of International Political Economy 22(2): 225–256.
Underhill G R D (2015) The Emerging Post-Crisis Financial Architecture: The Path-Dependency of Ideational Adverse Selection. The British Journal of Politics and International Relations 17(3): 461–493.
United Nations (2009) UN Commission of Experts on Reforms of the International Monetary and Financial Systems, Recommendations. New York: United Nations.
United Nations High Commissioner for Refugees (2018) Blockchain Digital Identity to Deliver International Aid to Syrian Refugees. Available at www.unhcr.org/withrefugees/map-location/blockchain-digital-identity-deliver-international-aid-syrian-refugees/?mpfy_map=885 (accessed 31 October 2018).
Walch A (2017) Blockchain’s Treacherous Vocabulary: One More Challenge for Regulators. Journal of Internet Law 21(2): 1–16.
Walport M (2016) Distributed Ledger Technology: Beyond Block Chain. Report of the UK Government Chief Scientific Adviser, Government Office for Science.
The Warwick Commission (2009) The Warwick Commission on International Financial Reform: In Praise of Unlevel Playing Fields. Coventry: University of Warwick.
Waters R, Hook L and Bradshaw T (2015) Big Tech Back in Vogue on Wall Street. Financial Times, 23 October. Available at www.ft.com/content/594989b4-795f-11e5-933d-efcdc3c11c89 (accessed 31 October 2018).
Watson M (2018) The Market. Newcastle: Agenda Publishing.
Webb J (2015) IOSCO Chief Says Block Chain Could Revolutionize Market-Data Transfers. SNL Financial, 4 December. Available at www.automatedtrader.net/headlines/154990/iosco-chief-says-block-chain-could-revolutionize-market_data–transfers-_-snl-financial (accessed 31 October 2018).
Weiss T G and Wilkinson R (2014) Rethinking Global Governance? Complexity, Authority, Power, Change. International Studies Quarterly 58(1): 207–215.
Wellman B, Quan-Haase A, Boase J, Chen W, Hampton K, Díaz I and Miyata K (2003) The Social Affordances of the Internet for Networked Individualism. Journal of Computer-Mediated Communication 8(3): n.p. Available at https://academic.oup.com/jcmc/issue/8/3.
Wesseling M, de Goede M and Amoore L (2012) Data Wars beyond Surveillance: Opening the Black Box of SWIFT. Journal of Cultural Economy 5(1): 49–66.
Wieczner J (2018) IBM Is Working with a ‘Crypto Dollar’ Stablecoin. Fortune, 17 July. Available at http://fortune.com/2018/07/17/ibm-stablecoin-cryptocurrency-stellar/ (accessed 31 October 2018).
Wild J (2015) Blockchain Believers Seek to Shake up Financial Services. Financial Times, 14 December. Available at www.ft.com/content/efa10418-9747-11e5-9228-87e603d47bdc (accessed 31 October 2018).
Winner L (1980) Do Artifacts Have Politics? Daedalus 109(1): 121–136.
Winner L (1986) The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.
Woll C (2014) The Power of Inaction: Bank Bailouts in Comparison. Ithaca: Cornell University Press.
Wright A and De Filippi P (2015) Decentralized Blockchain Technology and the Rise of Lex Cryptographia. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2580664 (accessed 31 October 2018).
Yeoh P (2017) Regulatory Issues in Blockchain Technology. Journal of Financial Regulation and Compliance 25(2): 196–208.
Youngs G (2007) Global Political Economy in the Information Age: Power and Inequality. London/New York: Routledge.
7
Who connects the dots? Agents and agency in predictive policing
Mareile Kaufmann
Who acts? A journey to the center of humanities

Singularity (Kurzweil, 2005), tech addiction (Kleinman, 2015), fake news (Tufekci, 2018), echo chambers (Barberá et al., 2015), algorithmic bias (Miller, 2015), superintelligence (Bostrom, 2014): as different as they seem at first, these are some of the many signal words in the language of popular science literature and tech news that express that technology is no longer just a means to an end. Technology is not simply a material solution designed by humans for problems identified by humans; it is increasingly discussed as matter that matters. The fact that digital technology is leaving its traces on human behavior and social life is no longer an insight highlighted only by intellectual niches (cf. early writings on technology and society by MacKenzie and Wajcman, 1985; Bijker et al., 1987). The above concepts illustrate that the discussion of technology’s intended and unintended workings in society has made its way into mainstream news coverage, political debate, and dedicated research projects. Such a focus on the way in which technology changes society implies an appreciation of technology and the idea that technology itself has risen to be an important actor in most of today’s settings, whether in news reporting, crime control, or nature conservation. Yet not everyone who writes about, studies, or deals with technology in nature and culture explicitly acknowledges the character of technology’s agency. In fact, most commentators stick to the more careful vocabulary of technology’s effects or its consequences for society, which also chimes with the attempt to find solutions for related challenges within technology itself. This is one of the reasons why a more profound engagement with the matter of technology, the way in which we understand it, and the role it plays within social, cultural, and natural settings is due.

In this chapter, I want to draw attention to the collaborative processes and the many kinds of agencies involved in predictive analytics, more specifically predictive policing. Based on an empirical study of seven prediction software models and 11 interviews1 with experts, police staff, software developers, and programmers,2 I want to sketch out an answer to the question: who
connects the dots in technology-supported predictive policing? Even though the sources of my empirical study, i.e. informants and actual software models, vary in terms of orientation and national settings, the questions and issues that emerge when they collaborate, build, and implement prediction algorithms for policing are comparable. At a more general level, I want to explore how agency comes about and can be conceptualized, especially in contexts in which technology moves center stage.

The context in which I look at these trends appears to be a rather localized one: in predictive policing, officers use data-driven analyses to prioritize crimes, hotspots, and offender groups in their districts. Policing with crime maps may be nothing new (Chamard, 2006), but it is a field subject to rapid innovation as software begins to manage the various aspects of policing and algorithms co-produce the actual predictions (Chan, 2003; Crawford, 2006). Despite the outspoken focus on the local (the development of prediction algorithms for crime situations in specific cities is a case in point), the more overarching, international dimensions of these practices are evident. Predictive policing software stands for a shift from after-the-fact to rule-based law enforcement (Hildebrandt, 2016b) that prioritizes pattern detection as a mode of understanding the world (Kaufmann, 2018). Policing software and crime prediction algorithms contribute to the making of profiles and, based on these, produce “digital prophecies” (Esposito, 2013) or automated recommendations. The automation of recommendations, in turn, has begun to re-determine global fields as diverse as media usage, commercial practices, and political decision-making. In that sense, predictive policing illustrates one aspect of a development that Sheptycki (2007: 391) recognized as early as 2007, namely that “issues (…) of crime control have become central to the transnational condition.” Or, to put it differently, the automation of control that found its way into police practice is not so different from control technologies in other societal areas. Further, predictive policing is an instance of a more global trend towards private (and increasingly digital) security activities that cut across disciplines as different as criminology and International Relations (IR) (Abrahamsen and Williams, 2010: 12f.). Research in the field of security especially, so some scholars argue, would profit from a more “in-depth conversation between International Relations (IR) and criminology” to better grasp differentiated “practices of (in)security (…) that are nevertheless connected along a Mobius strip” (Bigo, 2016: 1068). This interdisciplinary perspective is indeed the vantage point for this chapter. Most importantly, the case of predictive policing software exemplifies how human and non-human factors and forms of arguing collaborate in the process of decision-making. As such, the study of these technologies places emphasis on the very local and specific relations that we need to analyze in order to conceptualize the agency of technology and understand the more global trends of algorithmic security at work.

Conceptualizing the role of technological agency is not a simple task. Some have claimed that it leads us to nothing less than questions about life
and death. Intelligent technologies, in particular, make us (re-)consider whether they eventually emancipate us from our bodies, our minds, our decision-making, or from biology altogether (cf. O’Connell, 2017). One does not have to follow new materialist perspectives all the way to the transhumanist acme (that is, the deliverance from our bodies and the uploading of our minds into machines) to theorize the relationship between technology and society, politics, and IR. Yet, the question as to whether technology has agency will at least prompt us to re-think the neat world of binary categories (for example, of structure vs. agency). It blurs the boundaries between matter and that which it represents, and it moves us from readily available and generated matter to generative and creative matter (van der Tuin and Dolphijn, 2010). Ultimately, technological agency and its related premises de-center human cognition. Drawing the focus away from anthropogenic agency does shake a few foundations of the humanities. It challenges our understandings of sociality and the political. If matter can be an agent, then technology is not a static object or a mere means in the hands of humans, but processual in nature. Some new materialists would claim that matter has self-organizing capacities (van der Tuin and Dolphijn, 2010) – an assumption that is unexpectedly reflected in public policy, for example in the European Network and Information Security Agency’s (ENISA) portrayal of the Internet as an “interconnection ecosystem” (ENISA, 2011). Understood as an active force, matter is not only sculpted by social worlds, human life, and experience, but also co-productive in conditioning and enabling them. As a result of its formative impetus (van der Tuin and Dolphijn, 2010), matter or technology would then be part of shaping nature and culture, the social and the political. Such an analytical starting point also requires a specific set of methodological commitments.
Studying the life of technologies

To study the everyday life of objects as well as the social and political dimensions of technology requires a view that is broader and more transversal than a view of the object alone. Studying agency means studying transformation and changes, emergences and development. Such a dynamism is difficult to capture without either focusing on the genealogy of the object or on the web of relations in which it is situated. In the following, I will present a few central approaches and methodologies to trace non-human agency.

The agency of technologies can be studied by using a concrete object, for example a specific digital technology, as a conceptual starting point to venture into its surroundings and trace the networks it is situated in. Already in the 1950s and 1960s, Gibson (1966) suggested that objects, and the way in which they are perceived, afford specific actions over others. According to his theory of affordances, the shape, or Gestalt, of objects gives opportunities to perform some actions with them, but not others, which eventually grants them a certain ability to act.
Much later, Voelkner (2013) offers a slightly different perspective on using the actual object as a starting point to explore its agency: she describes how objects and the many dimensions, decisions, and developments they incorporate can “give form” to a phenomenon. This perspective is loosely based on new materialist ideas: objects not only capture the politics, history, and the many representations ascribed to them, but they do so in a dynamic way. Objects are generative of meaning and representation (van der Tuin and Dolphijn, 2010). Not only do they generate meaning constantly, but Bennett (2010) suggests that objects (in our case, digital technologies) actually act quite concretely: they shut down, shape, initiate, burn, etc. Objects, she deduces, have constructive and destructive thing-power (Bennett, 2010).

Actor-Network Theory (cf. Callon, 1991; Latour, 2005) focuses even more on the web of relations that objects incorporate. The study of such relations developed over time into an approach to theorize the agency of objects. Actor-Network Theory argues that to appreciate the meaning and agency of objects and subjects we need to look at the networks in which they are situated. It is only through a focus on relations that we can understand how agency comes about and how humans and technologies affect and act on each other, what kind of acts they bring about, and what the meaning of these acts is. Objects are then “actants” that mediate between human actors and systems (cf. Mutlu, 2013). The expression of mediation, however, can be misleading as it may suggest reducing the role of objects to the position in-between rather than placing them at the center of a network, where they can very well be situated.

A related approach to grasping the agency of objects in a more transversal way is to follow and trace their dynamics through “open-ended assemblages that are always in the process of (un-)becoming, absorbing, discarding, and transforming disparate human and nonhuman elements” (Voelkner, 2013: 204; summarizing Bennett, 2005). Even though Voelkner describes assemblages in her own work on human (in)security as “circumstantial, unstable, and unpredictable” (Voelkner, 2013: 204), it is equally possible to trace the agency of objects within assemblages that are stable in the sense that these assemblages reproduce themselves and therewith the agency of the objects placed within them. In both cases, using assemblages to trace relations between objects or devices, practices, humans, societies, and discourses allows for a deeper understanding of the complexity of a phenomenon. The methodology of assemblages then situates the object and its agency inside a network at the same time as this network can document variability and flows (Voelkner, 2013).

Methodically, studying the participation of an object within a given context requires immersive and in-depth studies. One literally has to follow the object in order to trace its workings. Such mapping exercises can include historical developments to document the life stories of objects, as well as participant observation (Mutlu, 2013) and other (digital) cartographic methods. Here, it is crucial to be aware of the way in which we as researchers and the instruments
we use affect this very practice of tracing. van der Tuin and Dolphijn (2010) remind us that in fact all of the involved – that is, the observer, the observed, and the observing instruments – are agential. They are active parts of the assemblage that is traced. The methodological commitment to mapping then not only requires the active use of reflexivity in the sense of an explanation as to from where a researcher maps a given object and its workings. It also means that the observer, the observed, and the observing instruments can cause new emergences. Such a methodology necessarily leads away from static categorizations and classifications towards mappings and cartographies, as only they can capture dynamic developments and translate this dynamism into theory formation. Cartographies of agency aim to transcend narratives, to trace actualizations, adaptations, mutations, co-constitutions, connections and connectivities, in-betweens, as well as the multiplicity of flows, rather than fixed grids (van der Tuin and Dolphijn, 2010). I will now proceed to explain the standpoint from which I started my cartography of prediction algorithms and the methodological flexibility that I needed to do this mapping exercise.

Surprises in my research on the life of prediction algorithms

When I started studying predictive policing software and their algorithms, I was interested in the way in which they influence the police’s understanding of crime. I wanted to trace if and how algorithms co-constitute the way in which police officers react to crime and even what they consider a crime. Since I entered this analytic project knowing studies about algorithmic governance (e.g., Amoore, 2009; Amoore and Piotukh, 2016), the challenges of predictive policing at large (e.g., Harcourt, 2007), and big data policing (Chan and Bennett Moses, 2016), as well as the broader literature on preemption, prevention, precaution, and preparedness (e.g., Beck, 1986; Ewald, 2002; O’Malley, 2009), I was inspired to study prediction algorithms without necessarily conducting yet another discursive analysis of temporality and the politics of “pre.” The plan was to start a project that would give me in-depth insight into how matter – namely policing algorithms – works and whether these material entities would change and reformulate police work and the police’s understanding of crime. I started out studying the actual software products, but soon realized that this would not be sufficient to find out and map how algorithms act in the context of policing. It was more important to trace the life of the algorithm: I needed to understand how these algorithms come about, who is part of writing them, what kind of data is fed into them, how algorithms are received and implemented by police officers, how exactly the software products work and present findings, and how these findings would eventually influence police decision-making. Even though participant observation would have been an ideal way of studying such questions, I figured that observing the software models in action would involve traveling to three continents and at least five different countries. Instead, I chose to conduct in-depth interviews with software
developers, programmers, police officers, and experts in the field. They could answer my questions about how they would collaborate in writing an algorithm, who would translate assumptions about crime into variables, which data was used to train the algorithm, how software owners would introduce the algorithms in police stations (sometimes via long-term collaboration), and how police officers would use and interpret the results and take decisions based on them. Together with an insight into the actual software models, I could begin to trace the agencies involved in this process. While I originally wanted to focus on the agency of the algorithms, and on that alone, the net of relations around the algorithms almost forced itself into my analysis. It was impossible to map the agency of the algorithm without understanding the workings of data, computing practices, attitudes to and histories of technology in police work, and much more. Most importantly, I could not take human agency out of the equation. I had to understand and map how humans, technologies, other objects, and their surroundings truly collaborate and constitute each other in the context of technology-based predictive policing. In short, inspired by the material turn in critical security studies, I expected to find and write mainly about the agency of algorithms. But eventually, this mapping exercise taught me to appreciate the role of human agency in relation to the agency of objects.
Algorithms as digital detectives? An analysis of agency in algorithmic predictive policing
INTERVIEWER: "Will people be out of the loop in creating predictions?"
INTERVIEWEE K: "Well, someone needs to push that 'On' button."
The general premise for prediction software technologies to be developed in the first place is that they will make a difference in policing. In terms of the change that predictive policing software would bring, most interviewees mentioned the expected growth in efficiency and effectiveness of police work (Int. C, E, G, I, K).3 For some interviewees this was limited to the idea that the software would assist police officers in making "a better guess" (Int. K) as to where and when to place police staff. The expectation of others was much higher. They wouldn't, for example, rule out that the sheer calculatory power of computers would eventually "outperform a skilled police officer" (Int. B) in strategic planning. These divergent anticipations illustrate the many roles that software or algorithms can play in predictive policing. They also give us a first glimpse into the differing ideas about algorithmic agency in the predictive policing assemblage. If predictive policing is all about connecting the dots within vast amounts of data in order to recognize crime patterns, the rise of prediction software prompts a few questions: if algorithms search information for particular constellations of parameters, who actually connects the dots? Who detects
Who connects the dots? 147 the pattern? And who even makes data dots appear in the first place? In this context, the popularized fear of data skeptics is that intelligent algorithms may "connect the dots without any human analyst oversight" (Lindsey, 2018: n.p.). In fact, such statements are uttered by skeptics and optimists alike. They imply the idea that either human or algorithmic intelligence may be better or worse for ensuring the identification of the right results – whether these are the correct results, the politically correct results, or the efficient results. However, if one describes agency via assemblages or networks, then it will become clear that there is no such thing as either/or. There is not just one kind of agency or one kind of oversight within predictive policing practices. Where and when different agencies emerge and unfold may be best told in the form of the life story of the algorithm. Even though such a story is not necessarily linear – as it really consists of multiple stories that intersect and change over time – the scaffolding of a life story4 may be helpful to illustrate the many points at which the agency of humans and technology occurs in the process of predictive policing.
Pre-conceptions: preparing (inputs) for the algorithm to be conceived
The computation of crime does not start with the birth of the algorithm. Computing crime ties in with a long-standing history of police bureaucratization, the use of technology within the police, and the many reforms that sought to increase police efficiency (cf. Wilson, 2006). Eventually, the computation of crime intersected with the larger societal shifts from analogue to digital computational means (Int. H; Wilson, 2018a, 2018b). Keeping this broader historical context of predictive policing in mind, this chapter jumps straight to more recent efforts in digital computation. Nonetheless, the simplified life story that I assembled from the insights gained during my project equally begins before the algorithm. It starts with an intention. Every algorithm is developed for a specific purpose. A software developer explains: "Before we even start looking at the data we have to start working with the stakeholders to find out what it is they want to forecast. They decide that" (Int. I). The question of purpose is tightly entangled with the kind of ingredients or information that software developers would use to build the algorithm. This is an early phase that includes data collection and editing as well as the cleaning or pre-processing of databases to help build and fine-tune the algorithm at a later stage. In short, "without historic data you can't train an algorithm" (Int. B; similar point mentioned by Int. I). At this stage, agency already appears in multiple shapes: the question of how information – the smallest unit of what will eventually make up the algorithm's body – is understood, collected, cleaned, assembled, and translated involves many human and non-human actors. Together, they determine how the algorithm will be implemented and what the algorithm can find.
148 Mareile Kaufmann How information is thought of and conceptualized already makes a critical difference for the algorithm to be conceived. Some software developers plan their algorithm with the assumption that data collection should be "opportunistic" (Int. C) and "greedy" (Int. I). In order to train an algorithm, one should "connect all databases so that we get one answer: this is what we know" (Int. D), since "the more you know, the better system you can make" (Int. C). However, such databases and archives – no matter how big and connected they are – already act within the algorithmic project as they constitute the data that is available for the algorithm's training. Any dataset only ever reflects that which has at some point been chosen to be registered and stored. One interviewee suggests that the "number of people buying headache medicine" (Int. C) could be relevant for police algorithms, but he also mentions that this data is not registered in most countries – and probably shouldn't be for reasons of data protection (Int. C). With that, he acknowledges that databases, even if they were to include every piece of digital information available today, influence knowledge production. They do so through the very information about social life that they do and do not contain. More importantly, such ideas express that any data – if only collected – could be of relevance to a policing algorithm. Other software developers would not agree with such an all-encompassing attitude towards data. They work with more select and small datasets, for example with information about "what kind of crime occurred, where it occurred and when it occurred" (Int. F). Some algorithms focus only on burglaries and add the stolen goods to the relevant information (Int. J). Advocates of selective approaches to data argue that everything else makes it harder to manage the software (Int. C). Often, the latter developers are also more aware of the way in which data is and is not registered, or collected, and what that actually means for the algorithm. A related, yet slightly different discussion among software developers is whether the data they use is enough or not. Some argue that in "these days, digital data is capturing most things that we would be interested in using. (…) I haven't seen a case where there was a type of data that we wanted to use (and) it just does not exist anywhere" (Int. G). Others find that "We don't have all the important data. (…) It's noisy, it's not perfectly measured, we would have preferred other data, which we don't have. We do the best what we can with whatever we got" (Int. K; similar point made by Int. D). The different opinions as to what kind of data can make an algorithm – and how a dataset literally acts on the algorithm as it determines what the algorithm can do – become even more evident when interviewees explain how datasets are pre-structured or cleaned. When interviewee I mentions the pre-structured dataset that they receive about prison inmates, it becomes clear that the police's, magistrates', and prison guards' incarceration practices fully determine the dataset that they receive, which will also influence the algorithms' results. Other software developers mention that datasets obviously vary across cities, which is why algorithms have to be built
Who connects the dots? 149 specifically for the geographies they will be used in (Int. F). Police practice to a large extent structures the data that is available for the algorithm's training. When it comes to the way in which police officers themselves generate datasets, a known challenge is underreporting and other thresholds for reporting. Software developers consider these a "caveat" (Int. A):
The computer program is only using crime incidents that resulted in a formal incident report that's been created. It's not using information when an officer stops on the street talking to a lady sitting on her stoop (…) The individual officers have their own kind of perceptive of what crime is. (Int. A)
Another police officer problematizes that "approximately 20% of the police population are registering 80% of the information in the database" (Int. D). This is a fact that constitutes the algorithm's workings. Effectively, a multitude of decisions influence the generation of datasets, such as the decisions taken by officers at the crime scene. One interviewee reflects on the semantics of when, for instance, police response officially stops: "Because getting control over the scene, when is that? When you handcuff him? Is that when you provided first aid because someone's shot?" (Int. H). Such semantics influence the writing of reports that will eventually be turned into digital information to be analyzed by algorithms. In a similar vein, another officer (Int. D) deliberates about the way in which data collection by the police has to follow the standards of police law, which are, in his opinion, subjective and often checked manually. Others mention that the analogue technology of the police form pre-structures and determines data collection. It only affords the collection of information that the form asks of officers. In addition, such forms only ever represent the information that "is reported or only the information that the officers find" (Int. A). Within the context of data collection, all of these decisions that are either taken by police officers or that are enabled through the bureaucratic technologies of law and form-filling are forms of acting that influence the data available for the training of algorithms. Not only the original production of information, but also its translation into digital data is a moment where the agency of both humans and technologies is relevant. Much information does not appear digitally, so it has to be translated into digital formats in order to make it readable and processable by a computer (Int. A; E; H). While some argue that nothing gets lost in this process
in principle, (…) in practice you may lose some precision because there are limits in how many sources you are prepared to invest in let's say digitizing a picture. Same thing is true with everything else. You may not be able to reproduce the same precision that is in the information unless you take the effort. (Int. I)
150 Mareile Kaufmann Others are more outspoken about the fact that translating analogue into digital information is always an instance in which human mistakes can be made (Int. A). With digitalization, social context may also get lost (Int. I). However, since social context is relevant for the processing of digital data at a later stage, some police stations have developed procedures to preserve this context. Such procedures are again highly dependent on the police officer who registers the information:
So they were obliged to fill in a short story. They had to present in written text what is the story here? What is the suspicion? Why do you think this is suspicious? You have to put it in words. Because we can't really tell that from the data you provided. (Int. D)
This goes to show that human and machinic forms of writing and reading, of compressing and decompressing information, are not necessarily compatible and are always a moment of decision-making and interpretation (cf. Hildebrandt, 2016a: 26; Kaufmann, 2018). Yet another set of technologies that afford and structure the data available for training algorithms comprises automated systems that remove specific variables from datasets (Int. I) or indexing systems that lead programmers through written text (Int. D). All of these make certain variables and texts visible, but also render others invisible within given datasets – even for the algorithm programmers. Software developers furthermore actively clean datasets of what they consider "errors" (Int. C). They structure datasets by categorizing the type of information "and at some point, you have this challenge: who decides if this is black or white?" (Int. H). Some developers need to do this cleaning and structuring work manually: "We have all this data, all this information, but we don't have procedures, we don't have any systems that help us decide which data to keep, which to delete. The data-management itself is manual" (Int. D). It is not only the cleaning procedures – whether done by technologies or humans – that determine the data available for the training of algorithms. Many developers actually add and combine different databases to train their algorithm, some of which they find available in the public domain. For example, Twitter data are used to infer information about relevant events and crime locations (Int. G). Other databases are professionally sold to programmers and developers (Int. H), while some data providers only make parts of the database available for use (Int. C, G). As explained above, all these additional datasets are again built according to the specific assumptions and rules of those who collect and organize the data in the first place (Int. D). All of these databases are pre-structured, which often leads to additional manual or digital cleaning procedures to prepare them for, and attune them to, the algorithm's purpose. It has become evident how much a database – and the way in which it is built, cleaned, and combined – has affordances.
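To make this point concrete, the pre-structuring described above can be illustrated with a minimal sketch. The sketch is hypothetical: the field names, the reporting flag, and the filtering rule are invented for this illustration and are not drawn from any actual police system or from the software products discussed by the interviewees.

# A hypothetical sketch of how reporting thresholds and form fields
# pre-structure training data before any algorithm exists: only incidents
# with a formal report survive, and only the variables the form affords
# are carried over into the dataset.
FORM_FIELDS = {"crime_type", "location", "time"}  # what the form asks for

def build_training_rows(incidents):
    rows = []
    for incident in incidents:
        # Informal knowledge (e.g., a conversation on the street) never
        # becomes data:
        if not incident.get("formal_report_filed"):
            continue
        # Only the variables afforded by the form survive the translation:
        rows.append({field: incident.get(field) for field in FORM_FIELDS})
    return rows

incidents = [
    {"crime_type": "burglary", "location": "district 3", "time": "02:10",
     "formal_report_filed": True, "officer_note": "spoke to a neighbor"},
    {"crime_type": "disturbance", "location": "district 5", "time": "23:40",
     "formal_report_filed": False},
]
print(build_training_rows(incidents))
# Only the reported burglary appears, and the officer's informal note is
# dropped: whatever is trained on these rows can never "see" the rest.

Everything downstream of such a routine inherits its exclusions, which is part of what it means to say that the database has affordances.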
Who connects the dots? 151 Each dataset allows for certain types of usages over others. More importantly, each database allows for certain kinds of analytics over others. The rules according to which databases are built, the data collection methods that determine which data is being registered, as well as the practices of human and non-human indexing and data-cleaning not only make up the complex assemblage of the database, but all of them involve different forms of agency within the process of building an algorithm and of predictive policing at large. Human and non-human influences are already at play in the phase that precedes the actual programming of the algorithm. One software developer mentions that these processes have at least one political dimension for him:
If you want to clean up the data, clean up the algorithm, I'm gonna remove some predictive accuracy, I'm gonna make everybody worse off. There is gonna be more injustice in those decisions, but I'm gonna make everybody equally worse off. The question for policy makers is: is that a good trade? (Int. I)
When and how specific datasets actually stand for accuracy and justice is debatable. More importantly, even though the developer is deeply embedded in this assemblage of agency and decision-making, this particular interviewee does not see a role for himself to engage with this dimension: "I don't make that decision – that's up to the policy makers" (Int. I).
An algorithm is born
In fact, decisions about data are crucial to the algorithm's makeup. Data is a basic part of creating an algorithm, or to put it differently, algorithms emerge from a set of formal instructions that scan and learn from specific datasets. Anything that an algorithm finds and presents as results, it knows because it sits either in its formal instructions or in the datasets it analyzes. An algorithm needs to be taught how to think. This also includes training on which rationalities to follow. These do not need to be mathematic rationalities, but basically include any theorizations or logics that its developers want to embed in the algorithm (Int. B). The algorithm learns correlative as well as causal reasoning, while both correlations and causalities are as multiple as the theories we can find about crime. This means that any decision about what to include in the original setup of the algorithm, e.g. definitions of crime patterns or of what variables count as correct, is taken by a team of developers and translated into forms by programmers. The algorithm learns from specific datasets that are "found to be most valuable for each type of crime" (Int. G), for each pattern or logic. From that data, it learns at what point a result is considered a result. Whether the result is relevant, however, still varies. When it comes to prediction algorithms, the teaching period has two phases. First, the algorithm tries to find patterns that it has been taught to
152 Mareile Kaufmann find via parameters in a dataset where the relevant incidents (here, the reported crimes) are unknown to the algorithm, but known to the programmers.
It scans this dataset, tries to predict, … and gets it wrong. You change parameters and it predicts wrong again. Millions of times. Some predictions were better than others (…) the computer tries to remember the parameter settings that made its predictions better than others (…) It keeps on varying other parameters that didn't have an effect to find. (Int. B)
The training is considered complete when the algorithm has become good enough at identifying the reported crimes. Thereafter, the algorithm is deployed on one computer where it identifies patterns as it is set up to do, and on another computer where it continues the self-learning process. Here, the computer holds information that the algorithm doesn't know already, and the algorithm is still allowed to adjust parameters in order to get even better at predicting (Int. B). To train an algorithm to become fit for its purpose is both time- and resource-intensive (Int. E, Int. D). Most importantly, any control measure and any interaction with the algorithm in the training phase is highly dependent on the affordances of each dataset and the algorithm's teachers. One interviewee stated: "whoever works on algorithms and sets them up, they will become powerful people" (Int. H). In its training phase the algorithm is highly dependent on the machine-learning expertise, policing insights, and the anecdotal and criminological knowledge of its trainers (Int. G, J). Most of these trainers, however, need to work in a team. Here, collaboration is reduced to the specific fields of expertise: for example, criminologists develop the base parameters and contents, programmers translate these into forms, and police officers rubber-stamp the algorithms (Int. C; Int. J). Each act of translation between these steps is also an act of interpretation. Thus, an algorithm never simply emerges out of itself as a neutrally mathematic entity (Int. B). The agency of each expert and each translation tool plays into the algorithm. Most algorithms never stop learning, which means that there will be an ongoing interaction with their teachers – at least in the way in which the teachers feed them new information via pre-structured datasets and control the algorithms (Int. D). However, the actual act of testing new parameters is fully automated once the algorithm is set up (Int. F). As we see from the above and the following descriptions, humans and technologies can collaborate at any stage in the creation of predictions. Keeping that in mind, the next step describes one of the moments where human agency moves more into the background and the algorithm's own agency moves to the foreground as it begins to combine parameters in a (semi-)automated fashion.
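The two-phase training that Interviewee B describes can be illustrated with a minimal sketch, written here in Python using the scikit-learn library. It is an illustration under assumptions rather than the setup of any actual product: the model type (a random forest), the parameter ranges, and the number of trials are invented for the example, and the loop is reduced from "millions of times" to a handful of iterations.

import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_prediction_model(features, reported_crimes, n_trials=100):
    # Phase one: the reported crimes are known to the programmers but
    # withheld from the model until it has made its predictions.
    X_train, X_test, y_train, y_test = train_test_split(
        features, reported_crimes, test_size=0.3)
    best_score, best_params = 0.0, None
    for _ in range(n_trials):
        # Vary the parameters, predict, and score the result.
        params = {"n_estimators": random.choice([50, 100, 200]),
                  "max_depth": random.choice([4, 8, 16, None])}
        model = RandomForestClassifier(**params).fit(X_train, y_train)
        score = accuracy_score(y_test, model.predict(X_test))
        if score > best_score:
            # "Remember the parameter settings that made its predictions
            # better than others."
            best_score, best_params = score, params
    return best_params, best_score

In the second phase described above, a loop of this kind would keep running on a separate machine against data the algorithm has not yet seen, so that parameters continue to be adjusted after deployment.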
Who connects the dots? 153
Algorithms' adolescence
Algorithms are eager learners. Their calculative capacity is their strength, which means that the way in which algorithms eventually outperform human brains is a calculative one. While this outperformance is intended by humans (most developers actually consider its calculative strength the software's selling point), algorithms can still provoke quarrels with their teachers and develop their own characteristics. For example, a typical point for algorithms to cause trouble is during their testing phase with police officers: "a lot of police officers were frustrated with the program" (Int. A). The interviewee mimics officers: "The program shouldn't be predicting this spot, it should be predicting this spot over here. (…) They're like: 'I am smarter than this program is, I know where the crime should be and it's not finding it'" (Int. A). As such, algorithms may cause actual debates about policing identities. Interviewee A argues that police officers can physically interact with the people they intend to police. They have both informal and social information available to understand what drives crime and judge where to patrol. An algorithm, however, simply "knows" where to patrol (Int. A). This is why some developers actually consider that the informal (and non-mathematic) information police officers have could in fact change the algorithms' views (Int. A), i.e. what they do and do not find. The opposite argument is made by other developers who say that the skepticism towards the algorithm is surprising (Int. E): police officers are biased. Algorithms may be, too, but at least algorithms are quicker in making decisions (Int. E). Here, the algorithms' talent has to do with the computer's talent: they are good at computing (Int. B). Algorithms may suffer – some argue – from over-representations of specific populations and types of crime. But generally speaking, once they are programmed, algorithms do have a specific task – and they can concentrate on these tasks better than humans (Int. G). The problem is that while their concentration skills are excellent, algorithms' deliberation skills are limited to keeping or deleting specific parameters with the aim of reaching more and better crime matches. At this stage, the human interaction with the vehemently working algorithm would be to keep algorithms from bubbling, from ending up in a filter bubble of self-amplified social information (Int. B).5 A more basic interaction with such algorithmic effects is to write policing algorithms that are based on expert knowledge before society is "taken hostage" (Int. C) by more mainstream and generalist companies' algorithms. Algorithms don't only have concentration skills that are superior to those of humans, but they are believed to be better at considering complex information. This, so some developers argue, can give police officers new perspectives on crime: I think police officers don't necessarily have a handle […] of how macro-level or community level factors influence individual behavior.
154 Mareile Kaufmann That’s obviously something the […] software picks up. […] it might change how officers view why crime is happening in particular locations. Why is crime happening here, but not there? (Int. A) In providing such selected macro-perspectives, algorithms have agency in the policing process. Further, they have quite a concrete analytic influence: even though their actions are originally based on a programmer’s setup, algorithms start deleting parameters out of crime analyses (Int. B) or decide when police officers should stop mapping networks (Int. D). They do that in an automated fashion and – strictly speaking – for, and instead of, the human analysts. At the same time, they also discover new insights or parameters for the analysts: INTERVIEWER:
Did the algorithms come across new insights that didn’t exist in the literature before? INTERVIEWEE G: Some cases seemed unusual at first. […] For example, the phases of the moon. […] There is no literature about why that is the case, but with full moon you may be seeing more outside, what seems brighter etc. [The algorithm] is not building a model that is saying: the moon is explaining the crime. Besides the identification of new and seemingly relevant parameters for the prediction of crime, the algorithm also makes networks and priorities visible that officers can otherwise not see (Int. D; Int. F). Interviewee D mentions, for example, that their networking algorithm could show that someone may be connected to a group of criminals without showing up in any databases as convicted or suspicious (Int. D). Algorithms can identify such connections, because applying the same rules manually would be too complex for police officers. In fact, algorithms are meant to reveal insights beyond that which police officers know. The software actively creates new knowledge based on the inputs that its teachers have been giving it. This idea that the algorithm’s outputs derive from a “kind of higher knowledge base” (Int. D) also creates a sense of accountability that is coproduced by the algorithm. A police officer argues: “So when we do something to any of our citizens, it’s based on a higher level of knowledge” (Int. D). That this higher level of knowledge is not a given and not a purely mathematic process, but dependent on human and non-human collaboration including the many decisions and acts of pre-structuring the knowledge-base, is often not reflected on. Algorithms’ graduation To be considered mature, algorithms need to be more efficient than humans. They need to outperform them in order to increase police efficiency (Int. B).
Who connects the dots? 155 While many software developers argue that algorithms are “not meant to replace human ingenuity” (Int. F), algorithms develop capabilities that humans can no longer perform. They produce knowledge in a way that the average human being can no longer understand. A developer said: “Even if you were to say you were to publish the algorithm, it wouldn’t make any sense to the people reading it” (Int. C). In addition to the level of complexity at which algorithms combine different sets of information with each other, algorithms tend not to disclose the arguments about how they have reached a specific result. They only provide the result. Humans without advanced digital literacy can no longer know how exactly the algorithm combines the datasets. In some cases, the same datasets actually led to different results. While Interviewee E argues here that humans also don’t need to know how the algorithm got to its result, as long as the result is a good one, others argue that an explanation for the results’ why and how is necessary as it would help police officers in the implementation of counter-activities, so that officers “can decide whether these reasons are relevant or not” (Int. A). Interviewee H agrees: “if you could have software that suggests why this is happening, you could guide the officer into the problem-solving on scene […] give better advice to the woman who had a burglar in their apartment” (Int. H). He continues to argue that if algorithms don’t explain how and why they got to a certain result, they are less transparent than an officer’s decision strategy: “it’s harder for people then to question those patterns if these parameters are not visible or accessible” (Int. H). Algorithms at work Once a software’s prediction algorithm has graduated and is actually implemented, the collaboration between algorithms and police officers moves even more into focus. Some developers and users argue that the algorithm actually does not predict anything, but basically provides police officers with the status quo (Int. D). This status quo is a baseline that is “not gonna change basic human interaction, I think, but it’s mostly a tool to make our time a little more efficient, both, the policemen’s time for which society pays a whole lot of money” (Int. C). The algorithm, then, has an impact on the policing process, but does not take decisions for police officers, not least because the interpretation of an algorithm’s result still needs to be done by those who implement crime control: We don’t know if there is a police agency out there who would download the (software) and simply just send saturation patrols to all the hot spot areas. And if they did that – really – they would be policing the poor minority communities, which is not what the software is intended for. We wrote the software to identify the highest risk. […] We didn’t want police officers to interpret this output too literally. (Int. A)
156 Mareile Kaufmann This means that a true collaboration between police officers and algorithms is still necessary. Software owners argue that humans are needed to make “judgment calls” (Int. C) and “take decisions” (Int. F; Int. A). What the algorithm adds is efficiency, but it does not replace the officer or human reasoning about crime. In fact, the algorithm should not be too fast and professional. Not only would the replacement of human judgment by algorithms cause unease and surprise, but it would re-determine the collaborative practice of policing for the worse. One interviewee sees the problem in the lack of the officers’ media competence: INTERVIEWEE F:
Doing predictions in real time creates distractions for police officers.
INTERVIEWER: How so?
INTERVIEWEE F: Well, if they constantly have to ask the question: "Where are my predictions now?," then they spend more time on their iPhones looking through where the predictions are rather than policing the environment. So it actually is counterproductive to do predictions in perfect real-time.
Software owners argue that the algorithm is meant to empower, and not take agency away from, police officers. Some developers would even see the potential in empowering vigilantes or reserve police officers. Interviewee C, here, sees different outcomes depending on the way in which predictive policing tools would be implemented in society at large. An overreliance on prediction algorithms could cause negative effects by lowering the potential for natural surveillance that sits in neighborhoods. This problem of overreliance could be summarized in the attitude: "the computer will take care of it and the police will fix it" (Int. C). On the other hand, some developers see that the general citizen could be empowered by their own digital device and assist in neighborhood patrol. Here, however, the problem of over-reporting and discriminatory bias in neighborhood policing could easily grow out of proportion. Algorithmic agency and police agency do not necessarily stand in competition with each other. Rather, it is to be expected that both forms of agency will continue to be relevant in policing efforts. And yet, the impact of algorithmic predictions is expected to supersede police agency in some domains. Such expectations are already argued about. Interviewee K sees the need for officers and algorithms to collaborate by acting as each other's supervisory authority, investigating and altering each other's prediction results:
So if the computer says low and you think high risk, you should probably do the assessment once more – and the other way around. The danger is if you trust the computer too much, you might overlook very important information that will lead you to do a sensible decision. But you can also distrust the computer too much and these algorithms using information. You should pay attention to it. (Int. K)
Who connects the dots? 157 In such collaborations, police officers would also have to check whether algorithms work lawfully and according to the different national standards on "civil liberties or human rights" (Int. C). This last statement especially implies that the standards for using prediction technology in the context of law enforcement vary. If taken a bit further, it is a statement about the fact that any of the collaborative efforts of humans and technologies to predict crime are also embedded or situated in specific societal contexts. Yet, not just the police and the societal contexts in which the prediction technology is implemented influence the algorithm's results; the software or algorithm also influences the police and policing practices. Many interviewees agree that algorithms will change policing behavior at large – not just via recommendations. Rather, predictive policing could change key performance indicators in the police – i.e. how the police's efficiency and effectiveness is measured (Int. G). Interviewee J, too, expects a change in police culture: "Predictive policing will be standard police procedure in 10 years' time. […] It will change policing culture. It will generate new functionalities and new tasks" (Int. J). Much can be said about how algorithms may make policing more efficient and effective, but the assemblage of agency described above has shown that algorithms also have the power to render decision-logics invisible and less transparent. Yet, the discourse in the software developer community focuses on the way in which algorithms create new insights and generate new interests.6 Interviewee D argues that this new knowledge already has an effect on policing (Int. D), but it is hard to understand how actual arrests will impact the algorithm's formula again (Int. D).
Now, do algorithms die?
When and if algorithms die – that is, whether an algorithm will actually stop computing – is in fact a popular debate in the philosophy of science. It is currently seen as the ultimate unknown, not least because the answer to this question is not computable, a result known in computer science as the halting problem. In order to answer this question, the algorithm has to be run (Int. B). However, whether the algorithm stops computing or not is not necessarily tied to its ability to act. As we have seen from the above descriptions, algorithms, and technologies that precede algorithms, act and collaborate with humans from the moment the algorithm is pre-conceived. Interviewee C summarized this in relation to computation at large and its relevance in the police's future: "One thing is sure: they're gonna be using computers much more than now." With that, one could assume that as long as algorithms' results are structuring police work – even after they may be gone or are replaced – their agency remains.
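The non-computability the interviewee refers to can be sketched in a few lines via the classic diagonal argument behind the halting problem. The sketch is illustrative only; the function halts is a hypothetical oracle that, as the argument shows, cannot exist.

def halts(program, data):
    # Hypothetical oracle: decides, without running it, whether
    # program(data) ever stops. Assumed here only to derive a contradiction.
    raise NotImplementedError

def paradox(program):
    if halts(program, program):  # if the oracle says "it halts" ...
        while True:              # ... then loop forever,
            pass
    return                       # ... otherwise stop immediately.

# Asking halts(paradox, paradox) can be answered neither "yes" nor "no"
# without contradiction, so no general oracle exists: as the interviewee
# notes, the only way to find out whether an algorithm stops is to run it.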
What can prediction algorithms tell us about technology and agency? Some conclusions
One interviewee pondered the role that digital technologies would have in predictive policing: "Is there a difference in who tells the story, officers or algorithms?" (Int. H) The analysis above illustrates that the agency of identifying patterns is not a question of either/or. In the process of predicting crime, a whole network of police officers, software developers, programmers, digital and analogue forms, manual as well as technical procedures for data collection and cleaning, datasets, and not least algorithms co-constitute the results. Within this network or assemblage we find many moments where agencies occur and where humans and non-humans influence each other. And yet, choosing the standpoint of the algorithm to explore predictive policing was done with the intent of shifting the focus from human agency onto collaborations. The flourishing materialist approaches in IR and critical security studies tend to explore the newness of matter or technologies as actors. This chapter adds to this debate, but it wants to remind us that agency is not a binary concept, but that it is truly co-constitutive as humans and technologies interact.7 Indeed, we can say that algorithms have risen to be actors in digitized bureaucracies across the globe. Yet, even if they are deployed as seemingly autonomous detectives with "artificial intelligence," algorithms always collaborate with humans in both international and more local contexts. Amongst other things, this chapter has described how these are collaborations of influence: such collaborations between algorithms and humans already have an impact on the local level, for example on specific security domains (as Int. D says about the field of policing). At an international scale, they prioritize patterns as a way to grasp social developments and relations, and co-create concrete recommendations and predictions based on pattern analysis. Such predictions have become relevant in various domains of international importance, ranging from economic spheres to the creation of law and order and the consumption of information via media. A closer look at the use of prediction software in the domain of policing, then, gave us an insight into the kinds and stages of collaboration as well as the co-constitutive forces that we need to investigate when we want to understand the politics of automated recommendation that emerge globally in different contexts. This perspective of co-constitution still prompts a few conceptual questions that are relevant when we want to study the relations between technologies, agency, and the international at large. For example, is technology social? Incidentally, a specific subset of technologies, namely media, have actually adopted the social into their name. Admittedly, the social in social media rather stands for the idea that they connect people to each other, that they facilitate networks and inter-action. However, this chapter went further in arguing that technologies are more than just means; they exist
Who connects the dots? 159 and act through a group or a network. Due to that, technologies are social in the sense that they relate to, or have active, interdependent relationships with, others. They relate to other technologies, humans, and society. The analysis has foregrounded how technologies cooperate and interact with these networks. Further, it has become clear that technologies can become an ally (the Latin root of the word social, socius, means ally or companion), which also involves the idea of political alliances. In that sense, prediction algorithms are allies for certain forms of crime prevention and the related views on crime. In debates about whether technologies play a role in policing (see the section Algorithms at work), we have seen that, for example, software developers and algorithms can be allied. This does not mean that they are always of the same opinion, but we have seen examples where algorithms and police officers stand for specific arguments within the debate and together, they rise as important actors in the technology-supported prediction and prevention of crime. Whether this social character of technologies actually ties in with awareness, reflexivity, voluntariness, and intentionality of algorithms could not be answered with this study, but such questions would be promising material for further research. A question related to the notion of agency and sociality is whether technology is animate. Choosing the model of the algorithm's life story to explore the relationship between technology and agency is already a partial answer to that question. Algorithms are not alive in the sense that they are made of organic mass. Yet, they are tightly entangled with human life or bios at large and have already become co-constitutive parts in biological assemblages. In addition to that, algorithms do have a "body" that is made up of processing instructions, which grows as it works on and with data material. Thus, algorithms may not be alive with respect to all the characteristics of organic life, but this chapter has illustrated that they do have a life cycle. Algorithms can organize, adapt, grow, and evolve in an automated fashion. They may not do so in a self-sustained fashion; indeed, the chapter has focused on the way in which agency comes about in networks and relationships. Algorithms do not exist out of themselves. If agency is about the ability to act, then algorithms also need to act on something. Thus, algorithms need an environment to emerge from, but also to engage with, in the sense that they – again – contribute to emergences and developments within their environments. Such "intra-actions" (i.e. the "mutual constitution of entangled agencies," cf. Barad, 2007: 33) also include the ongoing creation of knowledge, which brings us to the question: can algorithms know? And if so, what kind of intelligence is this? Algorithms are active contributors to the production of knowledge. They know, but their knowledge is not based on deliberation, but on instruction. Algorithms receive instructions from developers, police officers, researchers, and ultimately programmers, which the algorithms then implement in environments that are too complex for human minds to grasp.
160 Mareile Kaufmann As we have seen, they can combine parameters in a fashion that is impossible for humans to perform, and they can concentrate on this task with machinic rigor. Bowker (no date: n.p.) describes this rigor as knowledge production with a normative temporality: "Thou shalt learn at the maximum rate you can (be the best you can be) without hesitation or disruption," without stopping and without a chance "to drop out for a while, to tank the odd subject" (Bowker, no date: n.p.). Most algorithms then work in a deterministic fashion, namely in order to provide knowledge that is guided by a certain telos, i.e. to make police officers more efficient in preventing specific crimes. Yet, the researcher's view on the algorithm's epistemic work is rather to observe how algorithms influence knowledge production at large (van der Tuin and Dolphijn, 2010). This latter view on algorithmic knowledge production differs from the way some software developers look at the algorithm: they understand the algorithmic workings as positive and negative effects,8 all of which can be solved with newer or better algorithms. Instead of seeing algorithmic knowledge production as mere effects of technological means that can become better at what they do, an argument that sees the algorithms' agencies appear also acknowledges that biases or effects are necessary constituents of algorithmic knowledge production. It is a view that sees being and knowing as entangled: if the algorithm is and acts, it also knows in a specific way. The question then becomes whether there are mechanisms in place to make algorithmic knowledge production comprehensible. This last part has illustrated how much seeing and studying agency is intimately tied to methodological, ontological, and epistemological questions. Just like in IR, relations are indeed in focus when agency is studied. Yet, relations are not just a methodological entry point, but they are the ontological core of agency. They are the place from where human and non-human actors and acting emerge. This method of tracing agency via relations is then not deployed to create reproducible knowledge, but to broach the issue of what counts as knowledge. In this process, matter or technologies become a transformative force rather than an object to be studied (van der Tuin and Dolphijn, 2010), which is how, and why, technological agency fundamentally challenges and changes the humanities.
Notes 1 The project and all of its interviews have been subject to ethical evaluations by the Norwegian Center for Research Data (NSD), which formally approved the use of the data in anonymized form. 2 The term software developers relates to those who stand for the final product and are part of the software project at large, also with ideas, planning and inputs. The term software programmers relates to those who write and train the software’s algorithm. Sometimes, but not always, these roles overlap. 3 All interviewees are anonymized via alphabetic code. In the following, all references to interviewees will be indicated via the abbreviation Int. & letter. 4 I apologize here for using a rather standard model of a life story.
Who connects the dots? 161 5 The bubble or so-called early voter problem is now countered by big companies like Google with so-called Google love: "If a new page is instituted, Google actually counters this problem with extra love points: they wait and see whether people will click on it. To see if that changes interest in a person or subject – counter the information bubble" (Int. B). 6 For exceptions, see the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT) https://fatconference.org (accessed 28 February 2019). 7 Some have argued that the role of humans in studying non-human agency is again very central as they measure impacts of non-human agents on their surroundings. This, again, leaves the role to decide on what counts as agency to humans and their agency (Mutlu, 2013). This chapter does not try to identify a solution to this problem, but simply suggests that agency is not reducible to one (i.e. technology) or the other (i.e. humans). It foregrounds co-constitution. 8 These could be challenges to the presumption of innocence, the right to non-discrimination, or the proportional use of data.
References Abrahamsen R and Williams M C (2010) Security Beyond the State: Private Security in International Politics. Cambridge: Cambridge University Press. Amoore L (2009) Algorithmic War: Everyday Geographies of the War on Terror. Antipode 41(1): 49–69. Amoore L and Piotukh V (eds.) (2016) Algorithmic Life: Calculative Devices in the Age of Big Data. Milton Park/New York: Routledge. Barad K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham/London: Duke University Press. Barberá P, Jost J T, Nagler J, Tucker J A and Bonneau R (2015) Tweeting from Left to Right: Is Online Political Communication More than an Echo Chamber? Psychological Science 26(10): 1531–1542. Beck U (1986) Risikogesellschaft: Auf dem Weg in eine andere Moderne. Frankfurt am Main: Suhrkamp. Bennett J (2005) The Agency of Assemblages and the North American Blackout. Public Culture 17(3): 445–465. Bennett J (2010) Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press. Bigo D (2016) Rethinking Security at the Crossroad of International Relations and Criminology. British Journal of Criminology 56(6): 1068–1086. Bijker W E, Hughes T P and Pinch T J (eds.) (1987) The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge/London: MIT Press. Bostrom N (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. Bowker G C (no date) I Don’t Wish to Know That. Department of Arts and Cultural Studies, University of Copenhagen. Available at https://artsandculturalstudies.ku. dk/research/focus/uncertainarchives/activities/archivaluncertaintyunknown/bowker/ (accessed 31 October 2018). Callon M (1991) Techno-Economic Networks and Irreversibility. In Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge, 132–161.
162 Mareile Kaufmann Chamard S (2006) The History of Crime Mapping and Its Use by American Police Departments. Alaska Justice Forum 23(3): 1–8. Chan J (2003) Police and New Technologies. In Newburn T (ed.) Handbook of Policing. Cullompton: Willan Publishing, 655–679. Chan J and Bennett Moses L (2016) Is Big Data Challenging Criminology? Theoretical Criminology 20(1): 21–39. Crawford A (2006) Policing and Security as ‘Club Goods’: The New Enclosures? In Wood J & Dupont B (eds.) Democracy, Society and the Governance of Security. Cambridge: Cambridge Univ. Press, 111–138. ENISA (2011) Inter-X: Resilience of the Internet Interconnection Ecosystem. Available at www.enisa.europa.eu/publications/interx-report/at_download/fullReport (accessed 31 October 2018). Esposito E (2013) Digital Prophecies and Web Intelligence. In Hildebrandt M & de Vries K (eds.) Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. London/New York: Routledge, 121–142. Ewald F (2002) The Return of Descartes’s Malicious Demon: An Outline of a Philosophy of Precaution. In Baker T & Simon J (eds.) Embracing Risk: The Changing Culture of Insurance and Responsibility. Chicago/London: The University of Chicago Press, 273–301. Gibson J J (1966) The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin. Harcourt B E (2007) Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. Chicago/London: The University of Chicago Press. Hildebrandt M (2016a) Law as Information in the Era of Data-Driven Agency. The Modern Law Review 79(1): 1–30. Hildebrandt M (2016b) New Animism in Policing: Re-Animating the Rule of Law? In Bradford B, Jauregui B, Loader I & Steinberg J (eds.) The SAGE Handbook of Global Policing. London/Thousand Oaks/New Delhi/Singapore: Sage, 406–428. Kaufmann M (2018) The Co-Construction of Crime Predictions: Dynamics Between Digital Data, Software and Human Beings. In Gundhus H O, Rønn K V & Fyfe N R (eds.) Moral Issues in Intelligence-Led Policing. London: Routledge, 143–160. Kleinman Z (2015) Are We Addicted to Technology? BBC News, 31 August. Available at www.bbc.com/news/technology-33976695 (accessed 31 October 2018). Kurzweil R (2005) The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books. Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press. Lindsey N (2018) Does Predictive Policing Really Result in Biased Arrests? CPO Magazine, 9 April. Available at www.cpomagazine.com/2018/04/09/does-predictivepolicing-really-result-in-biased-arrests/ (accessed 31 October 2018). MacKenzie D and Wajcman J (1985) Social Shaping of Technology: How the Refrigerator Got Its Hum. Milton Keynes: Open University Press. Miller C C (2015) Algorithms and Bias: Q. and A. with Cynthia Dwork. New York Times, 10 August. Available at www.nytimes.com/2015/08/11/upshot/algorithmsand-bias-q-and-a-with-cynthia-dwork.html (accessed 31 October 2018). Mutlu C E (2013) The Material Turn: Introduction. In Salter M B & Mutlu C E (eds.) Research Methods in Critical Security Studies: An Introduction. Milton Park/New York: Routledge, 173–179. O’Connell M (2017) To Be a Machine. London: Granta Publications.
Who connects the dots? 163 O’Malley P (2009) “Uncertainty Makes Us Free”: Liberalism, Risk and Individual Security. BEHEMOTH – A Journal on Civilisation 2(3): 24–38. Sheptycki J (2007) Criminology and the Transnational Condition: A Contribution to International Political Sociology. International Political Sociology 1(4): 391–406. Tufekci Z (2018) It’s the (Democracy-Poisoning) Golden Age of Free Speech. Wired, 16 January. Available at www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/ (accessed 31 October 2018). van der Tuin I and Dolphijn R (2010) The Transversality of New Materialism. Women: A Cultural Review 21(2): 153–171. Voelkner N (2013) Tracing Human Security Assemblages. In Salter M B & Mutlu C E (eds.) Research Methods in Critical Security Studies: An Introduction. London/New York: Routledge, 203–206. Wilson D (2006) Biometrics, Borders and the Ideal Suspect. In Pickering S & Weber L (eds.) Borders, Mobility and Technologies of Control. Dordrecht: Springer, 87–109. Wilson D (2018a) The Instant Cop: Time, Surveillance and Policing. Paper presented at the 8th Biannual Conference of the Surveillance Studies Network, Århus, Denmark, 8 June. Wilson D (2018b) The Real-Time Cop: Imaginaries of Technology, Speed and Policing. Paper presented at the EURIAS Conference “Automated Justice: Algorithms, Big Data and Criminal Justice Systems,” Zurich, Switzerland, 20 April.
8
Designing digital borders The Visa Information System (VIS) Georgios Glouftsios
Introduction In the post-9/11 world, governments in Europe and North America seem to suffer from post-traumatic stress disorder (Salter and Mutlu, 2012), in the sense that they are obsessed with risks stemming from international mobility, allocating large amounts of public funds for the development of technologies that are envisaged to revolutionize practices and processes of border security. This leads to a situation where security controls on mobile subjects and objects depend heavily on high-tech artefacts and infrastructures. Databases (e.g., Dijstelbloem and Broeders, 2015; Jeandesboz, 2016), predictive algorithms (e.g., Amoore, 2011; Leese, 2014), land and maritime surveillance systems (e.g., Jeandesboz, 2017; Tazzioli and Walters, 2016), explosive detection devices (e.g., Bourne et al., 2015; Lisle, 2017), body scanners (e.g., Bellanova and González Fuster, 2013; Valkenburg and van der Ploeg, 2015), and high-tech walls (e.g., Pallister-Wilkins, 2016; Vukov and Sheller, 2013) are only some examples of border security technologies that have received increasing scholarly attention. In an EU context specifically, several studies have dwelled on the deployment of Information and Communications Technologies (ICTs) that facilitate the collection, processing, and sharing of data on supposedly suspect mobilities; rendering them intelligible, calculable, and governable by security apparatuses. Scholars adopt various heuristic devices to capture this development, arguing that EU border security becomes increasingly “technologized” (Ceyhan, 2008), “digitized” (Broeders, 2007), and “smart” (Leese, 2016). In this literature, ICTs are often approached in three different, but closely linked and often overlapping, ways. First, they are studied as instruments that support the implementation of specific policies – for example, internal security and migration management policies – and the coordination of the work of transnational guilds of security professionals (e.g., Balzacq, 2008; Bigo, 2008). Second, they are viewed as constituted in, and constitutive of, broader socio-technical assemblages that bring together populations of security practitioners, securitized bodies, and technological devices of all sorts (e.g., Duez and Bellanova, 2016; Jeandesboz, 2017; Walters, 2017). Third, they are approached as agents
Designing digital borders 165 that shape sovereign decisions of inclusion and exclusion at borders by actively mediating the work of those actors – for example, border guards and police officers – who are responsible for monitoring international mobility and sorting out its risky elements (e.g., Amoore, 2011; Hall, 2017; Matzner, 2016). The common denominator of these studies is that they all introduce a post-humanist understanding of border security, which is symmetrically attentive to its social and technological realities. This chapter aspires to contribute to this vibrant field of research by emphasizing the need to explore how ICTs were designed before their development and deployment in the field of EU border security. Inspired by studies politicizing the labor that goes into the making of contemporary high-tech borders (e.g., Bourne et al., 2015; Jeandesboz, 2016; Leese, 2018; Vukov and Sheller, 2013), I will provide an in-depth analysis of the design process that preceded the development of one large-scale information system that is currently used for border security, migration management, and law enforcement purposes by the competent authorities of the EU Member States. This is the Visa Information System (VIS). The question that I will attempt to answer is twofold. First, how did the VIS emerge within the process of its design? Second, what does the analytical focus on this process reveal about the agency inscribed in the VIS by its designers – in particular, the agency of the system, as this manifests in ways that intervene in the governing of international mobility in the EU? This exercise is important because it unearths a variety of actors, knowledges, concerns, and controversies that have shaped the makeup of the VIS, as well as the ways in which the system functions and comes to matter politically. I approach the design process of the VIS as a spatiotemporally dispersed nexus of practices that seek to describe the associations and interactions between the human (i.e. security professionals) and non-human (e.g., hardware, software, data) agents constituting the system. These practices involve discussions, negotiations, the (re)drafting of various texts, and the circulation of these texts across the groups formed by those who designed the system. Throughout the process of its design, the VIS emerges as a projected socio-technical assemblage constituted by agents whose anticipated relations and interactions are to produce specific performative effects. My argument is that one of these effects is the establishment of the Schengen area as a techno-political ordering: a controlled space of transnational circulations which is built upon ICTs that allow national authorities to share information on suspect mobilities. As I will demonstrate in the following pages, the VIS allows for the digital interconnection of different spaces where international mobility is filtered – for example, airports, land borders, and consulates of the Member States in third countries where individuals apply for Schengen visas – as well as for the coordination of control practices enacted in those spaces (see also Glouftsios, 2018). Indeed, the VIS was designed as an expanding assemblage that actively mediates attempts to govern international mobility in the EU through data gathering and sharing.
In order to find information relevant to the process within which the design characteristics of the VIS emerged, I conducted a series of interviews with policy, legal, and technology experts working at: (a) the European Commission’s Directorate-General for Migration and Home Affairs (DG HOME); and (b) the European Agency for the Operational Management of large-scale IT Systems in the Area of Freedom, Security, and Justice (eu-LISA). DG HOME acted as the center that coordinated the overall design (and development) process of the VIS, while eu-LISA is now entrusted with its maintenance, evolution, and protection. To be clear, those who had the legal responsibility for the delivery of the system were not the technoscientists who developed it, but the EU bureaucrats who gathered, combined, and “translated” heterogeneous kinds of concerns, considerations, and expert knowledges – including technoscientific ones – into its functional and technical-infrastructural design specifications. “Translation” refers to the process through which heterogeneous considerations and knowledges were synthesized and inscribed in the design characteristics of the VIS: characteristics that are detailed in various documents (lists of functional requirements, feasibility studies, legislation) that were drafted and circulated by those involved in the design of the system.

The remainder of this chapter is organized as follows. In the first section, I will introduce a set of analytical sensibilities that will allow us to make sense of the VIS design process. In the next three sections, I will explore (a) how the initial functional requirements for the system emerged; (b) how these functional requirements were translated into the system’s technical-infrastructural specifications; and (c) how the identified requirements and specifications were translated, once more, into the legislation governing the future development and use of the system. It is through this progressively unfolding chain of translations that the VIS slowly emerges as a socio-technical assemblage expected to support the governing of international mobility in the EU and contribute to the establishment of the Schengen area as a controlled space of transnational circulations.
Heterogeneity, dispersion, projection

How can we approach the design process of the VIS? Let me start by emphasizing its heterogeneous (Law, 1987) and dispersed nature. Heterogeneity refers to two interlinked points. First, it denotes that those collectives of actors who designed the VIS were not just concerned with its technicalities, but also with the ways that these technicalities might impact (a) future security practices and processes; and (b) the future implementation of relevant policies (i.e. internal security, migration management, and asylum policies). In addition, heterogeneity refers to the entanglement of diverse types of technoscientific, security, and policy concerns that were translated into the design characteristics of the system. As I will show in subsequent sections, apart from the technical aspects of the VIS, its designers were concerned with the efficiency of border security, law enforcement, and migration management practices, the implementation of existing and forthcoming Schengen policies, and how all these could be affected by the future development, deployment, and use of the system.
Second, to reflect upon heterogeneity means recognizing that those who designed the VIS were not only technoscientists but also policy, legal, and security experts. Here the dispersed element of the design process also comes into play. Ethnographic research inspired by classical laboratory studies (e.g., Latour and Woolgar, 1979; Lynch, 1985) does not really invite us to open up spaces other than technoscientific laboratories where design activities may take place. “Give me a laboratory and I will move the world,” Latour (1983) once brilliantly argued, even though he was not referring to laboratories producing technological products. Indeed, what this kind of ethnographic research taught us is that laboratories (technoscientific or otherwise) have the power to affect what is happening in the world outside them (Callon et al., 1986). For example, several studies have demonstrated that, when it comes to technologies mediating control practices targeting international mobility, the border is always a “laboratized” border (Bourne et al., 2015) because what technoscientists do inside their laboratories (i.e. designing and developing security technologies) affects the practices and decisions made by those actors (e.g., border guards and police officers) using their products (see also Amoore, 2014; Valkenburg and van der Ploeg, 2015). However, to a certain extent, what these studies do not reflect upon is how dispersed outside spaces can affect what is happening inside technoscientific laboratories. As regards my research, among these spaces are, for example, conference rooms inside buildings of EU institutions in Brussels, where EU bureaucrats, policymakers, security professionals, legal and technology experts meet to discuss technoscientific, policy and security-related needs and concerns, as well as how these needs and concerns can be translated into the design characteristics of new information systems (see Jeandesboz, 2016).

Now, to account for the heterogeneous and dispersed process within which the design characteristics of the VIS emerged, I suggest that we should study this process as a nexus of practices performed across space and time. These practices produce and are, in turn, affected by the projections of the system’s functional and technical-infrastructural design specifications – projections that then condition the way that the VIS is built. These projections are inscribed in various design documents, for example, in draft lists of functional requirements for the system; feasibility studies aimed at assessing possible design solutions; and draft legislation laying down the functional characteristics of the system, the authorities allowed to use it, and the procedures that these authorities should follow to insert, exchange, and consult VIS data. These documents should not be seen as static linguistic representations that somehow appeared out of the blue. Rather, they have their own history: they emerged, indeed they materialized, within nexuses of recursive design practices, such as discussions, negotiations, calculations, draftings, and re-draftings.
What is more, they are themselves actors that intervene in, and make a difference to, the design process of the VIS in three ways. First, design documents act as intermediaries bringing together heterogeneous engineers – EU bureaucrats, technoscientists, security experts, policymakers, and legislators – who collectively (re)draft, discuss and consult these documents. By doing so, they allow for the coordination of the overall design process. Second, design documents have a real material impact on the technical-infrastructural features of the VIS and its functionalities. This is because the contractors who developed the VIS (a consortium of Sopra Steria and Hewlett Packard) were bound by the functional and technical-infrastructural specifications that were identified during the design phase of the system’s lifecycle (Participant 9, 2017). Third, inscribed in design documents is the assemblage of human and non-human agents constituting the VIS. During the design process of the system, this assemblage is not physical or tangible; however, the carefully calculated projections of the future associations and interactions that make it up have an impact on what is said, thought and done by the heterogeneous engineers who design it. Put simply, previously calculated projections about the future conduct of the system’s human and non-human operating parts affect the work of those heterogeneous engineers who design it, because design documents and whatever projections are inscribed into them form the basis upon which further discussions, negotiations, draftings, and re-draftings are enacted.

What Latour (2004) describes as a “matter of concern” – meaning a thing powerful enough to gather a group of actors that negotiate and care about it – is in my case an information system that has not yet acquired a tangible material form. Therefore, to appreciate the politics of its design, we should focus on the projections made by designers about the ways that the VIS will behave after its deployment. In short, my analysis will dwell on that period of time when the VIS was still a project (not an object): a socio-technical assemblage in the process of becoming. Attending to this process of becoming is crucial because it reveals the agency that heterogeneous engineers inscribed in the design characteristics of the VIS. To be clear, I do not reduce agency to human reflexivity, intentionality, and purposiveness. Rather, I understand agency as the force that humans and non-humans exert in their associations and interactions. To appreciate the agency of the VIS, I suggest that we should go beyond the mere recognition that its human and non-human operating parts are agents, and instead try to appreciate the effects produced by the dynamics of their interactivity. If the VIS was designed as a socio-technical assemblage constituted by a multiplicity of interacting agents, such as security professionals, servers, network cables, interfaces and algorithms, to name just a few, then this implies that the system is characterized by a “distributed” human-non-human agency (Bennett, 2005; Latour, 2005), which emerges from a loosely structured set of associations and interactions between its operating parts.
As we will see in the following sections, heterogeneous engineers described in various design documents how the humans and non-humans constituting the system should interact to produce continuous flows of information on supposedly risky bodies travelling to the Schengen area. For example, design documents detail the exact procedures that should be followed by security professionals to store, consult and update VIS data, as well as the technical specifications of hardware and software that they should use to enact these “data practices” (Madsen et al., 2016). This means that design documents serve as “scripts” (Akrich, 1992) that define the roles of each agent – human or otherwise – that constitutes the VIS assemblage; scripts that are expected to be acted out after the deployment of the system in the field of EU border security. Ultimately, it is the associations and interactions defined in such design scripts that (a) condition the ways that the VIS functions as an assemblage, (b) generate flows of data across the spaces and times where/when security controls targeting suspect bodies on the move are performed, and (c) sustain the power to govern international mobility through data gathering and sharing.
Concerns, problems, and functional requirements

The VIS was, at least initially, justified in official policy discourse as a measure necessary to deal with international terrorism and implement the EU common visa policy. Shortly after the tragic events of 9/11, on 20 September 2001, the Member States’ Ministers for Justice and Home Affairs (JHA) held an “extraordinary” Council meeting to discuss possible measures aimed at combating terrorism. In point 26 of its conclusions, the JHA Council highlighted the need to ensure harmonized and rigorous visa examination procedures, enhance the cooperation between national consular authorities and, to achieve these ends, examine the possibility of introducing a system supporting the exchange of information on already issued Schengen visas (Council of the European Union, 2001c). This call for stringent visa examination procedures and enhanced consular cooperation suggests that visas were viewed as powerful instruments having an impact not only in the field of migration management, but also in counter-terrorism and the internal security of the Member States. What is more, drawing links between visa-related measures and counter-terrorism suggests that third-country nationals subject to the Schengen visa requirement were seen as carrying potential security risks. The introduction of a system allowing for information exchange on Schengen visas was viewed as a measure enabling security apparatuses to tame these risks. Later that year, the EU Council meeting in Laeken confirmed the political will of the Member States to develop such a system, without however providing any further clarifications about its exact purpose and functionalities (Council of the European Union, 2001d: 12).
Following these announcements, and to prepare the ground for discussions on the functional requirements for the VIS, the Council’s presidency (at that time Spain) drafted and circulated a questionnaire to the Visa Working Party and the Strategic Committee on Immigration, Frontiers and Asylum, asking national delegations to express their views on the purpose, content (i.e. data stored, processed, and exchanged), and authorities allowed to use the system (Council of the European Union, 2001a). To clarify, the aforementioned groups are fora in which national delegations meet to discuss issues broadly linked to the fields of border security and migration management. Depending on the agenda set for each meeting, national delegations consist of different actors. Among them are official state representatives, like those coming from ministries of foreign affairs, legal and technical experts, as well as high-ranked officials working in relevant national authorities, such as consular, migration, and border authorities (Participant 4, 2016). During the discussions that took place within these formations in the early 2000s, the VIS emerged as a solution to problems related to (a) the processing of Schengen visa applications; (b) the examination of asylum requests; and (c) border controls (Council of the European Union 2001b, 2001e, 2002; see also EPEC, 2004).

As regards the processing of Schengen visa requests, two problems were identified: visa-related fraud and visa shopping. First, visa-related fraud refers to the submission of fraudulent information by third-country nationals when applying for Schengen visas. Examples include false identity documents and passports, counterfeit bank statements, fictitious information on those inviting the applicants in a Member State, as well as fabricated details on the route, purpose, and intention of travel. The problem was that, due to the lack of information exchange on Schengen visas, the national authorities of, say, Member State A could not determine whether a visa applicant had previously submitted a fraudulent application to the authorities of Member State B. Second, visa shopping refers to situations where individuals whose visa applications have been previously rejected by the authorities of a Member State lodge new applications in other consular posts, either of the same Member State or of a different one, without the latter being informed about the reasons why previous applications have been rejected. Before the establishment of the VIS, information sharing among different consular posts was carried out through the VISION message exchange platform, telephone calls, and emails – a rather loose arrangement that did not ensure the accuracy and timely availability of requested information (EPEC, 2004: 13–4). This lack of readily accessible records on past Schengen visa applications was deemed a facilitating factor for visa-related fraud and shopping that, in turn, were hindering the implementation of the EU common visa policy.

Furthermore, regarding the EU asylum policy, one of the identified problems was that it was difficult to determine the Member State whose authorities had the legal obligation to process each specific request for asylum. According to Article 9 of the Dublin II Regulation (Official Journal of the European Communities, 2003), which was in force when the VIS was designed, in cases where an asylum seeker was in possession of a valid Schengen visa, it was the responsibility of the Member State whose authorities issued that visa to assess the request for asylum. Also, in reference to the same Article, the asylum authorities of a Member State could request information, primarily for security-related purposes, from the authorities of another Member State that had previously issued a visa to the asylum seeker in question. However, without computerized records on already issued visas and a reliable communications infrastructure ensuring accelerated information exchange, it was difficult to determine which Member State was responsible for the examination of asylum requests, share additional information for security-related purposes, and verify the identity of those asylum seekers who did not carry any official documentation with them – which is often the case when third-country nationals arrive in the EU irregularly (Participant 6, 2016).
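The decision logic that the VIS was meant to support here is simple enough to be stated schematically. The following Python sketch is my own illustration, not VIS code: the record structure and function names are hypothetical, and the Dublin criteria are reduced to the single visa-based rule discussed above.

    # Illustrative sketch of the Dublin II rule described above (Art. 9);
    # all names and structures are hypothetical, not drawn from the VIS.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class VisaRecord:
        issuing_member_state: str  # e.g., "DE", "IT"
        valid_until: date

    def responsible_member_state(matched_visa: Optional[VisaRecord],
                                 receiving_state: str,
                                 today: date) -> str:
        # If the asylum seeker holds a valid Schengen visa, the issuing
        # Member State examines the claim; otherwise responsibility falls
        # back on the other Dublin criteria (simplified here to the state
        # where the request was lodged).
        if matched_visa is not None and matched_visa.valid_until >= today:
            return matched_visa.issuing_member_state
        return receiving_state

Trivial as the rule is, applying it presupposes exactly what was missing before the VIS: a queryable record of who issued which visa, retrievable even when the applicant presents no documents.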
Finally, before the deployment of the VIS, controls at the external ports of entry of the Member States were carried out through the visual inspection of the uniform Schengen visa stickers attached to travelers’ passports (see Official Journal of the European Communities, 2008d). Yet, the lack of digitized information on previously issued visas was hindering both the authentication of these documents by border guards, and the quick verification of the identities of visa holders (i.e. verification that the visa holders in question are indeed those who have previously applied for visas). This problem was linked to the production and circulation of fake or altered travel documents. In fact, the VIS impact assessment study executed by the European Policy Evaluation Consortium found that criminal networks had the capacity to produce high-quality counterfeit visas, create forged passports, and transfer genuine visas from one travel document to another (EPEC, 2004: 17). For example, there were cases of visas attached to passports stolen from consulates of the Member States and then forged by altering the photographs and personal details of their lawful holders. The authentication of travel documents and verification of their holders’ identities were possible by requesting information from those national authorities that issued the documents in question. However, this was time-consuming and was generating delays in the workflow of controls, especially at high-traffic border crossing points, such as airports.

What we see here is an emergent set of interlinked concerns about visa examination procedures, the assessment of asylum requests, and border controls, which feed back into considerations related to the internal security of the Member States. These concerns justified the introduction of the VIS and had a direct impact on its design because they formed the basis upon which the list of the functional requirements for the system was drafted. Indeed, the problems that I discussed in the previous pages were inscribed by the relevant Council’s working groups and committees in the document specifying what the VIS was expected to do after its deployment (Council of the European Union, 2001b, 2002).
More specifically, among the functional requirements for the system were: a) the collection, storage, processing (i.e. entering, deleting, and updating) and exchange of information on Schengen visas; b) the storage of digitized photographs and other biometric data of visa applicants; c) the possibility to conduct searches in the database of the system on the basis of these data (i.e. biometrics); d) the storage and exchange of scanned documents submitted by those applying for Schengen visas, such as copies of identity documents, passports, and recent bank statements; and e) the storage and exchange of documented information on EU residents issuing invitations to visa applicants. Furthermore, regarding the broader community of VIS end users, it was specified that access to the data exchanged through the system should be given to: a) authorities responsible for the examination of Schengen visa applications (i.e. national visa authorities and consulates); b) authorities performing controls at border checkpoints (i.e. land borders, seaports, and airports); c) authorities responsible for the examination of asylum requests; and d) authorities responsible for the internal security of the Member States (i.e. police and intelligence services).
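Read as a specification, these two lists already imply a data model and a set of access profiles. The Python sketch below renders them schematically for illustration only: the field and profile names are mine, not the official VIS schema, which was fixed only later in the legal and technical documents discussed below.

    # Hypothetical rendering of the functional requirements as a record
    # structure; field names are illustrative, not the official schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ApplicationFile:
        surname: str                        # a) alphanumeric visa data
        name: str
        date_of_birth: str
        application_status: str             # e.g., lodged, issued, refused
        photograph: bytes                   # b) biometric data
        fingerprint_templates: List[bytes]  # c) searchable biometrics
        scanned_documents: List[bytes] = field(default_factory=list)  # d)
        inviting_party: str = ""            # e) information on inviters

    # The four end-user communities map onto access profiles:
    ACCESS_PROFILES = {
        "visa_authorities": {"create", "read", "update", "delete"},
        "border_authorities": {"read"},
        "asylum_authorities": {"read"},
        "internal_security": {"read"},  # later made conditional (see below)
    }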
The list of the functional requirements for the VIS was the first important document drafted and circulated among the heterogeneous engineers involved in the design process of the system. To be more specific, this text was circulated by the Council to the EU Commission’s DG HOME (“Large-Scale Information Systems” Unit), which acted as the project coordinator orchestrating the overall design (and development) process of the VIS (Official Journal of the European Communities, 2004: 6). Once the Council prepared the list of the functional requirements, it was the responsibility of the Commission to initiate all the necessary procedures for the execution of the so-called VIS feasibility study. As one of my interviewees explained to me:

    When the idea about a new system emerges [in the Council], we [the Commission] typically conduct a study for which we use the term “feasibility study.” This should not be misunderstood. A feasibility study does not look at whether a system is feasible, but at how it is feasible. So, from the moment when we have a problem defined and a scope known [i.e. requirements for the system] we can go for a feasibility study. For a feasibility study you need at least to have your problem statement: the case that you want to resolve. The feasibility study needs in fact to produce an additional level of knowing: what are the possible technical solutions to the identified problems? (Participant 1, 2016)

As it will become clear in the next section, the VIS feasibility study was aimed at translating the functional requirements for the system – as these were identified by the relevant working groups in the Council – into possible design solutions detailing its technical-infrastructural characteristics. This was the next phase of the VIS design process.
Feasibility and technical-infrastructural specifications

The VIS feasibility study was completed in 2003 by a major IT consulting company (Trasys International) for the European Commission and, as I clarified before, it aimed to translate the functional requirements for the VIS into more technical details. Before saying anything else, I want to highlight that during the execution of the feasibility study the Commission was in regular contact with Trasys to provide any necessary clarifications about the system’s functionalities and the authorities that were allowed to use it (Participant 1, 2016). After modelling the different control processes that the VIS was expected to support (Trasys, 2003: Ch. 2), the study found that a crucial aspect of visa examination procedures, the assessment of asylum applications, and border controls was the verification of the identities of visa holders and applicants, and the identification of undocumented individuals.

Verification is the procedure during which, first, a visa applicant or holder claims an identity by presenting the necessary documentation to consular, migration, or border authorities and, second, these authorities verify that the individual in question is indeed the person who she or he claims to be. Technically, still according to the feasibility study, verification would not require much computing power (and thus expense), because it is performed through a regular one-to-one search in the database of the system. To verify the identity of an individual, the system checks an already existing file on the basis of alphanumeric data (e.g., name, surname, date of birth, number of visa sticker) inserted by an end user. These data then serve as search keys to unlock and retrieve the information contained in pre-existing data files stored in the VIS.

Identification is the process through which end users determine the identity of those individuals who do not carry any documentation with them. Contrary to verification, which is performed through a one-to-one search in the system, identification refers to a one-to-many kind of search. More specifically, identification requires the automatic comparison of unique personal traits, such as facial characteristics, fingerprints, and/or iris scans, with a population of biometric templates (i.e. numerical figures) already stored in the system. To be clear, biometric templates are produced through algorithmic calculations and analysis of the digitized characteristics of those individuals (i.e. visa applicants and holders) whose files have been previously created by the authorities responsible for issuing Schengen visas (i.e. consular and visa authorities). Technically, this functionality would require much computing power and the implementation of biometric matching capacities in the system. In practice, identification is done through the capturing of biometric traits on-site, for example, at consular posts and ports of entry (e.g., airports). In turn, these traits serve as search keys that unlock a data file if there is a match, meaning that the data algorithmically extracted from the captured personal traits correspond to a biometric template already stored in the system.
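The asymmetry between the two search modes can be made concrete in a few lines of code. The Python sketch below is a simplified illustration of my own: the similarity function, the threshold, and the in-memory dictionary stand in for the matching algorithms and database actually procured for the system, which are not specified here.

    # Simplified contrast between one-to-one verification and one-to-many
    # identification; similarity function and threshold are placeholders.
    from typing import Dict, Optional

    def similarity(probe: bytes, template: bytes) -> float:
        # Placeholder for the biometric matching algorithm.
        return 1.0 if probe == template else 0.0

    def verify(database: Dict[str, bytes], visa_sticker_no: str,
               live_probe: bytes, threshold: float = 0.8) -> bool:
        # One-to-one: alphanumeric data unlock a single pre-existing
        # file, against which the live traits are compared.
        stored = database.get(visa_sticker_no)
        return (stored is not None
                and similarity(live_probe, stored) >= threshold)

    def identify(database: Dict[str, bytes], live_probe: bytes,
                 threshold: float = 0.8) -> Optional[str]:
        # One-to-many: the probe is scored against every stored template.
        best_id, best_score = None, 0.0
        for file_id, template in database.items():
            score = similarity(live_probe, template)
            if score > best_score:
                best_id, best_score = file_id, score
        return best_id if best_score >= threshold else None

The difference in cost is visible in the structure itself: verification touches one record, while identification scans the entire template population, which is why the feasibility study weighed the two functionalities so differently in terms of computing power.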
Regarding the storing and processing of biometric identifiers (Trasys, 2003: Ch. 3), the feasibility study recommended the introduction of automatic fingerprint matching functionality in the VIS. The principal reason informing this suggestion was that the technologies available in the market at that time (i.e. fingerprint scanners and algorithms generating biometric templates) were considered mature enough to ensure the accurate identification of third-country nationals subject to the Schengen visa obligation. An alternative considered by the study was the digital capturing and subsequent algorithmic processing of iris scans. At that time, iris recognition technologies were considered very promising in terms of their accuracy – accuracy which was necessary for the reliable identification of individuals. However, one of the problems was that the products available in the market were neither mature enough nor already used in other large-scale IT systems in Europe or elsewhere. Another problem was that there were neither international standards governing the quality of iris scans, nor specifications of hardware and software necessary for their capturing and subsequent analysis. Finally, a third alternative was the use of high-quality facial images. Facial images were necessary for the visual verification of the identity claimed by a visa holder or applicant. However, regarding identification, the problem was that automatic facial recognition technologies processed facial features that were subject to change over time, which could cause discrepancies between live encoded images and those that were already stored in the system. Another problem was that the accuracy of facial recognition could be affected by different factors, such as the positioning of the head in front of a camera, as well as environmental conditions, like inappropriate lighting. For example, in enclosed spaces, such as airports, problems linked to lighting conditions could be easily addressed, but this was not the case in relatively open spaces, such as land borders, where controls through the VIS were expected to take place after its deployment (Participant 5, 2017; Participant 13, 2017).
Now we see how considerations about future control practices performed through the use of the system were translated into some of its technical features. Concerns about the verification and identification of third-country nationals subject to the Schengen visa obligation – concerns which feed into problematizations linked to border security and migration management – come to be inextricably linked with considerations about the appropriateness of different biometric identifiers, and the technologies necessary for their capturing and processing. At the same time, we see how the VIS, as a projected socio-technical assemblage, slowly emerges out of the coming together of human and non-human agents, such as biometric and alphanumeric data, scanning devices and algorithms, as well as end users, whose anticipated associations and interactions are inscribed in the text of the VIS feasibility study. When third-country nationals apply for visas, elements of human corporeality, like fingertips, mutate into digitized images and scans captured by devices, such as cameras and fingerprint scanners, before being algorithmically translated into biometric templates that, together with alphanumeric data, form computerized files stored in the database of the system (see Amoore and Hall, 2009; Epstein, 2008). These files are then sifted by a population of end users (e.g., visa, asylum, and border authorities) through the insertion of search keys to verify the identity of visa holders and applicants and identify undocumented individuals. All these human and non-human agents play an active role in the projected assemblage that emergently materializes within the design process of the VIS.

Yet, alphanumeric and biometric data are not just collected and processed by the system; they are also exchanged between end users. This could only be achieved through the design and subsequent development of an expanding ICT infrastructure connecting the different spaces where controls on Schengen visa applicants and holders were (and still are) performed. These spaces include consulates of the Member States in third countries where individuals apply for Schengen visas; central visa authorities located in each Member State; border checkpoints regulating international mobility; police and intelligence services’ headquarters where criminal and terrorist networks are investigated; and premises of migration authorities where asylum seekers lodge their requests. What interests me here is the architecture of the VIS, which allows for the interconnection of these dispersed spaces where controls on suspect mobilities are performed.

The feasibility study considered two alternative design solutions for the VIS architecture: a centralized and a decentralized one (Trasys, 2003: Ch. 5). In the centralized solution, all data are stored, processed, and distributed by a central system located somewhere in the Schengen area. Regarding information exchange, this means the following. First, national consular authorities in third countries collect and digitize data during the processing of visa applications. Second, data are transferred through a communications infrastructure to national interfaces located in each Member State. The national interfaces do not store any data, but rather work as communication hubs through which information flows from the national level (i.e. consulates and visa authorities) to the central one (i.e. central system). Third, once data are stored and processed at the central level, they are distributed across the Member States’ authorities which have access to the VIS through the national interfaces.
The central system stores fingerprints (biometric templates), facial images, alphanumeric data, and scanned documents. To clarify, the production and storing of biometric templates at the central level facilitates the implementation of biometric matching functionality. This, in turn, enables the identification of third-country nationals by end users performing searches through the on-site capturing of fingerprints. In the decentralized solution, data that require much storage capacity, such as photographs and scanned documents, are stored in national interfaces, while alphanumeric data and biometric templates are stored at the central level. In this case, the retrieval of information after consultation requests by end users becomes more complex because, for example, photographs needed for the accurate verification of third-country nationals are distributed across, and should be retrieved from, the interfaces located in each Member State. This is not the case in the centralized solution, where all data are stored in, and distributed by, a central system. Technically, the decentralized solution translates into further complexity in terms of hardware and software configurations at the national level. This increases the overall burden of system management because significant maintenance activities take place not only at the central level, but also at the premises of the Member States where the interfaces are located. In addition, this solution would render the development of the system more difficult due to technical complexities in the interfaces, generating potential problems that would have delayed its deployment. This was the principal reason why the feasibility study suggested the development of the system in a centralized fashion.

Indeed, the VIS now consists of a central system (CS-VIS) located in Strasbourg and, to ensure its resilience, a backup site in St Johann im Pongau (see Official Journal of the European Communities, 2008a). All the data stored in, and transmitted by, the CS-VIS are mirrored in the backup site. In the unlikely case of a major technical failure in the CS-VIS, the backup system can take over all the operations necessary to prevent any disruptions of services provided to end users (i.e. disruptions of information flows). Another crucial aspect of the VIS architecture is the Biometric Matching System (BMS). This was developed by the Bridge consortium (Accenture and Morpho with Bull) as part of the CS-VIS, and provides fingerprint matching services to the system. In addition to that, the CS-VIS and the backup system are connected through a pan-European communications infrastructure (sTESTA) to local national interfaces (LNIs) and backup LNIs (BLNIs), which were established in each Member State. These interfaces allow for the interconnection between the CS-VIS and national systems processing visa data. As in the case of the backup VIS, BLNIs were established for resilience purposes at the national level. Finally, each Member State has put in place its own communications infrastructure connecting the premises of end users (i.e. consulates, central visa authorities, border checkpoints, immigration offices, police agencies) to the national systems processing visa data, and the interfaces connecting national systems to sTESTA.
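This layered topology can be summarized in schematic form. The sketch below is my own shorthand for the components named in this section; it captures the reported architecture at the level of a diagram, not an operational configuration, and the dictionary keys and function are mine.

    # Schematic of the centralized VIS architecture as described above;
    # component names follow the text, the structure itself is illustrative.
    VIS_TOPOLOGY = {
        "central": {
            "CS-VIS": "Strasbourg",           # stores all data categories
            "BMS": "fingerprint matching",    # part of the CS-VIS
            "backup": "St Johann im Pongau",  # mirrors the CS-VIS
        },
        "network": "sTESTA",                  # pan-European infrastructure
        "per_member_state": {
            "LNI": "local national interface",  # hub, stores no data
            "BLNI": "backup LNI",               # national-level resilience
            "connected_premises": [
                "consulates", "central visa authorities",
                "border checkpoints", "immigration offices",
                "police agencies",
            ],
        },
    }

    def consultation_path(end_user: str) -> list:
        # Every consultation travels the same route to the central system
        # and back: there is no Member-State-to-Member-State shortcut.
        return [end_user, "national system", "LNI", "sTESTA", "CS-VIS"]

The design choice encoded here is the one the feasibility study argued for: by concentrating storage and matching centrally, the national interfaces stay thin, and complexity (and maintenance) accumulates in one place rather than in every national installation.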
The technical-infrastructural specifications of the system reveal its elaborate makeup. The VIS is an expanding socio-technical assemblage that emerges out of the coming together of end users, technical devices, and communications infrastructures that enact data flows across and beyond (as in the case of consular authorities) the Schengen area – in spaces where controls on mobile bodies required to carry Schengen visas are performed. The system allows for the interconnection of these geographically dispersed spaces, and the coordination of technologically mediated control practices performed there. Indeed, the governing of international mobility in the EU is, to a large extent, conditioned by the establishment of associations between the human and non-human (technological) agents that constitute the VIS, as well as other systems deployed for border security, migration management, and law enforcement purposes. These associations emerge within the progressively unfolding design process of the system, which is driven by a community of heterogeneous engineers composed of policy, technical, security and, as I will show in what follows, legal experts. It should be clear by now that the VIS design process involved, first, the identification of problems linked to the examination of Schengen visa applications, border controls, and the assessment of asylum requests. These problems were then translated into the functional requirements for the system by the relevant Council’s working groups and committees, before being retranslated into its technical-infrastructural specifications through the execution of the VIS feasibility study.
Legislation and controversies

Once the VIS feasibility study was finalized, the Commission prepared the legislative proposals defining the exact purpose and functionalities of the VIS, its end users, as well as the conditions and procedures for visa-related information sharing (European Commission, 2004, 2005). To be clear, the Commission drafted its proposals based on the functional requirements set by the Council and the results of the feasibility study. In other words, the functional requirements for the VIS, together with its technical-infrastructural aspects examined by the feasibility study, were translated by the Commission into the legislative proposals governing the development and use of the system. Nevertheless, it is important to highlight that the design of the VIS stabilized only after the conclusion of the negotiations on the adoption of the final legal texts. These negotiations brought together the Commission, the Parliament, and the Council – what is often described in the jargon of EU institutions as a “trilogue” – and started back in 2004, once the proposals had been drafted and circulated by the Commission.
The legislative package for the VIS consists of two key documents. The first one is a Regulation governing the use of the system by: a) consulates and national visa authorities for the examination of Schengen visa requests; b) border authorities performing controls at the external ports of entry of the Member States; c) immigration authorities performing checks inside the Member States’ territories to determine whether third-country nationals fulfil the conditions of their stay and/or residence; and d) authorities responsible for the assessment of asylum applications (Official Journal of the European Communities, 2008b). The second document is a Decision concerning the access to VIS data by police, Europol, and internal security agencies for the prevention, detection, and investigation of terrorist and serious crime offences (Official Journal of the European Communities, 2008c). For the sake of clarity, I will first describe the principal characteristics of the system as these are defined by the final legal texts, before discussing one of the main controversies that emerged during the trilogue.

To begin with, there are three categories of data stored, processed, and accessed through the VIS. First, there are alphanumeric data on Schengen visa applicants and holders (e.g., surnames, names, places and dates of birth), as well as on each specific application (e.g., status of application, authority processing it, type of visa requested). Depending on the stage of the visa request process, such as the lodging of applications (Official Journal of the European Communities, 2008b: Art. 8 and Art. 9) and the subsequent issuing (Art. 10) or refusal (Art. 12) of visas, there are different types of alphanumeric data recorded by the system. For example, when individuals apply for Schengen visas, designated end users in consulates record, among other things, the names of the applicants, their nationality, the reasons why they want to travel to a Member State, information on those inviting them, as well as administrative information, such as the place where, and date when, an application has been lodged. Accordingly, when visas are issued, end users add to each application file information like the number of visa stickers and the duration of authorized stays. Conversely, when visas are refused, the grounds of the refusals should be recorded. Visa requests are rejected in cases where, for instance, applicants fail to provide valid travel documents, do not have sufficient means to cover their costs during their stay, or are considered threats to the internal security of the Member States (European Commission, 2010: 81–2).

The second category of data processed by the VIS is fingerprints. Consular authorities capture, digitize, and store in the system the ten fingerprints of each applicant. This is done when third-country nationals lodge visa applications. To ensure the high quality of fingerprints, the Commission has defined the technical characteristics of scanners used by the Member States (Official Journal of the European Communities, 2006) and specified that fingerprints should be collected and processed on the basis of the ANSI/NIST-ITL 1–2000 standard (Official Journal of the European Communities, 2009). In addition to that, the Commission has circulated a software kit (USK4) to consular authorities which automatically checks the quality of fingerprints.

    Any fingerprint scanner must have certain technical specifications. On top of that, we have set a minimum standard for fingerprint image quality […]. This is why we have kits that check if the quality of fingerprints is good enough. If the quality is not good enough, the end user [i.e. consular authority] will have to follow the procedure again, again, and again. (Participant 5, 2016)

Practically, “again, again, and again” means cleaning the surfaces of scanning devices and requesting visa applicants to wipe their fingers, before attempting to re-capture and re-enroll fingerprints (European Commission, 2010: 45). Once fingerprints are captured, they are added to computerized application files and transferred to the central VIS. It is in the central system that fingerprints are then algorithmically processed and stored as biometric templates used for automatic fingerprint matching.
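This enrolment procedure amounts to a simple acceptance loop. The sketch below illustrates its structure only: the capture and scoring functions are hypothetical stand-ins for the scanner drivers and the Commission’s quality-checking kit, whose actual logic and thresholds are not reproduced here.

    # Illustrative enrolment loop; capture and scoring are simulated
    # stand-ins for the scanner driver and the quality-checking kit.
    import os
    import random

    def capture_fingerprints() -> bytes:
        # Placeholder for capturing the ten fingerprints as images
        # conforming to the ANSI/NIST-ITL 1-2000 standard.
        return os.urandom(16)

    def quality_score(image: bytes) -> float:
        # Placeholder for the automatic image-quality check.
        return random.random()

    def enroll(min_quality: float = 0.6, max_attempts: int = 3) -> bytes:
        for attempt in range(1, max_attempts + 1):
            image = capture_fingerprints()
            if quality_score(image) >= min_quality:
                # Accepted: added to the application file, sent to CS-VIS.
                return image
            # Otherwise: clean the scanner, ask the applicant to wipe
            # their fingers, and try "again, again, and again".
        raise RuntimeError("fingerprints could not be captured at the "
                           "required quality")

What the loop makes visible is that quality control is pushed to the very first point of contact with the applicant’s body: a template of poor quality would degrade every subsequent one-to-many search performed at the central level.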
The third category of data stored in the system is photographs (facial images) of those requesting Schengen visas. As in the case of fingerprints, photographs are taken when third-country nationals lodge their visa applications. These photographs are then stored in the VIS and integrated into the uniform Schengen visa stickers attached to travelers’ passports (Official Journal of the European Communities, 2008d). Photographs should be of high quality and taken according to guidelines (i.e. pose, lighting, color balance etc.) set by the ICAO (International Civil Aviation Organization) Document 9303 – an international standard that defines common specifications for machine-readable travel documents issued worldwide. As I explained before, automatic facial recognition functionality was not implemented in the VIS because the technology available on the market was not considered advanced enough. Yet, the optimal quality of stored photographs was necessary to allow for the potential introduction of facial recognition functionality at a later stage, after the development and roll-out of the system.

Finally, another category of data exchanged through the system is scanned documents submitted by individuals in support of their visa applications (Official Journal of the European Communities, 2008b: Art. 16). These can be copies of travel documents (i.e. passports and identity cards), as well as other (translated or not) documents requested by visa authorities to “assess the possible risk of illegal immigration and/or security risks” that applicants may embody (European Commission, 2010: 46). More specifically, among the documents submitted by third-country nationals in support of their visa applications are: a) those justifying the purpose of their travel (e.g., invitations by a firm or institution to attend a meeting); b) documents considered as proving the intention of applicants to leave the territory of a Member State before the expiry of their visas (e.g., a return ticket); and c) documents indicating that applicants can cover the costs related to their travels (e.g., recent bank statements) (European Commission, 2010: 46–53).
Copies of these documents are collected by consulates and visa authorities of, for example, Member State A. In cases where the authorities of Member State B need to consult these documents, they submit their requests electronically through the VIS. If the authorities of Member State A have copies of the requested documents, they transmit them to Member State B.

During the negotiations between the heterogeneous engineers (i.e. Commission’s services, Council’s working groups, Parliament’s committees) involved in the redrafting of the VIS legislation, one of the principal controversies that arose was linked to the access of police and internal security agencies to VIS data. Several national delegations in the Council’s working groups that were reviewing the legislative proposal (European Commission, 2005) found that the framing used by the Commission to describe this category of end users – “authorities responsible for internal security” – was problematic and restrictive (Council of the European Union, 2007: 2). The problem was that this denomination could potentially create legal constraints and prevent the Member States from freely designating the categories of end users allowed to consult VIS data. For example, the German delegation in the Council’s Police Cooperation Working Party emphasized that “the authorities entitled to access the VIS should be designated by the individual Member States,” and that “this explicitly includes the possibility of access by intelligence services” (Council of the European Union, 2006: 2). Similar arguments raised by other delegations resulted in amendments of the legislative proposals, which clarified that each Member State is free to designate the authorities that can access VIS data for investigations related to serious crime and terrorist offences. Indeed, the end users consulting VIS data for security-related purposes were described in the final legal text as “designated authorities,” instead of “authorities responsible for internal security.” This discursive change had an important impact on the design of the system: it was enough to allow intelligence services to consult information on third-country nationals subject to the Schengen visa obligation.

Even though the European Parliament did not have the legislative power to intervene and decisively shape the Decision allowing intelligence services, police agencies, and Europol to consult VIS data – because this was an area falling under intergovernmental decision-making procedures – it expressed a significant concern that was taken into consideration by the Commission and the Council. The principal argument put forward by the Parliament (European Parliament, 2006) can be summarized as follows. The VIS does not contain information on individuals convicted of a criminal or terrorist offence, or people for whom there are grounds to believe that they will potentially commit one in the future. In other words, the VIS is not a criminal database, but rather a system whose principal purpose is to support consular cooperation for the processing and examination of visa requests. This is why, according to the Parliament’s (2006: 19) report:
    it has to be clearly stated from the beginning that access by internal security agencies to Community databases must respect the purpose limitation principle and therefore access can be given only in exceptional circumstances and has to be accompanied by specific safeguards.

In a similar vein, the European Data Protection Supervisor (EDPS) emphasized that

    [a]s the purpose of the VIS is the improvement of the common visa policy, it should be noted that routine access by law enforcement authorities would not be in accordance with this purpose. (Official Journal of the European Communities, 2005: 17)

These concerns were inscribed in the final text of the Decision and produced changes in the overall design of the system. More specifically, it was decided that each Member State should create central access points (i.e. specific units within national authorities) to which operational units responsible for internal security (i.e. police and intelligence services) can submit electronic or written requests to access information stored in the VIS (Official Journal of the European Communities, 2008c: Art. 3 and Art. 4). These access points ensure the non-routine consultation of VIS data by police and intelligence services. They can be seen as interfaces providing access to data only if specific conditions are fulfilled (Art. 5), such as the existence of reasons to believe that the consultation is necessary for the prevention of a terrorist offense. Something similar also applies to the case of Europol (Art. 7), which had to establish specialized units within its organizational structure to access VIS data only after submitting consultation requests to Europol national units located in each Member State. In short, the consultation of information stored in the VIS for security-related purposes is carried out only by following clearly defined procedures and only on a case-by-case basis, which reflects the concerns raised by the Parliament and the EDPS about the unrestricted access of security agencies to the system.

However, the final text of the Decision specified that in “an exceptional case of urgency,” consultation requests should be processed by the central access points “immediately” (Art. 4) without, nevertheless, defining exactly what makes a case exceptional and urgent. In this case, the central access points provide access to data and verify only at a later stage whether all the conditions rendering the consultation necessary and proportionate have been met. This exception was added because national delegations in the Council’s working groups expressed concerns related to operational problems generated by the data consultation procedure:

    Delegations have explained that requesting such an authorization [to access VIS data] from another unit [central access point] would indeed create a merely bureaucratic hurdle without added value, leading to a waste of resources and in urgent cases losing valuable investigation time. (Council of the European Union, 2007: 4)
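The procedure that came out of this compromise has the shape of a guarded gateway with an ex-post audit path for urgent cases. The sketch below captures only that decision logic as I read it from the legal texts; the predicate names and the audit mechanism are my own simplifications, not the wording of the Decision.

    # Decision logic of a central access point as described above;
    # predicate names and the audit log are illustrative simplifications.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ConsultationRequest:
        requesting_unit: str
        justification: str
        necessary_for_specific_case: bool  # Art. 5-style conditions,
        serious_crime_or_terrorism: bool   # simplified to booleans
        urgent: bool = False

    def central_access_point(
            req: ConsultationRequest,
            audit_log: List[Tuple[str, ConsultationRequest]]) -> bool:
        conditions_met = (req.necessary_for_specific_case
                          and req.serious_crime_or_terrorism)
        if req.urgent:
            # Exceptional case of urgency: grant access immediately and
            # verify the conditions only after the fact.
            audit_log.append(("ex-post verification pending", req))
            return True
        # Non-routine, case-by-case consultation: verify first.
        audit_log.append(("verified before access", req))
        return conditions_met

The urgency branch is precisely where the Parliament’s purpose-limitation safeguard and the delegations’ operational concerns meet: access is immediate, and the check migrates from a precondition to an after-the-fact verification.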
What we see here is a controversy that emerged during the negotiations on, and the redrafting of, the VIS legislative proposals. The latter acted as intermediaries that circulated between the Commission, the Parliament, and the Council’s working groups concerned with the functional aspects of the system. This controversy produced changes in the proposals, which had an important impact on the design of the system because they specified the end user communities allowed to consult VIS data for security-related purposes, as well as the procedures that should be followed to gain data access authorization. It was only after the conclusion of the negotiations on the legislative proposals and the adoption of the final legal texts that the design characteristics of the VIS were stabilized. In addition, we saw that the VIS was designed as an assemblage constituted by a population of interacting human (end users) and non-human (e.g., fingerprint scanners, algorithms, data, interfaces) operating parts. This assemblage expands across and beyond the Schengen area, stitching together a nexus of spaces where controls on suspect mobilities are enacted, such as premises of visa, consular, immigration, asylum, police, and intelligence authorities, as well as border crossing points. Ultimately, it is these interconnections between the spaces where border control, migration management, and law enforcement practices are performed that contribute to the establishment of the EU Schengen area as a controlled space of circulations built on ICTs, such as the VIS.
Conclusion

How did the VIS emerge within the process of its design? What does the analytical focus on this process reveal about the agency inscribed in the VIS by its designers? These were the questions that I asked in the introduction of this chapter. To find answers, my methodological suggestion is that we need to embrace an analytical ethos attentive to the heterogeneity, dispersion, and projection that characterize the design process of the system. Heterogeneity means, first, that those who designed the VIS were concerned with not only its technicalities, but also the ways that these technicalities feed back into considerations related to border security, migration management, and the internal security of the Member States. Before the introduction of the VIS, the lack of readily available information on Schengen visas was generating inefficiencies in border controls, the examination of visa applications, and the assessment of asylum requests. These problematizations were translated into the design of the VIS through a progressively unfolding chain of translations. They were first translated into the functional requirements for the system; these were then translated into possible design solutions specifying its technical-infrastructural characteristics, before being translated, once again, into the legislation governing the future development and use of the VIS.
Second, heterogeneity means that those who designed the VIS were not only technoscientists, but also policy, security, and legal experts. These actors formed groups of what I described as heterogeneous engineers established within EU institutions, like the European Council, Commission and Parliament, as well as the contractor entrusted with the execution of the feasibility study. In turn, the formation of these groups rendered the design process of the VIS dispersed across space and time. To draw the overall picture of this process, I had to follow flows of design practices (discussions, negotiations, (re)draftings) from one space and time to another. For example, I had to zoom in on the work of the Council’s working groups, read the design documents that they produced (i.e. requirements for the VIS), redirect my focus onto the feasibility study, understand how this study translated the requirements for the system into technical-infrastructural specifications and, ultimately, pay attention to how the legal texts detailing the VIS data, end users, and functionalities were negotiated by the Commission, the Council, and the Parliament.

Third, heterogeneity means that not only humans, but also non-humans, were involved in the design process of the VIS. The non-humans that I am referring to are textual intermediaries (design documents) produced by, and circulated within, communities of heterogeneous engineers. The redrafting and circulation of texts, like the list of functional requirements, the feasibility study, and the legislative proposals, allowed heterogeneous engineers to communicate and coordinate their design work. What is more, these texts embodied projections of a future assemblage composed of end users, devices, algorithms, data, communications infrastructures and so on that, in their relationality, gave shape to the VIS. This means that the VIS acquired in the design phase of its lifecycle a future socio-technical agency, which was inscribed in design scripts that described the associations and interactions between its human and non-human operating parts. It is the acting out of these associations and interactions after the deployment of the VIS that renders the system a forceful agent-assemblage that facilitates the governing of international mobility in the EU through data gathering and sharing, and contributes to the establishment of the Schengen area as a controlled space of transnational flows. Together with other systems, the VIS forms the techno-infrastructural skeleton that connects the EU border security apparatus and enables it to function.

Ultimately, these observations suggest that technological innovation in the field of EU border security is not a field of “pure” technoscientific research. Rather, it is a mode of political intervention. The labor that goes into the design of information systems, as well as the agency inscribed in their functional and techno-infrastructural specifications, has a direct impact on the establishment and workings of the Schengen area. By designing, developing and deploying information systems, such as the VIS,
heterogeneous engineers (re)configure the very practice of border security, migration management, and law enforcement.
Acknowledgments I am deeply indebted to my PhD supervisor, Debbie Lisle, as well as to Marijn Hoijtink, Matthias Leese, William Walters and Michael Bourne for their valuable feedback and suggestions.
Funding

This work was supported by the Leverhulme Trust [grant number 2014097], as part of the ‘Leverhulme Interdisciplinary Network on Cybersecurity and Society (LINCS)’ research project. I am also grateful to the UK’s Research and Development Management Association for covering part of the travel expenses related to my fieldwork.
References

Akrich M (1992) The De-Scription of Technical Objects. In Bijker W E & Law J (eds.) Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge: MIT Press, 205–224.
Amoore L (2011) Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times. Theory, Culture & Society 28(6): 24–43.
Amoore L (2014) Security and the Incalculable. Security Dialogue 45(5): 423–439.
Amoore L and Hall A (2009) Taking People Apart: Digitized Dissection and the Body at the Border. Environment and Planning D: Society and Space 27(3): 444–464.
Balzacq T (2008) The Policy Tools of Securitization: Information Exchange, EU Foreign and Interior Policies. Journal of Common Market Studies 46(1): 75–100.
Bellanova R and González Fuster G (2013) Politics of Disappearance: Scanners and (Unobserved) Bodies as Mediators of Security Practices. International Political Sociology 7(2): 188–209.
Bennett J (2005) The Agency of Assemblages and the North American Blackout. Public Culture 17(3): 445–465.
Bigo D (2008) Globalized (In)Security: The Field and the Ban-Opticon. In Bigo D & Tsoukala A (eds.) Terror, Insecurity and Liberty: Illiberal Practices of Liberal Regimes after 9/11. London/New York: Routledge, 10–48.
Bourne M, Johnson H and Lisle D (2015) Laboratizing the Border: The Production, Translation and Anticipation of Security Technologies. Security Dialogue 46(4): 307–325.
Broeders D (2007) The New Digital Borders of Europe: EU Databases and the Surveillance of Irregular Migrants. International Sociology 22(1): 71–92.
Callon M, Law J and Rip A (1986) How to Study the Force of Science. In Callon M, Law J & Rip A (eds.) Mapping the Dynamics of Science and Technology: Sociology of Science in the Real World. Hampshire: The Macmillan Press, 3–15.
Ceyhan A (2008) Technologization of Security: Management of Uncertainty and Risk in the Age of Biometrics. Surveillance & Society 5(2): 102–123.
Council of the European Union (2001a) Database of Visas. Document Number ST 15577 2001 INIT. Available at http://data.consilium.europa.eu/doc/document/ST-15577-2001-INIT/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2001b) Guidelines for the Introduction of a “Common System for an Exchange of Visa Data.” Document Number ST 7309 2002 REV 3. Available at http://data.consilium.europa.eu/doc/document/ST-7309-2002-REV-3/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2001c) Justice, Home Affairs and Civil Protection. Document Number 12019/01. Available at http://data.consilium.europa.eu/doc/document/ST-12019-2001-INIT/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2001d) Presidency Conclusions European Council Meeting in Laeken. Document Number SN 300/1/01. Available at http://europa.eu/rapid/press-release_DOC-01-18_en.pdf (accessed 31 Oct 2018).
Council of the European Union (2001e) Security Implications of Visa Policy. Document Number ST 14523 2001 INIT. Available at http://data.consilium.europa.eu/doc/document/ST-14523-2001-INIT/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2002) Guidelines for the Introduction of a Common System for an Exchange of Visa Data. Document Number ST 9243 2002 INIT. Available at http://data.consilium.europa.eu/doc/document/ST-9243-2002-INIT/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2006) German Proposals for a Council Decision Concerning Access for Consultation of the Visa Information System (VIS). Document Number ST 12840 2006 INIT. Available at http://data.consilium.europa.eu/doc/document/ST-12840-2006-INIT/en/pdf (accessed 31 Oct 2018).
Council of the European Union (2007) Proposal for a Council Decision Concerning Access for Consultation of the Visa Information System (VIS). Document Number ST 5456 2007 REV 1. Available at http://data.consilium.europa.eu/doc/document/ST-5456-2007-REV-1/en/pdf (accessed 31 Oct 2018).
Dijstelbloem H and Broeders D (2015) Border Surveillance, Mobility Management and the Shaping of Non-Publics in Europe. European Journal of Social Theory 18(1): 21–38.
Duez D and Bellanova R (2016) The Making (Sense) of EUROSUR: How to Control the Sea Borders. In Bossong R & Carrapico H (eds.) EU Borders and Shifting Internal Security: Technology, Externalization and Accountability. Cham/Heidelberg/New York/Dordrecht/London: Springer, 23–44.
EPEC (2004) Study for the Extended Impact Assessment of the Visa Information System, December 2004. Available at www.statewatch.org/news/2005/jan/vis-com-835-study.pdf (accessed 31 Oct 2018).
Epstein C (2008) Embodying Risk: Using Biometrics to Protect the Borders. In Amoore L & de Goede M (eds.) Risk and the War on Terror. London/New York: Routledge, 178–193.
European Commission (2004) COM(2004) 835 Final. Proposal for a Regulation of the European Parliament and of the Council Concerning the Visa Information System (VIS) and the Exchange of Data between Member States on Short-Stay Visas. 28 December.
European Commission (2005) COM(2005) 600 Final. Proposal for a Council Decision Concerning Access for Consultation of the Visa Information System (VIS) by the Authorities of Member States Responsible for Internal Security and by Europol for
the Purposes of the Prevention, Detection and Investigation of Terrorist Offences and of Other Serious Criminal Offences.
European Commission (2010) Handbook for the Processing of Visa Applications and the Modification of Issued Visas. Commission Decision C(2010) 1620 final. Available at https://ec.europa.eu/home-affairs/sites/homeaffairs/files/policies/borders/docs/c_2010_1620_en.pdf (accessed 31 Oct 2018).
European Parliament (2006) Draft Report on the Proposal Concerning Access for Consultation of the Visa Information System (VIS) by the Authorities of Member States Responsible for Internal Security and by Europol for the Purposes of the Prevention, Detection and Investigation of Terrorist Offences and of Other Serious Criminal Offences. 2005/0232 (CNS).
Glouftsios G (2018) Governing Circulation Through Technology Within EU Border Security Practice-Networks. Mobilities 13(2): 185–199.
Hall A (2017) Decisions at the Data Border: Discretion, Discernment and Security. Security Dialogue 48(6): 488–504.
Jeandesboz J (2016) Smartening Border Security in the European Union: An Associational Inquiry. Security Dialogue 47(4): 292–309.
Jeandesboz J (2017) European Border Policing: EUROSUR, Knowledge, Calculation. Global Crime 18(3): 256–285.
Latour B (1983) Give Me a Laboratory and I Will Raise the World. In Knorr-Cetina K & Mulkay M (eds.) Science Observed: Perspectives on the Social Study of Science. London: Sage, 141–170.
Latour B (2004) Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry 30(2): 225–248.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Latour B and Woolgar S (1979) Laboratory Life: The Social Construction of Scientific Facts. Beverly Hills: Sage.
Law J (1987) On the Social Explanation of Technical Change: The Case of the Portuguese Maritime Expansion. Technology and Culture 28(2): 227–252.
Leese M (2014) The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-Discriminatory Safeguards in the European Union. Security Dialogue 45(5): 494–511.
Leese M (2016) Exploring the Security/Facilitation Nexus: Foucault at the ‘Smart’ Border. Global Society 30(3): 412–429.
Leese M (2018) Standardizing Security: The Business Case Politics of Borders. Mobilities 13(2): 261–275.
Lisle D (2017) Failing Worse? Science, Security and the Birth of a Border Technology. European Journal of International Relations. Online first: 10.1177/1354066117738854.
Lynch M (1985) Art and Artifact in Laboratory Science: A Study of Shop Work and Shop Talk in a Research Laboratory. London: Routledge.
Madsen A K, Flyverbom M, Hilbert M and Ruppert E (2016) Big Data: Issues for an International Political Sociology of Data Practices. International Political Sociology 10(3): 275–296.
Matzner T (2016) The Model Gap: Cognitive Systems in Security Applications and Their Ethical Implications. AI & Society 31(1): 95–102.
Official Journal of the European Communities (2003) Legislation Number L50/1.
Official Journal of the European Communities (2004) Legislation Number L213/5.
Official Journal of the European Communities (2005) Information and Notices Number C181/13.
Official Journal of the European Communities (2006) Legislation Number L267/41.
Official Journal of the European Communities (2008a) Legislation Number L194/3.
Official Journal of the European Communities (2008b) Legislation Number L218/60.
Official Journal of the European Communities (2008c) Legislation Number L218/129.
Official Journal of the European Communities (2008d) Legislation Number L267/41.
Official Journal of the European Communities (2009) Legislation Number L270/14.
Pallister-Wilkins P (2016) How Walls Do Work: Security Barriers as Devices of Interruption and Data Capture. Security Dialogue 47(2): 151–164.
Salter M B and Mutlu C E (2012) Psychoanalytic Theory and Border Security. European Journal of Social Theory 15(2): 179–195.
Tazzioli M and Walters W (2016) The Sight of Migration: Governmentality, Visibility and Europe’s Contested Borders. Global Society 30(3): 445–464.
Trasys (2003) Visa Information System. Final Report.
Valkenburg G and van der Ploeg I (2015) Materialities between Security and Privacy: A Constructivist Account of Airport Security Scanners. Security Dialogue 46(4): 326–344.
Vukov T and Sheller M (2013) Border Work: Surveillant Assemblages, Virtual Fences, and Tactical Counter-Media. Social Semiotics 23(2): 225–241.
Walters W (2017) Live Governance, Borders, and the Time–Space of the Situation: EUROSUR and the Genealogy of Bordering in Europe. Comparative European Politics 15(5): 794–817.
9
Technology, agency, critique
An interview with Claudia Aradau

Claudia Aradau, Marijn Hoijtink, & Matthias Leese
MARIJN HOIJTINK (MH): The idea for this book was to take up developments in Science and Technology Studies (STS) and New Materialism and the turn towards these literatures in International Relations (IR), and to study the question of agency more specifically with regard to IR and technology. Claudia, you have been at the forefront of some of these discussions. In particular, your 2010 article on critical infrastructure protection in Security Dialogue is often referenced within the debates (Aradau, 2010). We wanted to take a look back and see what we have gained from these discussions, and we would be interested to know how you would evaluate the ways in which STS and New Materialism approaches have inspired our work in IR.

CLAUDIA ARADAU (CA): Thank you very much for your generous words about the article. I see three main ways in which the literatures on STS and New Materialism are contributing to IR. The first can be seen as an integral part of the study of practices, and particularly of the study of human/non-human assemblages. We have different vocabularies through which to analyze these assemblages, and there is a long-standing debate in IR about the implications of these types of analysis. One prevalent criticism concerns methodological assumptions and premises: what it means to be speaking of assemblages as ontologically “flat,” the question of symmetry, and the politics that is implied when one talks about human/non-human assemblages. But I think that the analytical attention to human/non-human assemblages has led to very productive interventions for IR despite these objections. It has not only highlighted different modes of materiality, technologies, and non-humans, but it has also unpacked the relational practices between humans and non-humans. Moreover, I think, it has done away with very limiting debates in IR about the “micro” and the “macro.” We have had this debate for some time, but we now have much more productive ways of analyzing transversal relations and understanding transversal modes of connecting in terms of international practices.
Secondly, and I think this is one of the reasons your book is important, another contribution was the reconsideration of performativity and agency. Here, both questions of distributed and entangled agency have been really important, and the chapters in the book take up these questions and discuss what they mean both methodologically and politically: Georgios Glouftsios’s chapter about the Visa Information System (VIS), for example, as he discusses modes of distributed agency; or Philipp Olbrich’s chapter on satellite imagery. Agency is not new to IR. There has been a lot of debate about it in post-structuralist, feminist, and postcolonialist literature. But I think the idea of entanglement and of different kinds of agency can really help us push forward some of these boundaries. So we can build on these developments.

The third contribution concerns the politics of technology, of objects, of devices. And an acknowledgment that debates about the liberal subject, about liberal governmentality in IR need to be understood as co-constitutive, or to use Sheila Jasanoff’s (2004) terminology that several of the authors in this book invoke, as “co-produced” by objects, technologies, and all of these mundane devices, social, and cultural practices. I think this is really important for developing wider vocabularies for politics, but also in terms of understanding the politics of technology. It does not mean that there would not be limitations to this enlarged vocabulary of politics, and again, there have been a lot of debates in IR about the limits of STS – and also some discussions in the book touch upon this.

As you kindly mentioned, my article from 2010 engages the work of Karen Barad and was really interested in the debates within STS as well. STS is not a homogeneous field, and often we go back to Actor-Network Theory, which is one of its forms. I think it is important to actually be much more aware of the debates and disagreements within STS. The chapter by Katja Lindskov Jacobsen and Linda Monsees engages with Jasanoff’s work, which is a particular strand in STS that is not based on Latour and Callon, but develops a critique of their ANT approach. And your own chapter, Matthias, uses Lucy Suchman’s (2007, 2012) work on “configuration” to draw attention to the ways in which materialities and imaginaries of technology are joined together. It is also necessary to engage much more with feminist and post-colonial approaches in STS, as there is a very rich body of work on technology, for example on reproductive technologies, or ultrasound technologies, their circulations, and political effects. And this brings back these distributed and entangled modes of agency in relation to bodies, the production of knowledge, the politics of (de)humanization, and so on. This work is deeply political; there is nothing “flat” about these assemblages. “Flatness” is neither a methodological precaution nor an assumption of the research.

MH: More recent work at the intersection of feminist and post-colonial studies and STS has taken up questions of what we have been doing away
with: the micro/macro debate, differences between the global and the local (e.g., Pollock and Subramaniam, 2016). And this literature also looks at the global circulation of technologies, how technologies emerge, how they become appropriated. This also seems an important thing that these perspectives offer us.

CA: The circulation of technologies is an important aspect, indeed. And we have a lot of work in IR, and more specifically in critical security studies, that has addressed this. Circulations have been particularly understood as subsumed by hierarchical power relations, so that we see circulation of technologies from the North to the South, but also vice versa: technologies developed in “laboratories” in the South find their way back to the North. The topos of “laboratories of security” has been important to challenge micro/macro distinctions and implicates a turn towards the analysis of global/local encounters, translations, and circulations of (in)security (e.g., Bourne et al., 2015). STS can be useful to unpack how “laboratories of security” work in practice and how they differ from the scientific laboratories and experimental science that STS has analyzed, given also that we cannot just follow the techno-scientists, as Glouftsios’s chapter shows. There is also a question about how we conceptualize modes of circulation, and how we address these modes of circulation in relation to global power relations. Marieke de Goede (2018: 24) has proposed to approach this through the epistemic implications of the “security chain,” focusing on how the modes of circulation of security across public-private institutions entail processes of “sequencing, movement, and referral in the production of security judgements.”

Another important aspect, and this is sometimes missing in STS work, is the production of violence, the production of insecurity, the modes in which (in)security is enacted. I think there is an understanding about the effects that these practices and technologies have, the modes of insidious violence, the modes of differential exclusion from these practices. It is important to keep these questions as something that IR brings to this conversation.

A final element is connected to the question of politics and how we understand politics. Drawing on some of the critical resources in IR, I think we should look into the politics of networks, particularly as deployed in Actor-Network Theory (ANT). An approach that highlights controversies, struggles, frictions, and disputes seems to me more apt to grasp the politics of local and global, circulation and technology critically (see Aradau, 2018; Hönke and Cuesta-Fernández, 2018). STS is grappling with these questions, but I think the experiences from critical IR and from critical security studies are really productive here. We should not lose that.

MATTHIAS LEESE (ML): I would like to go back to your point about the risks of homogenizing STS by reducing it to ANT and excluding other work such
as feminist and post-colonial strands. On the other hand, we already have numerous concepts and vocabularies that include the likes of dispositif, assemblage, intra-action, co-production, or vibrancy – and you have in your own work, together with Anthony Amicelle and Julien Jeandesboz, coined the notion of the “security device” (Amicelle et al., 2015). And all these concepts come from different philosophical traditions and disciplinary backgrounds. How can we grapple with that multitude and heterogeneity of vocabulary?

CA: My first reaction would be to say that I don’t worry about having too many concepts. I would worry about having too few concepts. And this relates to where IR started from. If you think about the dangers of what is called “parsimonious theory,” we need more rather than fewer concepts to grapple with the complexity and heterogeneity of the world. Parsimonious theory tries to discipline analytical attention by reducing the complexity of the world to alleged big matters and claiming that messiness is untenable, ungraspable, and methodologically invalid. But I think your point is important, because there is also a question about heterogeneity becoming disorientation or confusion, when we have a massive proliferation of concepts. Still, different concepts do different work, and this is why a multiplicity of concepts is important for me. Conceptual multiplicity cannot be legislated before the research, but is enacted through the process of research. As you say, concepts come from different debates, and if you look back at this book as a whole, authors mobilize different concepts in relation to different debates in order to be able to do certain things. For instance, co-production, for Jacobsen and Monsees, is important because it enables them to relate their analysis back to questions of social order. They want to focus on the micro-practices and politics of social order, and they can do so through the concept of co-production and through Jasanoff’s work. However, if you are starting from an understanding of controversy, this does something different than a concept like social order; the analysis will be very different. Even as Jasanoff is interested in the de- and re-stabilization of social order, she distinguishes her approach from the study of technoscientific controversies or boundary objects, as she focuses on tracing the tacit assumptions, understandings, and cultural and national differences that are constitutive of these moments of de-stabilization (Jasanoff, 2012). She is also interested in controversies in society rather than just the laboratory. That is why it is crucial not to de-historicize these concepts, and this is where the debates within STS are relevant. This is also the case in engaging STS and IR, in terms of how, for example, co-production as a concept is similar to, but not quite the same as, enactment or performativity. We need to situate the concepts we use in their socio-historical contexts of emergence, but also to follow them through the debates and circulations that redeploy and change them. There is a particular intervention, and this is about what kind of work these concepts
allow us to do. Why do we invent concepts? We invent concepts to try to make sense of the heterogeneity of practices. STS gives us a vocabulary that we can use productively in relation to the vocabularies that we already have, and end this myth of parsimony that IR (and the social sciences) have been reproducing for a long time. There are also other vocabularies in critical IR – and vocabularies that we need to invent ourselves, not as individual scholars who reproduce the “distinctions” of the academic field, but as collaborative endeavors to engage critically with the problems of insecurity, violence, and global politics that we want to understand and confront.

ML: Others have also pointed to the presumed incompatibility of different levels of analysis between STS and IR. STS comes from a sociological and ethnographic tradition, where researchers have paid close attention to very specific and local practices, and situated networks of actors. IR, on the other hand, is a discipline that is still preoccupied with the notion of the international and the quite abstract question of change versus continuity. So it could be argued that STS and IR are not really compatible when it comes to international practices. How can we try to bridge this gap?

CA: Let me turn back to something that interested me a few years ago, as I hope it is relevant to the question. In the early days of the Cold War, there was a big debate about avoiding accidental war between the two great powers. And one of the key responses was to have a hotline between Moscow and Washington. And this hotline is key. You can conduct a whole analysis – I tried to do this a couple of years ago, and there is in fact a lot of literature on this – about putting the hotline in practice and its political implications: what does it take to make the hotline as an assemblage work? You see, a hotline is a quite banal thing. But it brings into being a very particular understanding of global war, of nuclear warfare, a particular understanding of what it means to have relations between global powers, of who gets to be connected through the hotline and who doesn’t (Aradau, 2016). Its banality also means that mundane actions which appear to be at a distance from international politics, such as a Finnish farmer cutting the hotline while ploughing the land, become constitutive of the international.

So we can turn your question around to some extent, and ask: how do we study the enactment of the international? The international is enacted in many different sites. It is not a given, but it is constituted through practices. Concepts from STS can be productively mobilized to study and understand the enactment of the international through the production of particular discourses, institutions, practices, routines, but also objects such as hotlines, railways, infrastructures, logistics, weapons, and also expertise. This is why I like the “transversal,” as it allows us to understand how the international is enacted and re-enacted. Transversal is not transnational but that which connects
by cutting across in more or less unexpected ways. There are heterogeneous enactments and re-enactments, but also controversies about what the international is and how, and where, it comes into being. Or controversies about, and struggles over, what counts as the international and what counts as the global. These are often subject to controversies, and objects and technologies are part of these controversies and contestations. Olbrich, in his chapter, for example, looks at the imagination and enactment of the global through technology and the particular production of images of the globe. Or we could also analyze technologies that enact the “world.” What does it mean to look at the world? Again, there is a history of that: world futures, the Club of Rome, and so on. So there is a whole history of attempting to produce the world. And the same goes for the international. There is a history of doing that through modes of inscription, through different practices, and so on.

MH: You mentioned that we should keep our IR understanding of politics: what politics is about, what it does, and also what it means to do critical work in IR and critical security studies. In STS, many scholars would follow a Latourian approach to politics and assume that politics is what the actors within a network define as politics. And their forms of political engagement or political critique would be based on observing the relationships within the network, and subsequently engaging the actors and speaking with them about their observations in a very detailed and nuanced manner. How would that work for us, when we study phenomena such as exclusion or violence, also with regard to possibly holding human/non-human assemblages accountable for these things? In other words, is our understanding of politics in IR compatible with the understanding of politics in STS?

CA: I think your question about politics is closely related to debates about critique and how to locate critique within entangled relations between a multitude of “actants.” It is also important to acknowledge that there is not a shared concept of politics, either in IR as a whole, or in critical IR. So again, if we start from the traditional debate about politics and what liberal or realist understandings of politics entail – or also in terms of post-structuralist understandings of politics – you have a lot of variation in how politics is understood: from Foucault’s politics as war to a host of other post-structural understandings of politics as contingency. Contingency is a concept that I have noticed across several of the chapters in the book. There is politics within contingency, but also of contingency. And then you have debates about politics and resistance, politics as resistance, politics as contestation, politics as controversy. This is connected with an understanding of contingency and the possibilities and indeterminacies of social practices that open up the possibility of contesting politics. Therefore, there are a lot of connections that we can make, while again attending to the heterogeneity of different understandings of politics. We need to pay close attention to the sites of
political contestation and how politics and critique are enacted in controversies within and across those sites. But we need to be careful not to remain just within the understanding of politics that actors have within a specific situation. To me, this means to take seriously what STS does, but also to think across different sites, to not remain confined to a social situation. And I think this is where critical IR is interesting. Because we “move” a lot, producing understandings of transversality and circulation, an understanding of the sedimentation and transformation of discourses, an understanding of global power relations. We move across different sites, and throughout these sites the understandings of politics shift as well. We also need to attend to the understandings of politics that we have in IR, even as we find some of them problematic. To a certain extent, we are also actors, within and across this trans-academic field. So I think it is important to work with this, and to work across, work at the interstices, work in-between. This means that we can step beyond the confines of a situation, and there are openings, and under-determinacies, and contingencies, and failures in situations. Practices are contested, (re)deployed, and (re)appropriated. And all this has been very productive for IR and for STS. But I think we can move beyond, and sometimes we need to move beyond, the understandings of politics in a specific situation. This is the methodological view of proximity and distance (Bueger and Mireanu, 2015; Coleman and Hughes, 2015). In that sense, we cannot just erase or disavow the whole history of thinking about politics that we have been trained in, that we have learned, that we work with. We can bring that to particular situations, while at the same time being more attentive to absences and silences. Or to that which might be non-perceptible in a given situation – to use Jacques Rancière’s (1999) terms, to the distribution of the sensible and the political moment of redistributing the sensible. Working upon and contesting this distribution of the sensible, I think there we can have something to say without assuming that we are an equal actor, or have some kind of similar position, but still bring our history of thinking about politics into particular situations.

MH: This reminds me of a story about Bruno Latour giving a lecture in Taiwan and reminding his audience of the importance of symmetry, relationality, and contingency in our research (Law and Lin, 2017: 214–5). Latour explained that we can only make sense of the world if we adopt methods that are themselves non-coherent and messy, but he was challenged by his audience, who told him that messiness and the struggle against a grand narrative were not at all productive with regard to the political situation in Taiwan. I guess what this example shows is that when STS prescribes that all knowledge is situated, this cannot itself become a decontextualized truth. John Law and Wen-yuan Lin, who recount this story about Latour’s lecture, then go on to call for extending the principle of symmetry further, treating non-Western and STS terms of analysis
symmetrically, without privileging the latter. But before that, perhaps what we need is to understand the political work that our own concepts, such as messiness and contingency, do.

CA: That is where the question of politics becomes a question of problematizing contingency and messiness. Why and how is something problematized in a given context, and on the basis of what kind of understandings of politics? I cannot speak to the debates about politics in Taiwan or what was implied in the question to Latour, as this is not something I am familiar with, but I can see that contingency can be problematized differently in specific political situations. For instance, problematizing contingency can open political space against consensual politics or dominant representations which silence or exclude other voices – or, to put it differently, which lead to epistemic injustice by rendering some types of knowledge and knowledge subjects as lacking credibility. This is what critical security studies and critical IR more generally – and most explicitly feminist work – have done. But perhaps we can also say that, to a certain extent, we are now faced with a different situation, where contingency seems to render political judgments indefinitely changeable, to the extent that the language of “post-truth” has become increasingly used. Contingency is here rendered as “anything goes” rather than a socio-historical conceptualization of relations. Contingency does not mean that a situation or relations are indeterminate, rather that they are not fully determined. Therefore, contingency is mobilized to create confusion, doubt, and uncertainty, as the literature on agnotology has shown (Proctor and Schiebinger, 2008). We need to develop transversal modes of analysis, which situate contingency as a socio-historical concept and practice and also move across sites of controversy, rather than just having one understanding of what the politics of contingency is. Agnotology seems to me a more apt toolbox for diagnosing and intervening in the present than the problematic coinage of “post-truth.”

ML: I’d like to pick up on your discussion of critique. You mentioned that there is a debate, in critical IR and critical security studies, about what it means to be critical, and what the implications of a critical stance would need to be. For you, what does it mean to be critical in relation to technology, and how can we accommodate the normative or the ethical within a critical stance towards technology and the international?

CA: That is a difficult question. Let me try to split it into two parts. The first one is how we can think about being critical in relation to technology and technological developments. And then I’ll address the question of ethics. I have thought about the question of critique as always a situated one. Critique in relation to technology needs to be situated and specified: what kind of technology are we speaking about? It needs to come after an analysis of power, controversy, and agency. So, for me, critique does not come first, but it builds upon an understanding of power relations, of the modes of differential exclusion, of the modes of silencing, of the
struggles and controversies that take place, of the controversies that can mobilize objects and subjects, the technologies but also the subjects that are involved. We need to try to understand how these things produce forms of differential exclusion, but also distributions of humanity and inhumanity, the construction of categories of some people as less-than-human. And it is on this understanding of inequality, differential exclusions, dehumanizations, and injustices that critique builds. It seems to me that in critical security studies, and I take the liberty of including feminist and post-colonial approaches here as well, this is very important as a mode of analysis and as an understanding of critique. It is, I think, a quite specific understanding of critique, quite different from some of the analyses in ANT, for example. This is how I would specify critique. And I would take critique in this “negative” sense: that it builds upon an understanding of what produces differences and inequalities, power asymmetries, violence, and injustice. There has been a move in new materialist work to develop modes of “affirmative” critique, which are situated in relation to “negative” critique. Yet, this is not a positively/negatively charged continuum. To me, it is a question of situating critique in relation to the production of injustice, inequality, domination, and so on. That is why I feel ambiguous about formulations of “post-critique” and “a-critique,” which take the “negative” critique I have outlined as somehow violent itself (e.g., Anker and Felski, 2017). And critique then enters as a mode of reasoning. Critique is not the same as politics, but I connect critique and politics, because I think critique can be a site of politics. It builds upon political struggle, but it can itself be mobilized, and anticipate political struggle. For me, what is really important is that if you take this analysis of the different modes of relations and the effect that technologies have in the classification of humans and the creation of categories of being human or non-human, then politics is about contesting that. Therefore, in my work, I have spoken about politics rather than ethics. We have seen debates that have attempted to formulate different ethical approaches and different normative approaches, and we can talk about that. But I wonder whether ethics – particularly as it is discussed in relation to technology and emerging digital technologies – risks eschewing what politics is about: engaging, coming to grips, entering, working within the interstices and controversies. To some extent, I worry that ethics does not allow for that messiness. Formulations of ethics in relation to technology are particularly problematic in that sense, as they assume that ethics can be “designed in” to the technology or that somehow ethics is a matter of rules. An ethics which inscribes particular universal rights in the technology design not only decontextualizes the subjects of technologies, but also imagines a universal and non-situated subject of technology. Here, ethics offers solutions and aims for sameness across all deployments of technology. Yet, I would argue that what we need are not
solutions but new problematizations. We need a political sociology of contestations: of controversies, struggles, resistances, disagreements, and disputes.

ML: If you would allow me to relate this back to your own work once more: together with Tobias Blanke, and with regard to Big Data and security, you write that “what matters in the Big Data-security assemblage is how the relation between humans and computers gains content, and how the assembling of humans and computers is both an association and a division of labour” (Aradau and Blanke, 2015: 5). How can we understand this division of labor between machines on the one hand, and humans on the other hand, particularly if we think about the violent and exclusionary effects that you spoke about earlier?

CA: Your question refers to two terms. The first one is labor. A lot of the debates about the role of digital technologies, Big Data, and algorithms are about the production of value, about the labor that produces this value, and about capitalism. You can see this, for example, in Malcolm Campbell-Verduyn’s chapter on blockchain technology, where you see the re-working of the blockchain within dominant systems of finance and liberal capitalism. The concept of labor is really important because we need to analyze the effects of digital technologies and computers in relation to the production of value. Alex Edney-Browne’s chapter also shows the effects of labor – long shifts, the strain of fatigued vision, and multi-tasking – for drone pilots, and the fatal consequences that the human-machine distribution of vision can entail for those who become targets of violence. I think it is here that we need to connect the work on security technologies, devices, logistics, or infrastructures done in CSS or STS with feminist scholarship. For instance, in her work on gestational surrogacy, Kalindi Vora starts with the clinic as a sort of laboratory that disciplines women through technologies of surrogacy, legal contracts, and training. She is particularly interested in how women are guided into “a new understanding of their bodies without their full knowledge of the technologies involved to train them into a previously unimagined relationship (or lack of relationship) to the child they will bear” (Vora, 2015: 109). Vora’s analysis is exemplary in connecting technology, (not) knowing, and gendered and racialized embodiment.

The other element that is important for both politics and critique is how relations get specified. Often, when we talk about agency or assemblages, we talk about relational approaches. Again, this is something that several authors discuss in the book. But the relational is a vague concept. And while it is productive to have many different concepts, these relationships need to be specified. In the article that you referred to, we take seriously a criticism that the geographer John Allen (2011) raised about the uses of the concept of assemblage, particularly towards assemblages being used too descriptively and with a focus on their elements. So his argument is that we need to work through the content of the relations
within the assemblage, and that we need to specify these relations. That is what we tried to do: specifying and historicizing relations. But how do we do that? We specify them within the controversies that take place around questions such as what Big Data is, how it works, what it means in practice – an approach that Mareile Kaufmann also takes in her chapter in this volume. We trace a series of controversies, starting from the Snowden revelations and including judicial litigation and public scandals. And then you can see, if you follow controversies, how violence is problematized in relation to Big Data. To go back to Vora’s analysis, she also shows how an analysis of relations cannot be limited to the surrogacy clinic but also needs to be placed within both a historical context of “the Indian middle class and rural women” and a global one of “the transnational reach of directors, and their ability to command technology and resources at the global level” (Vora, 2015: 114).

For us, the “association and division of labour between humans and computers” was helpful to orient our approach. There is violence in how Amazon Mechanical Turk, for example, works, as low-paid workers in the South are given “tasks” that supplement the work of computers. The division of labor is also international, with low-paid workers and a lack of labor rights. There is violence in how anomalies are produced, as anomaly detection has become the “holy grail” for detecting unknowns in the mass of data. There are different modes of inclusive exclusion, of classification, and of hierarchization, which also embody violence. But at the same time, specifying the content – and that takes us also to your own chapter in the book, Matthias – also prevents us from falling into the trap of all kinds of dystopian visions of machines and automation taking over, this discourse around Artificial Intelligence and a world run by robots, which I think actually undermines critical discourse.

MH: I think what we find in many chapters of the book is that they look at how those relationships take shape, but also at what the effects of those relationships and constellations are. So while I think it is very important to specify relations and their content, it is also important to study what the effects of such relationships are: for example, in terms of how North Korea is depicted (Olbrich), how the blockchain is re-appropriated within financial regulation (Campbell-Verduyn), or how practices of warfare are transformed through the visual regime of the drone (Edney-Browne). One of the premises of this book, in this sense, is to show what the effects of those constellations are.

CA: A critical analysis of technology emerges through the diagnosis of effects, particularly as we understand technologies as socio-technical assemblages, and through unpacking the specific relations through which agency emerges. Several of the chapters take this approach, but push it in different directions. Take for instance Olbrich’s chapter about satellite imagery and North Korea. What is important here is that these effects are mobilized in the production of evidence. And this is again a key element if we think
about the international politics of knowledge and about how human rights abuses and other forms of violence can be known or not. The production of evidence, what counts as evidence, is key. But Olbrich shows how the production of evidence is asymmetrical, and I think he has an important point there, also methodologically, about the question of symmetry/asymmetry, which has often been used in IR to criticize STS. While for the critics symmetry appears to eschew the asymmetries of power, Olbrich’s chapter points out that it does not mean that “human beings, things, institutions and concepts matter in the same way.” In my reading, I would say that there is a methodological precaution of not accepting asymmetries as given a priori. And in Campbell-Verduyn’s chapter on blockchain you have the question of authority in global financial regulation. Again, this is the effect of asymmetries, and it is produced through specific relations. So we really need to focus on the production of asymmetric relations of power and authority, but also on asymmetric forms of knowledge. And we can extend this to the production of what counts as evidence, what counts as truth, who gets to speak, who gets to be an actor in particular situations, who gets to be human, who gets to be an expert, and so on. All these questions are underpinned by particular relations, but also by the equipment and instruments that these actors can have and appropriate.

ML: There is quite a debate in terms of how to study these relations empirically, specifically when it comes to technology. Most technologies are either framed as security technologies and therefore subject to a certain level of secrecy and inaccessibility, or they are the products of private companies and therefore proprietary, which also makes them, to a certain extent, inaccessible to us as researchers. How can we deal with this problematic constellation if we seek to study the relations that unfold from and through technologies?

CA: First of all, and I think this is very important, we should also study secrecy itself. Secrecy is also a particular mode of relation, where something is not unknown, or unknowable, but is kept from certain people. If no one knows it, then there is no secrecy. It is a really interesting epistemic concept, because it partitions and distributes knowledge, and creates particular boundaries. And that raises the question of where the researcher sits in relation to these boundaries, and how you can do research on particular technologies that are secret and to which you don’t have access. There are two elements I want to address. One is that secrecy is not just about security or international relations. I think secrecy is perhaps intensified in relation to security technologies, but security has become a very mundane task. As you said, it is tied to the proprietary technologies of private companies, which maintain a lot of secrecy around the development of their products, partly due to competition. That is the metaphor of the “black box” that Bruno Latour and Steve Woolgar (1979) used to develop the
methodology of “opening the black box,” and which is now widely used to render the challenges of digital technologies and algorithms – and you have also used it in your work on profiling (Leese, 2014). Frank Pasquale has even coined the term “black box society” (Pasquale, 2015). Secrecy is produced in many different forms. Academics produce secrecy in the research process as confidentiality, anonymization, and so on. There are many modes of secrecy, but in the end it is often quite banal. So we need to work with this banality of secrecy. And one thing is: could we render it more banal in relation to security, rather than thinking that there is always something exceptional in relation to security?

Secondly, how do we do the research then? If secrecy is banal, this means that the field is quite dispersed, and there are a lot of boundaries, and you can work around those boundaries. And you have different ways of working around these lines: for example, anthropologists like Hugh Gusterson have been working around nuclear weapons, quite literally. Gusterson (1997: 116) develops the methodology of “polymorphic engagement,” thereby multiplying the sites of inquiry and “collecting data eclectically from a disparate array of sources in many different ways.” Tobias Blanke and I have argued that many technologies are not as secret as we think they are. To give you an example: there is a lot of secrecy around the technologies that intelligence agencies use. We do not know exactly what the NSA [National Security Agency] is doing with data, and what kinds of technologies they have available. However, it is very unlikely that the NSA will have technologies that are more developed than the state-of-the-art in computer science. This is what whistleblowers and leaks have also shown. We also know that there are only a limited number of classes of algorithms, so we can build on this. Finally, if you look at the modes of research funding, for example within DARPA [Defense Advanced Research Projects Agency], a lot of the academics involved in this research for security purposes then go on and publish articles about it. And let’s not forget that there are the public controversies around leaks and what has come to be called the “half-life of secrets.” So I think there are different ways in which one can do research, and different ways to think about secrecy. What is key here is to treat it as less exceptional. But also to think about the discursivity around technologies. When the Snowden revelations came out, of course, there was a huge debate about the secrecy, but some intelligence experts also said that they were quite happy that they could finally publicly talk about what they were doing.

MH: I think this is also something that some of the chapters work with and highlight: that we should not be looking for secrets in one single space, or chamber of secrets, but that there are ways to work around it and to find data in different places, and to connect these dots in order to be able to tell a convincing story. For example, Georgios Glouftsios suggests that we study the meetings where “bureaucrats, policymakers, security
professionals, legal and technology experts meet to discuss technoscientific, policy, and security-related needs and concerns.”

CA: Yes, this is also Gusterson’s point about polymorphic engagements. But there is a lure of secrecy, which is exactly that of making visible, of discovering that which is hidden. The recent literature on “post-critique” has raised objections to this analysis of “surface and depth.” Contra this surface/depth reading, Toril Moi’s chapter in the Anker and Felski edited anthology on Critique and Post-Critique proposes to “develop critical readings without invoking terms like hermeneutics of suspicion, or symptomatic reading” (Moi, 2017: 32; emph. in orig.). She rejects the surface/depth opposition, which leads to an epistemology of revealing and making visible. Your suggestion, Marijn, is about the heterogeneous surfaces and interstices where secrecy is enacted, but also contested. There is another element in relation to secrecy, which is based on the assumption that making something visible is equivalent to knowing. Yet, technology is also opaque and often difficult to understand for experts themselves. So the idea of seeing the technology, or having access to the technology, will not necessarily dispel secrecy.

MH: If we think of the chapters of the book as an invitation to have a conversation about technology and agency in IR, what would be your take on how to productively push this conversation further? In other words, where do you think we, disciplinarily speaking, should go next? Where is there still some uncharted territory left for the study of technology?

CA: We need to think about several things. One element concerns the notion of laboratories, and the ways in which technologies are produced in laboratories. So how can we think about the production of technologies in relation to the use of technologies? And I think there is something really interesting in the notion of the laboratory. It has been used almost as a metaphor in some of the critical work on security, for example as some work engages the “laboratories” in the Global South where technologies are produced, used, and tested, and then come back to the Global North, and so on. So, on the one hand, it is interesting to engage with laboratories and the production of technology and to revisit what counts as a laboratory today. On the other, it is important to do transversal analyses, to move outside of the laboratory, as Kalindi Vora shows us. Security is not “laboratory studies,” however important laboratories are for the production of security technologies. Laboratory work always moves out in terms of experiments, but also in terms of inscriptions, in terms of publications. This speaks to a transversal analysis of the modes of circulation and connection. Secondly, we need to think about how important technologies actually are in the partition of the sensible today: what we can see and know, and what we cannot see or know, without technology. This does not mean that technology becomes immediately knowable, or that there are no controversies about what counts as knowledge, what counts as evidence. These three sets of elements raise important questions today. And we can see
them, for example, in relation to climate change – and not just simply climate change, but how do we know, or perhaps not know, that and how certain events are taking place? How is uncertainty produced, and through what kinds of technologies? There is a lot more that we can explore with regard to these relations. And finally, the concept of agency is really important, because it connects us to critical work on agency. We talked about feminist work, about post-colonial work, and I think agency is a really important bridge to work in-between these approaches, for example between feminist work in IR and feminist work in STS. Agency allows these bodies of work to make explicit their stakes and political investments in the reconfigurations of social and power relations.

Claudia Aradau is Professor of International Politics in the Department of War Studies, King’s College London. Her research has developed a critical political analysis of security practices and their transformations. Among her publications are “Politics of Catastrophe: Genealogies of the Unknown” (with Rens van Munster, 2011) and “Critical Security Methods: New Frameworks for Analysis” (co-edited with Jef Huysmans, Andrew Neal, and Nadine Voelkner, 2014). Her recent work examines security assemblages in the digital age, with a particular focus on the production of (non)knowledge. She is currently writing a book with Tobias Blanke on algorithmic reason and the new government of self and other. She is on the editorial collective of the journal Radical Philosophy. She is also chair of the Science, Technology and Art in International Relations (STAIR) section of the International Studies Association (2018–2019).

This interview was conducted via conference call, transcribed, and then edited and amended over several days.
References

Allen J (2011) Powerful Assemblages? Area 43(2): 154–157.
Amicelle A, Aradau C and Jeandesboz J (2015) Questioning Security Devices: Performativity, Resistance, Politics. Security Dialogue 46(4): 293–306.
Anker E S and Felski R (eds.) (2017) Critique and Post-Critique. Durham: Duke University Press.
Aradau C (2010) Security that Matters: Critical Infrastructure and Objects of Protection. Security Dialogue 41(5): 491–514.
Aradau C (2016) Hotlines and International Crisis. In Salter M B (ed.) Making Things International 2: Catalysts and Reactions. Minneapolis: University of Minnesota Press, 216–227.
Aradau C (2018) From Securitization Theory to Critical Approaches to (In)Security. European Journal of International Security 3(3): 300–305.
Aradau C and van Munster R (2011) Politics of Catastrophe: Genealogies of the Unknown. London: Routledge.
Aradau C, Huysmans J, Neal A and Voelkner N (2014) Critical Security Methods: New Frameworks for Analysis. London/New York: Routledge.
Aradau C and Blanke T (2015) The (Big) Data-Security Assemblage: Knowledge and Critique. Big Data & Society 2(2): 1–12.
Bourne M, Johnson H and Lisle D (2015) Laboratizing the Border: The Production, Translation and Anticipation of Security Technologies. Security Dialogue 46(4): 307–325.
Bueger C and Mireanu M (2015) Proximity. In Aradau C, Huysmans J, Neal A W & Voelkner N (eds.) Critical Security Methods: New Frameworks for Analysis. London/New York: Routledge, 118–141.
Coleman L M and Hughes H (2015) Distance. In Aradau C, Huysmans J, Neal A W & Voelkner N (eds.) Critical Security Methods: New Frameworks for Analysis. London/New York: Routledge, 142–158.
de Goede M (2018) The Chain of Security. Review of International Studies 44(1): 24–42.
Gusterson H (1997) Studying Up Revisited. PoLAR: Political and Legal Anthropology Review 20(1): 114–119.
Hönke J and Cuesta-Fernández I (2018) Mobilising Security and Logistics through an African Port: A Controversies Approach to Infrastructure. Mobilities 13(2): 246–260.
Jasanoff S (2004) The Idiom of Co-Production. In Jasanoff S (ed.) States of Knowledge: The Co-Production of Science and Social Order. London/New York: Routledge, 1–12.
Jasanoff S (2012) Science and Public Reason. London/New York: Routledge.
Latour B and Woolgar S (1979) Laboratory Life: The Social Construction of Scientific Facts. Beverly Hills: Sage.
Law J and Lin W-Y (2017) Provincializing STS: Postcoloniality, Symmetry, and Method. East Asian Science, Technology and Society: An International Journal 11(2): 211–227.
Leese M (2014) The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-Discriminatory Safeguards in the European Union. Security Dialogue 45(5): 494–511.
Moi T (2017) “Nothing Is Hidden”: From Confusion to Clarity; Or, Wittgenstein on Critique. In Anker E S & Felski R (eds.) Critique and Post-Critique. Durham: Duke University Press, 31–50.
Pasquale F (2015) The Black Box Society. Cambridge: Harvard University Press.
Pollock A and Subramaniam B (2016) Resisting Power, Retooling Justice: Promises of Feminist Postcolonial Technosciences. Science, Technology, & Human Values 41(6): 951–966.
Proctor R N and Schiebinger L (eds.) (2008) Agnotology: The Making and Unmaking of Ignorance. Stanford: Stanford University Press.
Rancière J (1999) Disagreement: Politics and Philosophy. Minneapolis: University of Minnesota Press.
Suchman L (2007) Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition. Cambridge: Cambridge University Press.
Suchman L (2012) Configuration. In Lury C & Wakeford N (eds.) Inventive Methods: The Happening of the Social. London/New York: Routledge, 48–60.
Vora K (2015) Life Support: Biocapital and the New History of Outsourced Labor. Minneapolis: University of Minnesota Press.
Index
2008 global financial crisis 17, 123, 127–30
9/11 164, 169
Accenture 176
accountability (see also responsibility) 1, 13, 45, 49, 55, 78, 125, 154, 193
actant 16, 33–4, 37–8, 44, 144, 193
Actor-Network Theory (ANT) 3, 16–7, 33–8, 69, 144, 189–90, 196
aesthetic 88, 92, 104, 108
Afghanistan 97, 99
agencements (see also distributed agency) 33
agent-structure problem 3, 10
Air Combat Patrol 101, 103
algorithm 1–4, 12, 14–6, 18–9, 36, 42–3, 48, 58–9, 97, 102–4, 141–2, 145–60, 164, 168, 174–5, 179, 182–3, 197, 200, 202
al-Qaeda 96
Amazon Mechanical Turk 198
Amnesty International 73, 76–7, 81
anthropocentrism (see also Cartesian dualism) 10, 12, 45, 47, 49, 59, 69
architecture 51, 119, 175–6
Article 36 48
Artificial Intelligence (AI) 1, 33, 36–7, 48–9, 158, 198
assemblage 13–4, 19, 45–6, 67–72, 75–8, 80–1, 91–2, 144–7, 151, 157–9, 164–6, 168–9, 175, 177, 182–3, 188–9, 191–3, 197–8, 202
asylum (see also Dublin II Regulation) 166, 170–3, 175, 177–8, 182
asymmetry 18, 47–8, 59, 68, 92, 125, 196, 199
authority 1, 6, 17, 18, 27, 31, 51, 75–80, 114–30, 165, 167–82, 199
Automatic Target Recognition (ATR) 54
automation 2, 18, 36, 43, 47–8, 50–9, 61, 109n3, 121, 125, 129, 142, 150, 152, 154, 158–9, 173, 198; bias 43, 52–3, 58
Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System (ARGUS-IS) 104
Autonomous Weapons Systems (AWS) 1–2, 43, 61n1
autonomy 1–2, 18, 43, 46–7, 49, 52, 54–60, 61n1, 66, 80, 125, 158; defined as free will 1–2, 10, 14n, 45
Baroque 93
battlefield 6, 18, 52, 58
Bill and Melinda Gates Foundation 126
biometrics (see also identification) ix, 24–5, 28, 30–1, 172–6, 179
Bitcoin (see also digital, currency) 113, 116, 118, 123, 126–7, 131n10
black box 51, 70–1, 77, 199–200
blockchain (see also cryptocurrency) 4, 15, 17, 113–4, 116–9, 121, 123–30, 130n2, 131n8, 197–9
body 12, 24, 30–2, 69, 92, 96–7, 105–6, 109n8, 147, 159, 164, 189
body scanner 164
border 4, 15–6, 115, 119, 127–8, 130, 164–78, 182–4
boundary 4, 11, 18, 45–50, 59–61, 191
Bretton Woods 115
British Economic Secretary 127
Brussels 167
Bull 176
bureaucratization 16, 147, 149, 166–8, 182, 200
Cartesian dualism (see also anthropocentrism) 69; perspectivalism 93; split 10
CCTV (see also surveillance) 1–2
chaos (see also complexity) 12
China 78–9, 123
CIA 96–8
circulation 18, 90, 113, 165–6, 170–1, 177, 179, 182–3, 189, 190–1, 194, 201; of documents 165, 171, 183; of technology 8, 189–91; of weapons 90
civil liberties 157
classification 54, 55, 56, 57, 106, 145, 196, 198
Club of Rome 193
cognition 2, 47, 50, 53, 97, 143
Cold War 5–6, 67, 77–8, 128, 192
collaboration 146, 152, 154–8; between humans and non-humans (see also human-machine relations; socio-technical systems) 154, 156, 158
Combined Air Operations Centre 101
Committee for Human Rights in North Korea (HRNK) 73, 77, 82n3
complexity 2–3, 7, 10, 12, 14–5, 32, 35, 44, 51, 58–61, 68–71, 76–8, 92, 94, 98, 100, 116, 124, 144, 151, 153–5, 159, 176, 191
configuration 3, 18, 42–56, 58–61, 71, 176, 184, 189, 202
constructivism 4, 7–8, 15, 61, 120, 144
contingency 45, 50, 67, 69–71, 80–1, 99, 102–3, 108, 122, 193–5
control (see also meaningful human control) 2, 6, 9, 18, 42–61, 78, 89, 91, 105–6, 115, 118–9, 125, 141–2, 149, 152, 155, 164–78, 182–3; technologies 142
controversy 15, 28, 33, 36–8, 68, 127, 165, 177–8, 180, 182, 190–1, 193–201
co-production 3, 17, 24–38, 44, 58–9, 61, 71, 103, 113–4, 119–23, 128, 130, 142–3, 189, 191
CorpWatch 89
correlation 54, 55, 56, 68, 151
Council of the European Union 169–70, 180, 182
counter-terrorism 1–2, 30, 169
credit default swaps 115
crime 4, 7, 73, 141–2, 145–60, 171, 175, 178, 180
critical security studies 66, 79, 146, 158, 190, 193, 195–6
critique 18, 27, 28, 31–2, 36, 38, 90, 93, 129, 188–9, 193–7, 201
cryptocurrency 113, 118
data 1, 2, 4, 15, 30, 42–3, 48, 54, 58, 72–3, 76, 96–8, 103–4, 125, 127, 142, 145–51, 158–9, 160n1, 161n7, 164–5, 167, 169–70, 172–83, 197, 198, 200; Big Data 4, 103–4, 145, 197–8; database 1, 13, 30, 66, 147–51, 154, 164, 172, 173, 175, 180, 181; dataset 1, 13, 30, 66, 148–52, 154–5, 158, 164, 172–3, 175, 180–1
Davos 113, 123
Defense Advanced Research Projects Agency (DARPA) 200
design (see also engineering) 4, 8, 10, 15–6, 44–54, 58–61, 71, 103–5, 113, 119, 122, 141, 164–83, 196
determinism (see also essentialism) 2, 4, 8–10, 15, 29, 80, 120, 130, 160, 193–5; techno-determinism 130
developer (see also programmer) 10, 15, 121, 141, 146–60
device 48, 54, 70–1, 119, 144, 156, 164, 175, 177, 179, 183, 189, 191, 197
digital 1, 13, 15, 19n1, 26, 30, 31, 113, 122, 127, 129, 141–4, 146–50, 156, 158, 165, 174, 196, 197, 200; bodies 26, 30–1; border 174; currency (see also Bitcoin) 113; data 149–50; divides 129; technology 1, 141, 143–4, 156, 158, 196–7, 200
Directorate-General for Migration and Home Affairs (DG Home) (EU) 166
discourse 15, 17, 24–32, 38, 45, 47, 50, 60–1, 66, 73, 78, 92, 100, 116, 120–1, 123–4, 126, 144, 157, 169, 192, 194, 198
dispersion 16, 117, 124, 165–7, 175, 177, 182–3, 200
distributed agency (see also agencements) 33–8, 69, 189
document 67, 72, 79–80, 97, 108n2, 114, 122, 144, 166–83
dollar 113, 126
drone (see also Predator; Reaper; Unmanned Aerial Vehicle (UAV)) 2, 4, 10, 18, 25, 27, 33, 36–7, 42, 45, 66, 88–9, 92–109, 197–8; warfare 4, 18, 45, 89, 92, 99, 105, 107–8
Dublin II Regulation (see also asylum) 170
emergence 3–4, 8, 10–2, 14–9, 28, 30–8, 44–5, 58, 70, 89, 105, 113–29, 142–3, 145, 147, 151–2, 158–60, 165–8, 170–2, 175, 177–8, 182, 190–1, 198
enactment 34, 45, 69–71, 120, 131n8, 165, 168–9, 177, 182, 190–4, 201
engagement 5, 42–3, 52, 54–5, 56, 57, 193, 200, 201
engineering (see also design) 1, 8, 10, 15, 42, 44–5, 47, 49–51, 59–60, 168–9, 172, 177, 180, 183–4
enrolment 67, 69–70, 72, 76–7, 79–80, 179
environmental security 25
epistemology 92, 108, 160, 201
Ernst & Young 126
essentialism (see also determinism) 8–9, 15, 68
ethics 4, 9, 13–4, 27, 45, 49, 55, 60, 108, 120, 160n1, 195–6
ethnography 11, 16, 69, 90, 167, 192
European Agency for the Operational Management of large-scale IT Systems in the Area of Freedom, Security, and Justice (eu-LISA) 166
European Commission 166, 173, 177–80
European Data Protection Supervisor (EDPS) 181
European Network and Information Security Agency (ENISA) 143
European Parliament 125, 180
European Parliamentary Research Service 125
European Policy Evaluation Consortium 171
European Union (EU) 16, 28, 118, 125, 164–73, 177–83
Europol 178, 180–1
evidence (see also scientific fact) 17–8, 49, 71, 73, 75, 77–80, 93, 96, 98–9, 101, 105, 142, 148, 150, 198–9, 201
expert (see also authority) 16, 72, 76, 88–90, 102, 107, 116, 124, 141, 146, 152–3, 166–70, 177, 183, 199–201
expertise 10, 26, 37, 89–90, 107, 109n3, 152, 192
externalization 3–5, 8–9, 68, 78–80
facial image 174, 176, 179
failure 9, 12–3, 18, 90, 104–5, 113, 115, 117, 129, 176, 178, 194
fallibility 18, 95, 101, 107
feasibility study 16, 166–7, 172–7, 183
feminism 18, 81n1, 97, 189, 191, 195–7, 202
finance 17, 114–8, 121, 123, 125–7, 129–30, 197
Financial Conduct Authority (FCA) 118
fingerprint 173–9, 182
finnovation (see also innovation) 9, 51, 66, 89–90, 103–4, 108, 115–7, 123, 142, 183
Ford, Henry 126
Gestalt 143
Global North 201
Global South 201
Gorgon Stare 103–4
governmentality 189
heat signature (see also thermal image) 42, 95, 97, 100–1, 105–6, 109n6
hegemony 94, 106
heterogeneity 2, 15–6, 19, 44–5, 69, 77, 166–9, 172, 177, 180, 182–4, 191–3, 201
Hewlett Packard 168
Hobbes, Thomas 117
homo economicus 117
House of Representatives 77
human rights 13, 17, 42, 49, 66, 73–4, 76–82, 157, 199
human security 22, 66, 77–8, 81
Human-Computer Interaction (HCI) 47, 50
humanitarian 25, 28, 43, 72, 81
human-machine relations (see also socio-technical systems) 18, 46, 53, 58–9
identification (see also biometrics) 1, 9, 11, 30, 38, 42, 46, 54–7, 67, 72–6, 89–90, 92, 94–7, 100, 102–3, 107–8, 114–6, 122–3, 141, 147, 152, 154–5, 158, 161n6, 166, 168, 170, 173–7
identity 6, 33, 54, 102, 170–5, 179
imaginary 10, 13, 46–50, 58, 60–1, 107; of agency 13; of control 48, 58; of technology 10, 46, 49, 107; of the machine-as-human 47
implementation 2–3, 6, 8–9, 15, 34, 36, 53, 78, 142, 145, 147, 155–9, 164, 166–70, 174, 176, 179
inequality 17, 25, 27–8, 31–2, 34, 196
Information and Communication Technologies (ICTs) 6, 116, 164–5, 182
infrastructure 4, 13, 15, 24, 32, 34, 44, 74, 77, 127, 164, 166–8, 171, 173, 175–7, 183, 188, 192, 197
injustice 151, 195–6
innovation (see also finnovation) 9, 51, 66, 89–90, 103–4, 108, 115–7, 123, 142, 183
instrumentalism 8–9
intelligence 1, 4, 33, 36, 42, 48, 60, 76–7, 101, 103, 147, 158, 172, 175, 180–2, 198, 200
International Civil Aviation Organization (ICAO) 179
International Committee of the Red Cross 48
International Humanitarian Law (IHL) 43, 49, 58
International Mobile Subscriber Identity (IMSI) 102
International Monetary Fund (IMF) 123–4, 126
International Political Economy (IPE) 116, 119–20, 128
Internet of Things (IoT) 121
interstice 194, 196, 201
interview 15–6, 18, 69, 72, 75, 79, 106, 109n6, 141, 145, 148–9, 151–8, 160n1, 160n3, 166, 172, 188, 202
intra-action 3, 44, 159, 191
Islamophobia 98
Japan 130
Justice and Home Affairs (JHA) 169
knowledge 8, 13, 16–8, 27, 32, 48, 66–8, 73, 76–80, 90, 109n3, 116, 121, 124, 148, 152–5, 157, 159–60, 165–6, 189, 194–5, 197, 199
Korean peninsula 79
LA Times 98, 99
labor 6, 14, 43, 50, 55, 60, 73, 127, 165, 183, 197–8
laboratory 11, 167, 190–1, 197, 201
law 11, 16, 43, 45, 49, 72–4, 76, 101, 114, 117–8, 120, 142, 149, 157–8, 165, 167, 171, 177, 181–2, 184
legal system 13, 49
Lethal Autonomous Weapon System (LAWS) (see also Autonomous Weapons Systems (AWS)) 42–3, 45, 59, 61n1
lethal force 42–3, 48–9, 54–5, 58–9, 91
Levels of Automation (LOAs) 51–6
liberalism 2, 4, 6–7, 10, 12–3, 16–7, 44, 60, 113–8, 123–4, 126–30, 189, 193, 197
linguistic turn 32–3
London 115, 123
loop task 54, 56, 58
machine learning 1, 49, 60, 152
macro 28, 31–2, 69, 104, 126, 153–4, 188, 190
Manhattan 115
material turn (see also New Materialism) 32–3, 146
materiality 4, 12–3, 17, 24–5, 30–2, 37, 44, 68, 80, 188
meaningful human control 18, 48–9, 52, 54, 58–9
Member State (EU) 165, 169–72, 175–6, 178–82
methodology 4, 15, 26–7, 32, 35–8, 143–5, 160, 182, 188–9, 191, 194, 199–200
micro 7, 27–8, 31–2, 67, 69, 71, 77, 104, 188, 190–1
migration 16, 78, 126, 164–70, 173, 175, 177–9, 182, 184
military (see also Royal Air Force (RAF) (UK); US Air Force; US Department of Defense) 1–2, 4–7, 9, 18, 43–4, 47–50, 52–5, 58–61, 66, 71, 76–9, 88–91, 94–5, 98, 101, 103, 107–9
mobility 6, 9, 77, 164–7, 169, 175, 177, 183
money laundering 118–9, 124
moral 1, 13, 43, 45–7, 51, 60, 68, 74, 106, 122
Morpho 176
Moscow 192
multitude 149, 191, 193; of actants 193
Muslim 97, 99
Myanmar 81
Nakamoto, Satoshi 113, 116
NASDAQ 124
network 6–7, 14–6, 26, 33–8, 43, 47, 67–71, 77, 80, 96, 117–8, 122, 126, 143–4, 147, 154, 158–9, 171, 175, 190, 192–3
New Materialism (see also material turn) 4, 11–2, 18, 24, 44, 67–8, 80–1, 188
New York City 123
New York Times 101, 130n1
Newton 11
Non-Governmental Organization (NGO) 7, 48, 49, 72–3, 76–7, 79, 81, 126
non-human 1–2, 4, 10–4, 16, 19, 34, 37, 44–7, 50, 59, 61, 67–70, 75, 120–2, 126–8, 129, 142–3, 147, 151, 154, 158, 161n6, 165, 168–9, 175, 177, 182–3, 188, 193
norm 7–8, 17, 28–9, 98–100, 105, 108, 114–5, 119, 121, 129
normative 14, 27, 50, 90, 92, 114, 119–21, 125–9, 160, 195–6
North Korea 17, 66–7, 72–82, 198
NSA 200
nuclear weapons 66, 74, 200
object 12–3, 30, 32–6, 38, 42, 44, 54–8, 67–8, 70–1, 75–6, 95–6, 108, 120–1, 128, 143–6, 160, 164, 168, 189, 191–3, 196
OECD 116, 118, 120
ontology 10, 12, 14, 19, 47
OODA 54, 61n3
operator 2, 4, 13, 42–4, 46–8, 50–5, 58–61, 91, 96–8, 101, 105, 124
orientalism 109n7
oversight 125, 129, 147
parsimony 15, 69, 191–2
patrol 101, 103, 153, 155–6
performativity 3, 46, 67, 70, 165, 189, 191
philosophy of science 157
police 15, 106, 141–2, 145–60, 165, 167, 172, 175, 177–8, 180–2
Police Cooperation Working Party 180
postcolonial 100
post-humanism 3, 165
post-structuralism 36, 189, 193
power 1, 4–10, 12, 17–8, 24–5, 27, 30–1, 45, 66–8, 74, 80, 89, 91–3, 95–8, 100, 103, 106–8, 114, 116–30, 144, 146, 152, 157, 167–9, 173–4, 180, 190, 192, 194–6, 199, 202
practice 2, 4, 8–9, 11, 13–4, 16–8, 25, 27–8, 32–3, 35–7, 43–5, 47–50, 53, 58–61, 66–81, 88, 91, 93, 100–1, 108n2, 128, 142, 144, 146–51, 157, 164–9, 174, 177, 182–4, 188–95, 198, 202
practice turn 32
pragmatism 35
Predator (see also drone; Reaper; Unmanned Aerial Vehicle (UAV)) 95
predictive policing 15, 141–2, 145–7, 151, 156–8
prison 73–7, 148; camp 73–7
problematization 11, 14–5, 67, 70–2, 74, 78, 80, 149, 175, 180, 182, 194–9
programmer (see also developer) 15, 75, 104, 141, 146, 150–4, 158–60
progress 9, 113, 166, 177, 182
projection 79, 165–8, 175, 182–3
PwC 126
Pyongyang 75
racism 98, 100
Rakhine State 81
real time 78, 81, 96, 104, 124, 156
realism 4–6, 9, 12, 92, 108, 193
Reaper (see also drone; Predator; Unmanned Aerial Vehicle (UAV)) 96–7, 103
refugee 28, 30–1, 76, 78, 124
relational 3–4, 11–2, 16, 19, 26, 33, 35–6, 38, 44, 46, 183, 188, 194, 197
responsibility (see also accountability) 1, 13–4, 30, 45, 48–9, 55–7, 69, 78–81, 97–8, 101, 130, 165–6, 171–2, 174, 178, 180–1
Revolution in Military Affairs (RMA) 6
robot 1–3, 12, 14, 18–9, 36, 37, 42, 48–9, 59–60, 198
Royal Air Force (RAF) (UK) (see also military) 105
Russia 123
satellite imagery 4, 17, 66–7, 72–82, 189, 198
Schengen (see also Visa Information System (VIS)) 165–72, 174–80, 182–3
Science and Technology Studies (STS) 11, 18–9, 24–8, 31–2, 35, 37–8, 44–5, 67–8, 80–1, 94, 114–5, 118, 129–34, 188–94, 197, 199, 202
scientific fact (see also evidence) 11, 27–8, 31–2
scopic regime (see also vision; visuality) 91–2, 94, 100, 108
Second World War 115
secrecy 18, 72–5, 77, 80, 199–201
Secretary General of the International Organization of Securities Commissions 127
self-consciousness 1
sensing 1, 19n1, 42–3, 48, 56, 59–60, 88, 91, 94–7, 99, 101–3, 105–6, 108
Snowden, Edward 198, 200
Social Construction of Technology (SCOT) 3, 15, 114, 120–3, 127–9
social media 158
social order 16, 17, 25–9, 31–5, 38, 50, 127, 191
socio-technical system (see also human-machine relations) 2–3, 10–2, 14, 18, 44–52, 55, 58–60
software 1, 2, 15, 19n1, 76, 89, 141–2, 145–60, 165, 169, 174, 176, 179
Solomon Islands 30
Sopra Steria 168
South Korea 79, 130
sovereignty 74, 130n2
St Johann im Pongau 176
stabilization 11, 35, 67, 70–2, 75–80, 107, 120, 177, 182, 191
Stalin 71
Stalingrad 71
Stellar Development Foundation 126
Strasbourg 176
Strategic Committee on Immigration, Frontiers and Asylum 170
Strategic Studies 5, 9
subjectivity 13, 92–3
surveillance (see also CCTV) 1–2, 17, 54, 66–7, 72–4, 76, 78–81, 88, 92–106, 108, 156, 164
symmetry (see also asymmetry) 14, 18–9, 44–6, 61, 68–9, 165, 188, 194–5, 199
Taiwan 194–5
Taliban 98
targeted killing 10, 91–2, 108
techno-fetishism 90
techno-science 190
thermal image (see also heat signature) 96–7
Third Industrial Revolution 115
third-country national (see also visa) 169–71, 174–6, 178–80
threat assessment 54–5, 56
translation 26–7, 37, 47, 69, 71, 81n2, 122, 145–7, 149–52, 166–7, 173–9, 182–3, 190
transversal 143–4, 188, 192, 194–5, 201
Twitter 150
UK Treasury 126
United Nations Convention on Certain Conventional Weapons (CCW) 42
United Nations Development Program (UNDP) 28
United Nations High Commissioner for Refugees (UNHCR) 28
United States (US) 6, 12, 42, 48, 77–8, 88–90, 94–100, 102–4, 106–9, 115, 127, 129, 130n1
Unmanned Aerial Vehicle (UAV) (see also drone; Predator; Reaper) 42–3, 53–6, 58–9, 101
US Air Force (see also military) 88–9, 96–100, 103–4, 106–9
US Department of Defense (see also military) 42, 48
US Senate 77
variable 3–5, 8–9, 93, 119, 146, 150–1
verification 74, 116–7, 124, 131n8, 171, 173–6, 181
Video National Imagery Interpretability Rating Scale (V-NIIRS) 95
violence 17, 42, 88–91, 98, 100, 106, 108, 190, 192–3, 196–9
visa (see also third-country national) 164–5, 169–82
Visa Information System (VIS) (see also Schengen) 16, 164–5, 189
Visa Working Party 170
vision (see also visuality) 49, 75, 88–9, 91–102, 105, 107–8; weaponization of 89, 91, 94, 99, 108
visuality (see also vision) 18, 88–9, 91–4, 98–102, 108
vocabulary 34–5, 69, 123, 141, 189, 191–2
voter 28, 30, 161n5
wall 164
warfare 4, 18, 25, 27, 42–6, 49–50, 53, 59–61, 66, 89, 91–2, 99, 102, 105, 107–8, 192, 198
Washington 192
weapon 1, 4, 18, 36, 42–4, 47–54, 58–61, 66, 71, 74–5, 88–96, 99, 101–2, 105, 107–9, 192, 200
Western Union 126
White House National Economic Council 118
World Economic Forum 123
Zhukov 71
Zimbabwe 34