Myths and Realities of Cyber Warfare: Conflict in the Digital Realm 1440870802, 9781440870804

This illuminating book examines and refines the commonplace "wisdom" about cyber conflict―its effects, character…





Myths and Realities of Cyber Warfare
Conflict in the Digital Realm

NICHOLAS MICHAEL SAMBALUK

Foreword by Eugene H. Spafford

Praeger Security International

Copyright © 2020 by Nicholas Michael Sambaluk

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except for the inclusion of brief quotations in a review, without prior permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Names: Sambaluk, Nicholas Michael, author.
Title: Myths and realities of cyber warfare : conflict in the digital realm / Nicholas Michael Sambaluk ; foreword by Eugene H. Spafford.
Description: Santa Barbara : ABC-CLIO, 2020. | Series: Praeger security international | Includes bibliographical references and index.
Identifiers: LCCN 2019040659 (print) | LCCN 2019040660 (ebook) | ISBN 9781440870804 (print) | ISBN 9781440870811 (ebook)
Subjects: LCSH: Cyberterrorism. | Computer security. | Social media.
Classification: LCC HV6773.15.C97 S35 2020 (print) | LCC HV6773.15.C97 (ebook) | DDC 355.4/1—dc23
LC record available at https://lccn.loc.gov/2019040659
LC ebook record available at https://lccn.loc.gov/2019040660

ISBN: 978-1-4408-7080-4 (print) | 978-1-4408-7081-1 (ebook)

24 23 22 21 20   1 2 3 4 5

This book is also available as an eBook.

Praeger
An Imprint of ABC-CLIO, LLC
ABC-CLIO, LLC
147 Castilian Drive
Santa Barbara, California 93117
www.abc-clio.com

This book is printed on acid-free paper.
Manufactured in the United States of America

Contents

Foreword by Eugene H. Spafford
Preface
1. War’s Character and Nature, and the Stuxnet Trap
2. Temporal Mythologies of Cyberwar
3. Mythologies of Cyberwar Effects
4. The Attribution Paradox and Organizations’ Impact on Cyberwar
5. Form and Function of Social Media
6. Unpacking the Mythologies of Social Media
7. Data as a Battlespace
8. A Postmodern or a Premodern Future?
Notes
Bibliography
Index

Foreword

As I write this in July 2019, the news has been full of stories relating to computer issues caused by organized (often, nation-state) groups. This has included accounts of Russian influence on social media to disrupt elections in the European Union and the United States; accounts of ransomware deployed against city governments and health care; extensive theft of intellectual property by Chinese espionage actors; attacks against Iranian command-and-control systems by the United States in retaliation for the downing of a drone; warnings of surreptitious Iranian and Russian penetration of electrical system infrastructure; and warnings of potential financial system attacks by North Korean hackers—all within a two-week span!

As context for my remarks, consider that I have been working in what is now called cybersecurity, and national security, for nearly forty years. I learned to program using punch cards and plugboards. I helped run some of the earliest social media (Usenet). I founded one of the largest academic cybersecurity centers in the world (CERIAS at Purdue University). In all that time, I have seen the tempo of computer incidents steadily increase, and the damages continue to grow. Yes, defenses have also been developed, but they are unevenly applied. Moreover, as the population online has grown, so too has the value of finding ways to circumvent those controls to steal items of value, manipulate perception, and damage functionality. The threats now are the most severe I have seen and continue to grow . . . as do the losses.

How we describe these losses and actors makes a difference in how we respond to them. Are the perpetrators merely criminals or something more? When hostile acts are carried out by nation-state actors, are they only belligerent posturing, are they “legitimate” espionage, are they terrorism, or are they acts of war? If they are acts of war, why do we not respond with all our forces? Why do we seem to have ineffectual responses to so many of these incidents? These are semantic issues with underlying policy questions that many people have tried to address in the last few years. In particular, the term “cyberwar” is used more frequently in some circles, but not everyone is comfortable with that term: if it is genuinely war, why is there no full-scale declaration and response (other than the horror and devastation that war would bring)? If it is not war, then what is it?

Professor Sambaluk takes on these questions, and more, in this book. He brings the perspective of a meticulous military historian with technical expertise to a problem set too often addressed only by those with narrower domain expertise. He exposes the many questions posed by our increasing reliance on cyber technologies, along with the answers put forward by those working in the realm, backed by extensive references.

In Sambaluk’s book, you will find extensive coverage of the questions of how cyberwar might be defined, of how influence operations in social media are used as attacks, the difficulties associated with attack attribution, the role of data as a “battlespace,” and several other difficult topics. His coverage both illustrates misconceptions about these items and exposes some of the thorny disagreements about nomenclature and importance that occur around these topics.

As already noted, I have been working in the field for forty years, and reading this book both reminded me of several critical incidents that had occurred over that span and also exposed me to some significant items I had not seen before—which I attribute to his careful research and exposition. What is more, I appreciate that the book does not promote a particular interpretation of the prior work and instead exposes many of the essential facets of the issues. As a reader, I am presented with a wealth of information that may be considered, along with references to other useful sources, to enable formulation of my own considered set of conclusions.

Students and colleagues often ask me about “cyberwar” and what that term means. I am well aware of the complexities and differences of opinion on the topic, and thus I usually respond with the unsatisfactory (but concise) response “It’s complicated.” Now I’ll be able to respond, “It’s complicated, but read Nick Sambaluk’s book to understand why.” That is still concise but gives me the satisfaction of providing a useful answer. This book is going on my shelf of primary references; I suspect other scholars of the field will similarly value it, as will anyone trying to better understand the complexities of the topic.

Eugene H. Spafford
July 2019

Preface

I came to the future of conflict—by way of the past. Since my earliest days, my parents had encouraged my interest in understanding how things work, because exploring the answers brought an understanding of context and of significance about different people and events, tools and ideas. Furthermore, exploring how and why things work—and the context into which they fit—is, honestly, fascinating. Quickly, I discovered that a youthful aptitude for math and science in no way unseated a fundamental appreciation for history. History drew me because, while learning about the past helped explain how and why things worked the way they did (or do), an earnest examination of history also grapples with the impact of contingency. People make decisions and take actions, and this impacts the world that they (we) inhabit. Conflict exerts profound and multifaceted impact on people’s lives at macro and micro levels, and technology is constantly interwoven into life and therefore into history. I learned quite early that history, especially regarding military affairs and the role of technology, captivated my interest; I wanted to make the learning of these fields and the sharing of what I discovered become my profession.

After completing my doctorate, I taught at the U.S. Military Academy at West Point, at Purdue University, and at Air University, the node for the professional military education of U.S. Air Force officers. The academies, and professional military education at large, employ several historians. But they are relentlessly (and understandably) oriented toward an operational mindset that is pointed toward the present and the future. In the classroom, I might be teaching at West Point about Napoleon’s campaigns, or at Purdue about technologies in the world wars, or to Air Command and Staff College and Air War College students about hypersonic efforts at the dawn of the space age. But, repeatedly, I discovered that this operationalized orientation introduced current and future topics into my tasks at work. This was difficult to miss, since my teaching for West Point coincided with the Army’s establishment of a Cyber branch and of a think tank co-located with the academy, and with my own association with that entity. Unclassified materials about cybersecurity increasingly grew as a proportion of my professional reading, alongside historical studies of modern warfare. This book is the result of a half-dozen years of reading and reflection about security in the cyber arena.

And there are several people who deserve mention and thanks for how their connection, professionally or personally, helped make this book possible. Two of the faculty at Purdue University are prominent among these, serving as friends and mentors. Drs. Joe Pekny and James Eric Dietz model the passion for problem-solving and learning that marks true innovation. A departmental mentor, my thesis advisor Dr. Al Hurley, provided an important grounding in my graduate education; my doctoral advisor Dr. Adrian Lewis tolerated my writing a study of how a historical military-tech project impacted U.S. strategy and policy—and that experience paved the way for many of the landmarks in my early career.

Collegial friends and colleagues make an environment more pleasant and engaging. Across different institutions, Dr./Lt. Col. Bill Nance (USA), Dr. Michael G. Smith, and Gaylon McAlpine (USA Lt. Col., Ret.) are three such people. So are former colleagues Maj. Sterling Boyer (USAF), Joe Orlandi (USAF Lt. Col., Ret.), and Carlos Garcia (USAF Lt. Col., Ret.). Reagan Schaupp (USAF Lt. Col., Ret.) and Lt. Col. Dale Halligan likewise helped make this project possible. And there are friends with whom professional and scholastic conversations frequently intertwine; in particular, I want to mention Dr. John Blumentritt (USAF Col., Ret.) and Dr. Joseph “Jay” Varoulo (USAF Lt. Col., Ret.). The encouragement and perspective from close friends like Jean Bennett are simultaneously grounding and uplifting.

And, of course, loved ones deserve thanks and appreciation. My parents’ encouragement mentioned earlier endures; their steadfast willingness to listen to my ruminations and ideas eased the burden of writing my dissertation and transforming it into my first book, and with this (my fourth to print) they were the first people to read the manuscript and offer their thoughts and insights. Their dedication is a true inspiration. Stephanie Van Sant’s love and friendship are a tremendously important part of my life. I appreciate and delight in her eagerness to assist in my historical research as I pore over archival materials related to 20th-century military history topics. Learning and thinking about relevant and engrossing topics are some of the things we treasure and enjoy together.

CHAPTER 1

War’s Character and Nature, and the Stuxnet Trap

THE CHARACTER OF WARS AND THE NATURE OF WARFARE

War, the policy-oriented use of force against a responding antagonist, is an enduring but unpleasant element in human affairs. Evidence of warfare dates at least as far back as 10,000 BCE. Carl von Clausewitz’s classic On War justifiably remains a landmark work two centuries after its writing, since among other things it explains force and violence as expressions and instruments of policy goals.1 Uncertainty, struggle between human antagonists responding to one another, and the application of force in pursuit of policy goals continue to describe the fundamental nature of warfare.

Military history shows antagonists seeking to gain advantage in conflict, and frequently this involves the use of different methods or tactics, the use of different technologies, and often changes in both tactics and technology in order to make more effective use of the two in concert with one another. Combinations of tactics and technologies include the development of the rectangular 17th-century tercio formation, equipped with long pikes and harquebuses and able to advance or retreat with relatively equal facility; oblique order attacks by 18th-century infantry moving in close order and equipped with bayonet-mounted fusil muskets; and the 20th-century emergence of combined arms warfare, in which infantry might advance in concert with friendly armored forces and with protection provided by friendly artillery or air components directing firepower against the enemy.

The character of combat in these three examples is clearly very distinct: the tercio moves at walking pace and possesses limited organic firepower but can rapidly change direction; the close-order infantry launching an oblique attack move at the same pace with greater force but less flexibility, while wielding greater firepower; combined arms units can move more swiftly and in concert deliver devastatingly larger amounts of
firepower, while having different logistical requirements for their sustainment as an organization and for their use in battle. Each example shows a different expression of merging tactics and technology to address the combat challenges of its respective era. Despite the important impact each would have on the character of an individual war, none of the three alters the nature of war. The pike-carrying infantryman of the tercio, the 18th-century musketeer, the tank commander, and any of their comrades participate in the use of force to support a policy goal of their respective society or state. The nature of war endures, and the introduction of new tactics or technologies does not change it.

As with new technologies, new domains of combat alter the character of wars, but they do not change the fundamental nature of warfare. However, whereas new technologies frequently appear on battlefields, new domains for fighting emerge only rarely. This can lure people into imagining that the emergence of new fighting domains—which can be of tremendous importance in the character of a conflict—somehow revolutionizes warfare. Moreover, most of the new domains for exploitation have emerged only recently, lending a false sense of validity to the idea that new domains change the core of warfare. Whereas land combat dates to at least 10,000 BCE, the first fighting at sea is believed to have taken place around 1200 BCE, after a lag of about ninety centuries. In contrast, the introduction of the other domains has occurred in breathtakingly rapid succession from a historical standpoint: heavier-than-air flight inaugurated the air domain in 1903, successful strategic intelligence exploitation of the space domain began in 1960, and research into the networking of computers starting in the 1960s led to the development of the internet and the World Wide Web and arguably marked the advent of the cyber domain between 1990 and 1994. Thus, it is possible that a person born shortly before the Wright brothers’ flight could have lived to see the air, space, and cyber domains radically change the character of security affairs and conflict.

An array of authors, scholars, and other influencers has grappled with the impact of the cyber domain on the kinds of political struggle that are, or resemble, warfare. Examining the impact of social media on 21st-century conflicts, journalist David Patrikarakos argues that “we are in need of a new conceptual framework that takes into account how social media has transformed the way that wars are waged, covered, and consumed” in order to “understand the twenty-first century war” because “Clausewitzian war is becoming displaced by what [Emile] Simpson calls ‘coercive communication.’”2 Law professors and co-authors of Striking Power Jeremy Rabkin and John Yoo maintain that “cyber and robotic weapons give nations not only greater ability to coerce each other, but also more means to communicate their intentions in war and their reliability in peace,” continuing that cyber weapons (alongside robotics and space weapons) would mete out instant but non-lingering effects that would
enable the limitation of conflicts and the avoidance of mass-casualty wars.3 Whereas combat in the physical domains reflects a tendency to favor the defender over the attacker, many analysts and journalists evince the perspective that cyberattack is superior to defense; implicitly, this begs questions about whether the cyber domain is an exception to the rule in such a way as even to alter the nature of warfare. Analysts P. W. Singer and Emerson T. Brooking recently asserted that “social media had changed not just the message, but the dynamics of conflict.” Given the new opportunity for global expression and the purported ability of social media campaigning to alter military outcomes, “who was involved in the fight . . . had been twisted and transformed,” begging the question “what, exactly, could be considered ‘war’?”4

Participation in the cyber domain as part of hybrid or grey-zone warfare marks a frequent theme in writing about the revolutionary impact exerted in struggle via cyber. Journalist David Sanger posits that “the lesson of the past decade is that, unless shooting breaks out, it will always be unclear if we are at peace or war.”5 Michael V. Hayden, who served as head of the National Security Agency (NSA) from 1999 to 2005 and in other key intelligence roles until 2009, bluntly wrote after his retirement, “the cyber domain has never been a digital Eden. It was always Mogadishu.”6 Uncertainty about what kinds of digital animosity constitute an “attack” in the domain runs parallel with confusion about whether sophisticated actions are necessarily the work of states and whether states might use other techniques as well; strategists have worried that “informationalized warfare blurs the lines between peacetime and wartime, between what is considered military and what is considered civilian,” and even about the lines separating offensive from defensive actions.7

THE PURPOSE OF THE BOOK

This book aims to examine several of the notable clichés—even mythologies—which have taken hold regarding cyber conflict. Some excellent works on war and technology, such as Martin van Creveld’s classic Technology and War, have explained that “the idea that war is primarily a question of technology” is “neither self-evident, nor necessarily correct, nor even very old,” his suggestion being that the concept gained ground in the mid-20th century because of World War II.8 Other scholars speaking specifically to conflict waged through the cyber domain have contributed significant points to the conversation. Martin Libicki’s description of “cybersecurity and system compromise” as “games of competitive learning”9 harkens to classical interpretations of military struggle as a contest between actively interacting forces. Other scholars have pointed out that Clausewitzian concepts about war remain valid, that an overly narrow definition of violence can obscure a deeper understanding of cyber conflict
and even impede a meaningful understanding of some historical wars, and that new opportunities for “influenc[ing] and control[ling] an opponent’s judgment and will to fight” do not overturn the nature of warfare.10 Few works as robustly comprehend and convey cyberwar as Greg Conti and David Raymond’s On Cyber:

Cyber brings new capabilities to bear and will fundamentally alter warfare. However, at its heart, war will remain war. War is ugly, despicable, and something that will remain with us. . . . Just as airpower alone cannot solve all problems, nor can cyber. Cyberspace operations however, will play an increasingly important role in all future conflict.11

Despite this clear point, many works, including several excellent and quotable studies, project messages in dissonance with important evidence and even suggest that classic philosophy about warfare has been overtaken by cyber conflict “expanding the range of possible harm and outcomes between the concepts of war and peace.”12 New forms of danger pose menacing threats, but they do not invalidate a fundamental understanding of warfare, any more than the introduction of artillery, airpower, or battlefield chemical weapons changed the nature of warfare despite their undeniable impact on the character of conflicts. Furthermore, examples of policy-based tension that resist categorization under binary terms of war and peace pose vexing policy challenges; this is not an entirely novel dynamic in the history of military affairs, however. Important stakes, in terms of mortal dangers for personnel and policy objectives for states, mean that “war encourages—even forces—efforts to innovate. A part of the nature of warfare is therefore that the character of individual wars will change.”13 Not only are vexing challenges often less new than they might appear, but they are likely to be attempted in hopes of upsetting an antagonist’s preparations and helping to secure victory. Adversaries’ quests for competitive advantage bring the kinds of changes that recontour the character of different conflicts. Where researchers speak of “the mix of technical, policy, and social dimensions . . . combin[ing] to create and complicate a coevolving, complex adaptive system”14 in the cyber domain, historians can identify parallels in the physical realms of combat. Thus, the character but not the nature of war reshapes itself.

The application of hostility for political purposes remains central to the meaning of warfare, even if criminal acts and war acts can be difficult to distinguish as they take place. The “implications in calling [a] cybersecurity crisis a cyberwar” are considerable because the responses that may be considered in defending against an aggressor diverge from the work involved in apprehending a criminal. Of course, too, the scope of an incident may differ importantly as well. These considerations factor into why policy makers and their advisors—and even international
alliances constructed for collective security—can show reluctance to apply the term “war” to cyberattacks, even on occasions when adversary states are suspected to have prompted the attacks and even when political factors appear connected to their motivations.15 This is arguably separate from still other concerns about how a threshold for physical destruction or death might also impact how hostile cyber activities measure up in relation to “war.” Fundamentally, however, deciding whether to consider acts to constitute warfare is a political question. This decision rests on many factors, and different decision makers use different criteria and contexts for their decisions. This virtually guarantees an absence of consensus.

This book considers cyberwar broadly as entailing struggle in the cyber domain in pursuit of policy objectives. Cyberwar, therefore, relates to technical actions, and these receive attention in this chapter and are the focus of Chapters 2, 3, and 4. Cyber privacy expert Bruce Schneier aptly described malware as “a functional upgrade. It’s one that you didn’t want, one you didn’t ask for, one you didn’t need, one that was given to you by someone else.”16 Actions that use technical methods such as malware to gain illicit access in support of politically related motivations fit this work’s broad definition of cyberattacks. Other, less subtle activities, including the denial of service (DoS) of a target, also fit within the category when launched for political purposes.

Myriad examples of oft-repeated wisdom regarding cyberwar have become entrenched mythologies. These include ideas about the temporal character of cyberweapons as striking instantaneously, as rendering distance and geography irrelevant, as being easily and widely available, and as being single-use tools because their use betrays their underlying exploits. Other mythologies relate to the effects of cyberweapons, including ideas about their inherently reversible effects, about the natural dominance of cyber offense, about the role of targets in their own victimhood, and about the relationship between cyber and the older domains of conflict.

Politically motivated hostile use of social media and the leveraging of stored data dominate the attention in Chapters 5, 6, and 7, while also appearing in Chapter 8. This tends to mean that the cyber domain is used as a vehicle for activities that might be recognizable in other domains. As a leading source of news and connection for an increasing segment of the global population, social media is a tempting avenue for information operations. Different social media and social networking apps are particularly suited to other kinds of conflict-related activities as well, including discreet communications among separated members of nonstate groups. The utility of social media in conflicts dovetails with its impact in relation to the purported democratization of speech and the decline of institutional advantages and monopolies on political speech and on the use of
force. Digital communication inherently produces data, which is regularly exploited for different purposes, including being repurposed in support of strategic aims. As is typically the case for popular clichés, facts are extrapolated and contorted to the point of requiring study and clarification. This work is meant to contribute an important analysis of some of the concepts that have gained traction about politically oriented struggle in and through the cyber domain.

DEFINING “WAR” BROADLY

Definitions matter, and answering questions about what constitutes “cyberattack” or “cyberwar” strongly impacts ideas about how serious a hostile activity in the cyber domain is and what can or should be done in response to one. Perspectives abound in relation to these questions, however; lasting and concrete consensus proves elusive at national and international levels. Scholarship and action seem to have cumulatively fallen into three broad categories of thought: interpreting cyberattack and cyberwar broadly, interpreting them narrowly, and categorizing events on a case-by-case basis with more reference to situational context than to ironclad tenets.

Broad-definition advocates can argue that the political orientation of aggressive activity in the domain indicates more about an act being an “attack” or part of a “war” than any benchmark in terms of damage or lethality.17 Some authors provide very broad definitions, such as “cyberwarfare is the use of information systems against the virtual personas of individuals or groups.” The U.S. National Institute of Standards and Technology (NIST) declared “any kind of malicious activity that attempts to collect, disrupt, deny, degrade, or destroy information system resources or the information itself” to constitute an attack in cyberspace.18 If an attack’s effectiveness matters less than the attacker’s motivation, then even failed attacks should, so the argument goes, be recognized for what they were meant to be.

Ambiguity in cyberspace can be interpreted to support this perspective. Technical cyberattacks resemble cyber espionage in a way that kinetic attacks by a military force do not resemble spy work in the physical domains. Even when a military unit seeks to infiltrate an enemy’s defensive positions, the weapons, equipment, and even clothing worn by the infiltrators are likely to correspond to those of combatants. But the digital intruder who aims to research a targeted system for information cannot be well distinguished from the intruder who plans to mete out the kinds of disruption, denial, degradation, or destruction identified by NIST as an attack. This similarity arises because the prerequisite steps before causing such injury involve understanding how a targeted system works
and therefore how to damage it; covert exploration of a targeted system is thus the purpose not only of the spy collecting intelligence but also of the fighter preparing an attack.19 A broad definition of cyberattack benefits from the conclusion that an intrusion can credibly be suspected as the first stage of a potential attack rather than as the calling card of espionage.

Evidence suggests that some of the more infamous and aggressive state actors in the cyber domain subscribe to broad, even haphazardly expansive, concepts of digital realm operations. Technical cyberattacks using intrusion tools and malware are interpreted as residing on one part of a long spectrum that also includes the use of digital communications as a vector for disinformation in support of a state’s strategic aims. Absent an explicit international prohibition on cyberweapons, some state actors reportedly interpret the whole of this spectrum as remaining open to employment. This outlook has been extended so far as to include the use of ransomware like the 2017 WannaCry attacks linked to North Korea. “It is doubtful the North Koreans knew, or cared, which systems would be crippled.” In the case of North Korea, funds extorted or stolen by state-launched criminal activities have been connected with the funding of the country’s kinetic weapons development programs. Scholars have suggested that the earlier hack and release of Sony documents in 2014, attributed to North Korea because of the state’s opposition to the impending release by Sony of a satirical film mocking North Korea’s dictator, indicates that “a state can successfully alter the behavior of adversaries if it can maintain even modest amounts of anonymity so long as its attack surface is small.”20 This logic inherently views the politically motivated use of hacking as an attack and in keeping with patterns of war.

Other advocates of cyberattacks hope that cyberwarfare could be used and accepted as a means of carefully reducing the number of high-casualty wars and the death toll of those conflicts. Arguing that “the rules of war must evolve to keep pace with technology,” Rabkin and Yoo celebrate their belief that “cyber weapons could . . . offer more precise and controlled power than a kinetic weapon” and thereby enable coercive messaging without letting a war slide into the more “total” and heavily kinetic end of the spectrum. To eliminate the risk that a targeted state interprets cyberattacks against its military command and control systems as a cyber domain preparation for an aggressor’s subsequent kinetic attack (a possibility Rabkin and Yoo concede), “limiting the risk of escalation to all-out confrontation” might make it “prudent to launch cyber attacks on civilian infrastructure.” While suggesting that “precision cyber and drone attacks provide more steps of coercion beyond diplomacy and economic pressure but are short of conventional armed conflict,” Rabkin and Yoo not only interpret cyberattack as accomplishing aims similar to those of wars but also advocate the acceptance of a range of hostile actions through cyberspace as acceptable actions within the context of a conflict.21

To say that Rabkin and Yoo support an unusual and unconventional view among Western scholars is an understatement. But other scholars have suggested that more careful and curtailed usage of cyberweapons could be accepted. Noting that “an ideal war would be a war wherein civilian casualties were minimal or nonexistent and where acts of violence perfectly discriminated between combatants and noncombatants,” one modern philosophy scholar has argued that “cyberwarfare has made possible this kind of ideal warfare.”22

While some advocates of hostile cyber domain actions tout the purported advantages of using the domain to advance policy aims analogous to the use of war, still other analysts define cyberwar broadly while seeking to restrict the accepted range of activities. “Legal and ethical issues” have been extended even to the active defender’s work to trace and potentially punish an intruder. Because this active defense, also referred to as hack back, “normally involves breaking into a number of privately owned computers along the way, it is generally illegal under the Computer Fraud and Abuse Act.”23 This line of argumentation shows that holding a broad definition of cyberwar should not be assumed necessarily to represent enthusiasm for conducting hostile activities in the domain.

DEFINING “WAR” NARROWLY

Many authorities on cybersecurity issues prefer to define war more narrowly. Reluctance to engage in acts deemed cyberwar frequently accompanies these views, although some of these scholars go so far as to virtually dismiss the concept of cyberwar, and notable practitioners distinguish between exploration of systems on one hand (which is deemed legitimate) and the destruction or manipulation of systems on the other (considered illicit).

Whereas hack back involves the defender actually tracing and potentially retaliating against an assailant, more passive alternatives exist as well. One time-honored approach is the use of so-called honeypots. Borrowing a name from the physical domains’ use, notably during the Cold War, of interpersonal seduction in order to gain intelligence against unwitting targets, cyber honeypots attract an intruder to explore red herring elements in a defender’s system. These are maintained specifically to watch for intruders: to an outsider they resemble a valuable target, but to insiders they are extraneous or false elements in a system. Cyber honeypots come in several forms, including honeyports designed to “detect network scanning and enumeration attempts,” and honeywords that simulate an administrator’s passwords. Honeypots quickly alert defenders to the presence and activity of an intruder, and this helps defenders work to prevent future intrusions or minimize the impact of future incidents.24
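Part of the honeyport’s appeal is how little machinery it requires. As a minimal sketch of the idea (an illustration, not an implementation drawn from the book), the Python below listens on an otherwise unused TCP port and records every connection attempt as a likely sign of scanning or enumeration; the port number and log destination are assumed values chosen for the example.

```python
import socket
import datetime

TRAP_PORT = 4444            # assumed decoy port; nothing legitimate listens here
LOG_FILE = "honeyport.log"  # assumed log destination

def run_honeyport(port: int = TRAP_PORT) -> None:
    """Listen on a decoy port and log every connection attempt."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen()
        while True:
            conn, (addr, src_port) = server.accept()
            stamp = datetime.datetime.now().isoformat()
            with open(LOG_FILE, "a") as log:
                # No banner and no service: just record the probe and drop it.
                log.write(f"{stamp} probe from {addr}:{src_port}\n")
            conn.close()

if __name__ == "__main__":
    run_honeyport()
```

Because no legitimate user or service has any reason to touch the decoy port, even a single connection attempt is a high-signal indicator; a production honeyport would alert defenders in real time rather than simply appending to a log.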

Honeypots can also be “poisonous,” containing code that can harm an intruder. Poisonous honeypots are more controversial than other types intended merely for tracking intruders, and scholars disagree about the likely implications of and reactions to their use. Such offensively oriented digital countermeasures “may unnecessarily antagonize other states to such an extent that kinetic hostilities erupt,” but other scholars note that “the State utilizing a weaponized honeypot may be able to defend the legality of its actions on several grounds,” particularly since the damage wrought by the honeypot was caused by the intruder’s actions rather than the defender’s. An account of a U.S. Central Intelligence Agency (CIA) honeypot from the 1980s, declassified in 2004, refers to U.S. intelligence seeding gas pipeline control software (which Soviet agents were known to want to steal) with malware that reportedly precipitated a malfunction and explosion causing millions of dollars in material damage in Siberia.25

In line with the concern that offensively oriented countermeasures could bring escalation, renowned cyber analyst Martin Libicki argues that “a cross-domain response may be viewed by others as an act of aggression, despite the antecedent cyberattack.”26 This dovetails with his advocacy of what he calls “Las Vegas Rules,” in which what happens in cyberspace should stay in cyberspace and exchanges should not be allowed to expand to other domains and thus escalate. Other researchers agree, suggesting that an eye-for-an-eye approach in responding to cyberattack “is too permissive” or asserting that classifying aggressive cyberspace activities, such as the 2008 harassment of Georgian websites attributed to Russia, as attacks “would probably only have led to more intense and prolonged hostilities which could have resulted in more destructive physical effects on the ground.”27 These considerations clearly frame aggressive cyber actions as separate from—and lesser than—“real” acts of war, and these arguments opt for narrow definitions of cyberwar in order to avert the casualties expected to accrue from an escalation of tension.

Thomas Rid, the political scientist famous for declaring that “cyber war will not take place,” concedes that “‘when’ seems to be a more appropriate conjunction than an ‘if’” with respect to “when the first casualty is caused by a cyber attack.” Nevertheless, Rid argues that the impulsive shock “is likely to subside and a more sober assessment” will continue to view cyberattacks in terms of their limitations.28 Rid expects that the awareness of cyberattacks’ limitations will help contain long-term fears; he further asserts that the cyber domain offers greater opportunities for reform-via-subversion than it poses dangers through weaponization. Again, the message is one of narrowing concepts about the impact and relevance of cyberwar. Although conceding that cyber technology is much more widely available than other means of harassing opponents, political science icon Joseph S. Nye argues that “unlike nuclear [weapons], cyber does not pose an existential threat.”29

Nonbinding meetings by international scholars, conducted at Tallinn, Estonia, and yielding the Tallinn Manual, hold that victims of cyberattack may respond only in the event that their defenses fail and that although the victim’s response may acceptably come in either a cyber or kinetic form, its effects should not exceed the impact wrought by the aggression that triggered the response. U.S. manuals similarly argued that a resort to force was to be rejected except after the failure of peaceful alternatives.30 Even seemingly straightforward definitions can fall apart over disagreement about what sorts of problems (e.g., reduced functionality of systems) constitute “damage.” Noted cybersecurity thinker Lucas Kello agrees that digitally perpetrated “physical destruction or loss of life” reaches the definition of cyberwar, but he finds that “‘cyberwar’ [is] a term that should be used sparingly given that the vast majority of cyberattacks do not” cause kinetic damage or death.31

In addition to the myriad propositions advanced by various scholars and analysts regarding cyberattacks and cyberwar, practitioners possess strongly held views that are influenced by their own roles and experiences. These are understandably less frequently enumerated for public consumption. However, Hayden’s roles as chief of the NSA and later the CIA provide him with a valuable and interested perspective. His writing and appearances following his 2009 retirement offer a careful appraisal from a practitioner’s vantage. He has described how technical penetration, exploration of a targeted system, and exfiltration of copied data are examples of spy work, which is separate from combat: “we don’t call that an ‘attack.’ We call it ‘espionage.’”32 Espionage has been variously romanticized and condemned. However, spying conducted outside the bounds of an already declared war has traditionally been considered a normal activity by states and separate from warfare. Hayden’s statement therefore contributes to those arguments narrowing the definition of cyberattacks and cyberwar.

DEFINING “WAR” IS POLITICAL

In the face of the various arguments—and often divergent underlying rationales—about whether to adopt broader or narrower ideas about the definition of cyberattacks and cyberwars, policy makers and strategists are very likely to consider individual scenarios on a much more case-by-case basis. Robert Gates, U.S. Secretary of Defense from 2006 to 2011, is reported to have evinced dissatisfaction with this dynamic and notably described conflict in cyberspace as “dark territory,” analogous to dangerous and ambiguity-riddled stretches of railway track lacking signals and controls. Cyberspace dangers did not even appear in the threat assessment reports produced in the earlier phases of Gates’s tenure.33

P. W. Singer and Allan Friedman, senior figures at the Brookings Institution, observed that determinations about when cyber activities count as
warfare “will come down to making tough political decisions.”34 A number of considerations can factor into calculations about whether an identified intrusion or attack merits a military response. One of these is potential uncertainty about whether an attack on a database was conducted for criminal or military purposes.35 Since criminal elements are coopted by the regimes of some states for other purposes, and since some states such as North Korea evidently engage in criminal-style activity for reasons connected to their strategic positions, this determination can be challenging.

The blurring of the lines between military and civilian extends even more deeply. The infrastructure enabling the internet for military uses is typically the same as the infrastructure for wider nonmilitary uses. In the United States, that infrastructure is mostly owned and operated by the private sector. Some activities, such as attacks via botnets, rely on machines that have been secretly compromised without the knowledge of their legitimate civilian owners, even though the machines might be quietly supporting actions by state or nonstate actors, including those operating overseas. In other cases, civilians may voluntarily contribute their machines to a botnet for purposes of “patriotic hacking,” in which noncombatants willingly deploy their machines for a political or strategic purpose. Chinese military experts have published assertions that “the fact that information technology is increasingly relevant to people’s lives determines that those who take part in information war are not all soldiers and that anybody who understands computers may become a ‘fighter’ on the network.” Many Russian civilian computer users are reported to have used websites, including one called StopGeorgia, to contribute to Russian efforts to flood Georgian websites with traffic that would temporarily bring them offline. This coincided with kinetic fighting between Russian and Georgian military personnel in 2008.36

In such cases, distinguishing combatants from noncombatants is an inherently political decision, and it is closely associated with choices about whether to deem the cyber aspect of the conflict to constitute warfare. This, in turn, will be impacted not only by the circumstances themselves and the surrounding context but also by the people in leadership positions tasked with making policy choices. Damage to critical infrastructure has been offered as a demarcation of war, for example. Author Fred Kaplan posited:

What is critical infrastructure even? I don’t think too many people would have guessed that the first time an American president made a public statement identifying a hacker into some American facility and saying ‘we will retaliate against this in a manner and time of our choosing’ . . . [that] many people would have guessed that this would have been about North Korea’s hacking of Sony Pictures over a movie. . . . I don’t think North Korea would have predicted that such a fuss would be made about this.37

The suggested reason that a moviemaking subsidiary of a foreign company was deemed to qualify as “critical infrastructure” is that policy makers identified a need to convey a willingness to thwart a rogue state’s ongoing pattern of brazen activities. Yet, despite the fact that U.S. policy makers consider the targeting of the country’s electric grids to be an act of war, the malware-induced six-hour interruption of electrical service for a quarter of a million people in Ukraine, attributed by the U.S. Department of Energy to Russian cyberattackers, was not officially referred to as an act of war.38

When other policy makers come to power, they may interpret predecessors’ decisions as adversely shaping a strategic context. “Red lines matter,” and prioritizing the reestablishment of deterrence can be an important factor in decisions about how to respond to particular situations. Policy makers are likely to feel the need to justify visible responses to an attack by demonstrating that their actions come in response to foreign aggression. Expert opinion regarding attribution is divided, as discussed in Chapter 4. The political urge to produce evidence raises questions about the extent and character of the evidence linking an act to an actor that should be provided. “Attribution is rarely made for the sake of it,” and decisions about how to release attribution information, when to do so, and what data to provide are intrinsically political topics.39

From a technical standpoint, Libicki persuasively describes cybersecurity as “sit[ting] at the uncomfortable intersection between engineering and conflict. Winning a conflict is not the same as solving an engineering problem, largely because the other side never stops evolving to frustrate the engineering.”40 That evolution in the adversary lies at the core of the nature of warfare. Moreover, as Conti and Raymond note, whereas modern kinetic forces engage in episodic deployments to combat, “cyber operations forces are engaging adversaries every day.”41 Definitions and perspectives on bad actors may vary, but cyberspace witnesses bad actors’ activities constantly and in volume.

“Acceptable norms of behavior are often unclear in cyberspace.”42 Perhaps this is partly because cyberspace is sufficiently new or because cyberspace is an environment in which hostile activities have been identified belatedly or countered sporadically. The assertion, already a decade old, retains its validity. This helps ensure that major cybersecurity events and actions potentially liable for consideration as warfare will be treated to a large extent as an endless series of one-off unique cases. It is possible that an absence of norms breeds a norm of chaotic normlessness. Former cyber czar Richard Clarke has insisted that “any ban on ‘first use’” of some kind of effective cyberweapon “would probably only apply prior to kinetic shooting. Once a war goes kinetic, most bets are off.”43 Although perhaps seemingly drawing a line dividing cyberwar from “real” war, Clarke’s perspective more strongly reflects an awareness that
the perception of an existential threat drives combatants to ever-greater lengths to ensure victory and survival. Conti and Raymond observe that the constancy of cyberattacks and the ubiquity of the internet mean that “there is no return from the front to the safety of the homeland.”44 This concept touches on the meaning of distance and terrain in cyber conflict. Coupled with Clarke’s ominous prediction, it also begs the question of the worst-case scenarios envisioned for a potential cyberwar.

A “CYBER PEARL HARBOR”

U.S. Defense Secretary Leon Panetta’s comment in 2012 that cyberattacks against infrastructure could trigger “physical destruction and the loss of life” and thereby “be a cyber Pearl Harbor”45 leveraged provocative imagery to raise awareness about the direst prospects of an act of cyberwar. The 20th-century forebear, the attack on Pearl Harbor itself, was meant as a stunning blow that would foreclose U.S. opportunities to undertake strategically or operationally significant actions opposing Japan’s seizure of petroleum resources in the Dutch-owned islands of modern Indonesia. Although not a war-winning act, it devastated U.S. naval power in the short term even more than contemporary reports to the U.S. public indicated. A forgotten but important point about Pearl Harbor was its prominent but not singular place within a larger context of simultaneous surprise attacks against various targets in more than half a dozen locations stretching 6,800 miles from Singapore to Hawaii. The Pearl Harbor attacks are the most widely remembered in the United States; but from a strategic standpoint in 1941, the destruction of U.S. battleships nested within a context of surprise attacks. Those surprise attacks supported Japan’s objective of seizing and securing oil fields and other key resources in the southwest Pacific Ocean. Understanding that Pearl Harbor was a massive portion of a larger campaign is important if the allusion to Pearl Harbor is to be utilized in contexts such as cyberwar.

Another crucial point is that Panetta’s statement was not the first Pearl Harbor allusion regarding a vulnerability in cyberspace. Indeed, the concept of an “electronic Pearl Harbor” dates to 1991, as demonstrated by computer security analyst Winn Schwartau’s testimony to Congress that summer: “computer systems are so poorly protected today that they can essentially be considered defenseless; essentially, an electronic Pearl Harbor waiting to occur.”46 While individual hackers appeared to represent annoying challenges,

I don’t think that we have to worry as much about hackers as we do about organized groups who are much more well organized, well funded, and well motivated, who may have real reason to do penetrations of systems for either economic or industrial advantage.47

The prospect of organized and politically motivated cyberattack is fundamental to any coherent concept of a cyber Pearl Harbor. An awareness of the fact that the original Pearl Harbor fit within a larger strategic context—that the attack supported a strategic purpose and was the most prominent among what were actually several virtually simultaneous strikes—should also be demanded of sources predicting digital equivalents.

The term has been adopted with a degree of gusto. While author Richard Stiennon notes that “only a crippling military defeat thanks to overwhelming control of the cyber domain deserves to be labeled a Cyber Pearl Harbor,” he has also predicted that “in the near future there will be a devastating use of cyber attack against military systems; a true cyber Pearl Harbor.”48 Whether or not the prediction is realized, the phrasing does at least indicate an awareness that calling a cyberattack “Pearl Harbor” carries meaningful connotations. Not all authors seem to fully appreciate this fact: “imagine if Pearl Harbor had been attacked and there had been no response from Washington. This is the actual case today” through the “secret war against the United States” conducted through cyberattacks.49

Other authors have either skirted Pearl Harbor imagery or used it to highlight potential distinctions between the provocative visions and the potential dangers they envision. Ronald Deibert, the founder of the Canadian cybersecurity analysis group Citizen Lab, pointed to fragility in internet infrastructure from threats as diverse as a ship’s anchor severing undersea communications cables to cascading satellite debris effects known as the Kessler Syndrome “end[ing] global cyberspace as we know it.”50 Perhaps partly because this fragility is not vulnerable only to intentional actions, Deibert avoided using “Pearl Harbor” allegories. A work describing cyberwar’s “potential to make planes fall from the sky or cause nuclear power plants to melt down” nonetheless went to some length to avoid inserting Pearl Harbor allusions.51 Pearl Harbor can even to some extent provide a foil to the prospect of cyberwar: whereas Japanese planes strafing Ford Island or bombing the USS Arizona were marked with Japan’s red-sun roundel, attribution “won’t be that simple” in the wake of a crushing cyberattack, and “after we answer the question of who did it, the next obvious question . . . would be, ‘Is it war?’”52

For Conti and Raymond, although a shattering cyberattack “is possible, and will likely occur,” the predominant context is marked by states “quietly spar[ring] in the shadows seeking to gain access, positional advantage” and intelligence information. They describe an extensive range of targets across the military, the commercial sector, the defense industrial base, emergency services, food production and distribution, manufacturing, transportation, and other key sectors. In each area, they delineate examples of strategic, operational, and tactical targets. Despite these observations, Conti and Raymond conclude that “a long-term perspective” aware of ongoing
and persistent efforts including sophisticated espionage “imply death by a thousand cuts rather than a forever looming ‘Cyber Pearl Harbor.’”53 This is a prediction paralleled in other works by cybersecurity analysts Chris Demchak and Aaron Brantly, with the former underscoring the complications in responding to relatively low-key intrusions and the latter suggesting that hostile cyber activity would maintain a generally steady tempo occasionally punctuated by an uptick in effect “as [new] capabilities are developed and utilized.”54 Predicting a sudden searing defeat can tempt some authors to prognosticate disaster. That is a tendency not limited to writings on cyberwar. Although the potential for destruction has been demonstrated through a so far small number of cyberattacks wreaking physical damage, conflict is about more than simply destruction. Violence in warfare is a means to an end, and unless a state’s policy goals are advanced, violence breaks without building. That fact supports the contentions of authors alert to some of the more gradual actions conducted through the cyber domain. Gradual actions can answer the strategic and political goals, and they can do so without precipitating the kind of flashpoint crisis that can goad targeted states into identifying trends and developing coherent and concerted responses. Key among these gradual approaches, and massively evident in the cyber domain, is espionage. WHERE ESPIONAGE FITS Where scholar Paul Rosenzweig observes that “in cyberspace, the line between intelligence and war is ill-defined, at best,” Hayden is even more blunt: “the distinctions break down entirely in the cyber domain.”55 This complicates the implications of other concepts about espionage, however, and paint the cyber domain as a dangerous landscape. The Tallinn Manual 2.0, published in 2017 and built on the shoulders of its predecessor, declares that “the International Group of Experts agreed that customary international law does not prohibit espionage per se.” Thus, although the cyber spy may be virtually indistinguishable from the cyber warrior, the former’s activities cannot be universally condemned even in an abstract sense. The year Tallinn Manual 2.0 appeared, Hayden himself added, “the theft of . . . data is acceptable international practice. It is honorable international practice.”56 One aspect that does attract distinction appears within espionage. Hayden and most others in the profession among most countries find a distinction between acceptable types of espionage that deal directly with security issues, on one hand, and improper espionage that seeks to uncover information that is not directly connected to security issues but which is linked to other areas, such as the development of one country’s economic sectors, on the other. If distinguishing between a cyber spy’s intelligence

One aspect that does attract distinction appears within espionage. Hayden and most others in the profession, across most countries, draw a distinction between acceptable types of espionage that deal directly with security issues, on one hand, and improper espionage that seeks to uncover information that is not directly connected to security issues but which is linked to other areas, such as the development of one country’s economic sectors, on the other. If distinguishing between a cyber spy’s intelligence gathering and a cyberwarrior’s attack preparation is difficult because the activities can be identical, then distinguishing between security-oriented espionage and economically oriented espionage poses challenges because the target sets can overlap. Rosenzweig explains that “at some point, economic espionage . . . blends into national security espionage, and criminality becomes spying.”57 This is particularly likely when one country targets industries and business sectors of another that support the targeted country’s defense industrial base or that overlap with the spying country’s own economic and defense priority areas for development.

The People’s Republic of China (PRC) is frequently introduced as the iconic example of this kind of espionage. For two decades, most of the longest-lasting espionage projects discovered and disclosed by the United States and other countries have been traced to hackers in China. State-sponsored and persistent hacking groups will be discussed in the next chapter since the legacy and seriousness of their deliberate and patient efforts put important nuance on assumptions about illicit and hostile cyber activities striking with instantaneous speed. Within the Chinese People’s Liberation Army (PLA), the General Staff Department is entrusted with the establishment of war plans and doctrine, as well as military training and intelligence gathering. Its Third Department comprises a dozen bureaus, and while the First Bureau “appears to be responsible for preserving information security of the PLA,” each of the others is linked to conducting espionage directed at targets in various regions. For example, the Second Bureau (also known as Unit 61398) focuses on economic and military espionage targets across the United States and Canada; this group was the first identified “advanced persistent threat,” and so also became known globally as APT1.58

Yet, when confronted by assertions and evidence of spying, journalists identified a long-standing pattern “in a scripted Chinese response: it’s not us. . . . It’s a bunch of teenagers, or criminals, or miscreants.” This argument is dismissed by some analysts, who suspect that the surveillance that China conducts internally means that the domestic population has little opportunity to undertake cyber mischief without at least implicit authorization. Evidence of “underlying defects” in exported products reinforces such impressions, particularly when the faults facilitate espionage and when the exported products constitute an increasingly comprehensive presence in telecommunications infrastructure.59 Schneier observes a correlation between the intrusiveness of surveillance and its capacity to go unnoticed. Nearly ubiquitous Chinese electronics hardware exports and evidence of malware insertions into devices feed concern that extensive espionage is facilitated through bugged devices sold to targets.60

Researchers note that technology that might be imagined as complicating or even foreclosing opportunities for espionage may actually be adapted to facilitate its continuance. One of the originating motivations in the development of the first effective 20th-century computers was itself the identified need to break enemy codes and gather intelligence. While computers have not removed the need for human beings in various areas of espionage work, computerization has provided increased efficiencies while reducing associated costs and risks.61 The advent of cyberspace may not allow instantaneous spying, but it can enable spying from geographic ranges that protect the individuals conducting espionage, and it has been used to obscure attribution or to create a fig leaf of purported deniability.

Intelligence information is important not only to an attacker but also to a defender, and this is as true in the cyber domain as in the physical realms. Nowhere can intelligence be expected to be perfect, but relatively accurate and timely intelligence can provide decision makers with a strategic advantage. This is as valuable to a defender as to an aggressor. For that reason, in a military context, writers have suggested that reconnaissance of an adversary’s cyber espionage, cyberattack, and cyber defense capabilities could constitute areas of interest and, implicitly, areas worth guarding against the adversary’s parallel efforts. In a broader context, a tool like deep packet inspection is associated with intrusive surveillance and potential censorship, but it can also “be programmed to detect certain bit sequences associated with known malicious code.”62
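
The core of that signature-matching idea can be sketched in a few lines of Python. The sketch below is a toy illustration rather than a working inspection engine, and the “signatures” are invented placeholder bytes, not drawn from any real malware:

```python
# Toy illustration of signature-based payload inspection: scan raw payload
# bytes for sequences associated with known malicious code. The signature
# bytes below are invented placeholders, not real malware fragments.

KNOWN_BAD_SIGNATURES = {
    "example-worm-a": bytes.fromhex("deadbeef4141"),
    "example-loader-b": b"\x4d\x5a\x90\x00\x03",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, sig in KNOWN_BAD_SIGNATURES.items() if sig in payload]

packet = b"...ordinary traffic..." + bytes.fromhex("deadbeef4141")
print(inspect_payload(packet))  # ['example-worm-a']
```

Real deep packet inspection engines work on reassembled traffic streams and maintain vast, continually updated signature sets, but the dual-use character noted above is visible even here: the same matching logic can hunt for malware or for disfavored speech.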

ENCRYPTION

Predictably, encryption is as complex an issue as is espionage. Conceptually, the purpose of encryption is to enable confidential communications to remain private. Since technologies are not “good” or “bad,” and cannot be reserved for use only by “good guys” or “bad guys,” encryption might be thought of as a technology used for different purposes and by different actors, much as is the case regarding espionage.
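
That basic promise, confidentiality for whoever holds the key, can be illustrated with a minimal sketch using the third-party Python cryptography package; the message and key handling here are purely illustrative:

```python
# Minimal sketch of symmetric encryption with the "cryptography" package
# (pip install cryptography). Without the key, the token is opaque bytes.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret shared by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                  # ciphertext, unreadable without the key
print(cipher.decrypt(token))  # b'meet at the usual place'
```

The same dozen lines serve a dissident, a corporation, or a militant cell equally well, which is precisely the neutrality described above.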

Nonstate actors’ involvement in war illustrates a demand for and use of encryption. The growth of the internet in the 1990s occurred at the same time that jihadist groups consolidated their political hold on Afghanistan and as one of their former financiers, Osama bin Laden, formed a dispersed network of militants known as al-Qaeda. Intent on launching periodic attacks against Western targets, planners in different locations used encrypted email to coordinate efforts and prepare a series of bombings, most of which were directed at diplomatic or military targets in Africa or the Middle East. The expansion of the internet coincided with the Global War on Terrorism, launched in response to al-Qaeda’s attacks against key political, strategic, and economic landmarks on the U.S. eastern seaboard. A muscular military response by the United States and other nations included a multipurpose and eventually controversial military operation in Iraq. Evident concern among jihadist forces about the possibility that commercially available encryption tools might not be secure for their communications reportedly spurred affiliated groups toward the development of indigenous encryption tools. The first of these, Asrar al-Mujahideen, was developed by an al-Qaeda affiliated entity called the Global Islamic Media Fund (GIMF) in 2007, and its existence was subsequently advertised in the al-Qaeda platform Inspire.63

The advantages of privacy notwithstanding, for a variety of reasons terrorists do not always adopt practices meant to ensure security. For example, the extremists affiliated with the Islamic State of Iraq and Syria (ISIS) launched attacks in Paris in November 2015, but they reportedly “didn’t bother to use encryption while communicat[ing].” This neglect, which ran afoul of operational security (OPSEC) guidance distributed on contemporary ISIS online forums, enabled police to raid plotters before they could undertake a follow-on attack on the business district of western Paris.64 A secrecy requirement can pave a path to paranoia. Al-Qaeda reportedly created and released additional encryption tools under the GIMF banner as a false flag, so that other jihadist groups would not recognize their exact origin. These tools were “devised so as to keep track of rival groups,” as well as to guard against suspicions that one or another militant organization might be a decoy established by U.S. or allied intelligence entities to communicate with, identify, and entrap extremists. GIMF even issued a recall in November 2013 on an encryption tool purported to have been created by ISIS. Many ISIS members are reported to have favored the so-called Snowden Phone because of the Android device’s prominent encryption capabilities.65

Nonstate militants and other violent extremists are not the only groups desiring private communications or adopting encryption, however. Corporate giants in the world of electronic search, networking, and telecommunications have responded to this interest with projects and policies designed to preserve client privacy from external surveillance. Google began its investment in undersea cables as a means of carrying vast amounts of internet traffic as early as 2008. In the following years, it would initiate work on constructing its own fiber-optic lines from the United States to Asia and Europe, spanning both the Pacific and Atlantic Oceans. Information assurance may have played a part in these developments and clearly was involved in the company’s decision to encrypt the data transmitted between its data centers. Apple, another giant of the 21st-century economy, took steps to provide encryption that shields communications not only from other individuals and groups but even from Apple itself.

This model was promoted through the argument that if Apple possessed an ability to monitor communications on its devices, then any and every government around the world might then make claims on Apple to do so, for a variety of reasons and with a range of privacy and security implications.66

Inadequate or flawed encryption might invite dangers through a false sense of security, and the extreme German confidence in the World War II-era Enigma machine stands as a paramount example of this. An expedient prioritization of functionality can combine with technological confidence to precipitate inadequacies or even the absence of security tools such as encryption. Dramatic advances in the practicality of unmanned aerial vehicles (UAVs) and their increasing and diverse utilities encouraged rapid work toward functionality that some have suggested was delivered at the cost of security. One result was that a Russian-developed, commercially available $26 software application called SkyGrabber reportedly allowed insurgent forces in Iraq “to view, record, and share video relayed by” U.S. UAVs. Although the software did not extend to other hacking capabilities, the case was used to question the rush to functionality as short-sighted.67

Some writers on the dynamics of encryption have commented on the paradoxical balance between strong and weak encryption alternatives; controls over exportation and dissemination of encryption tools waned at the same time that the internet began to grow, and as a result, maintaining secure encryption implied an adversary’s ability to use the same tools to resist surveillance, whereas weaker encryption systems might allow surveillance but would also leave one’s own data vulnerable to the adversary’s own efforts.68 Experts have also noted that even the availability of strong encryption does not preclude an entity’s ability to derive conclusions from data. “Anything that is worth anything now is being encrypted,” Hayden conceded in 2017, but the absence of content itself means that organizations that had previously analyzed messages and broken encryption must adapt, using other information from the flow of traffic to provide usable information.69 This matters because it illustrates an enduring trait in the nature of conflict. Antagonists respond to their environments and to the actions of one another. Evolution is endemic to conflict, and this can make accurate extrapolation a challenging task.
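
The kind of traffic-flow analysis Hayden alluded to can be suggested with a toy sketch. The flow records below are invented; the point is only that endpoints, timing, and volume remain informative even when every payload is securely encrypted:

```python
# Toy traffic analysis: with content encrypted, flow metadata (who talks
# to whom, when, and how much) still yields intelligence. Records invented.

from collections import Counter

flows = [
    # (source, destination, bytes_sent, hour_of_day)
    ("host-a", "server-x", 1200, 2),
    ("host-a", "server-x", 980, 2),
    ("host-b", "server-y", 150, 14),
    ("host-a", "server-x", 1500, 3),
]

talkers = Counter((src, dst) for src, dst, _, _ in flows)
night_bytes = sum(b for _, _, b, hour in flows if hour < 6)

print(talkers.most_common(1))  # the heaviest conversation pair
print(night_bytes)             # volume moved during off-hours
```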

THE STUXNET TRAP AND EXTRAPOLATIONS

Many of the more provocative and disturbing scenarios regarding a cyberwar involve the use of malware to wreak kinetic damage or even mass death, either directly or by the secondary effects of cascading failures triggered by cyberweapons causing life-sustaining infrastructure to malfunction. To date, no deaths have been attributed to cyberwarfare. A small number of incidents have occurred in which cyberattack has been connected to physical damage to a target. The first instance of such cyberattack occurred in Australia in 2000, when sewage treatment facilities were sabotaged due to personal grievance; in 2007 U.S. researchers in Idaho confirmed that malware could pose a threat to computers overseeing industrial control systems; in 2010 the Stuxnet malware was discovered and was suspected to have been connected to the policy objective of slowing Iran’s nuclear enrichment program; in 2014 an undisclosed steel mill located in Germany and rumored to belong to Thyssen-Krupp suffered physical damage to its furnaces, although motivation behind the attack remains unclear.

Extrapolating patterns from these examples has proven problematic. The personality factors in the Australian case mean that it was indisputably not “cyberwar,” although it suggested that cyberattack could trigger hazardous malfunction for a computer-monitored infrastructure system. The Aurora test conducted by Idaho National Laboratory in 2007 demonstrated the same point, and as an experiment, the event was specifically not “cyberwar.” Because researchers connected Stuxnet to both physical damage impacts and a suspected political motivation, it has attained prominence as a key example of cyberwar. Ambiguities regarding the German steel mill include the identity of the targeted mill, the exact portions of the complex that were damaged, attribution as to the origin of the cyberattack, and motivation. The paltry information precludes the notoriety or examination to which Stuxnet has been subjected.

Forensics experts and cybersecurity analysts describe Stuxnet as a large and sophisticated piece of malware. Half a megabyte in size, it was between twelve and fifty times larger than most examples of malware examined by cybersecurity professionals. To ensure its successful intrusion into targeted systems, it reportedly leveraged four zero-day exploits. This term denotes a method taking advantage of a vulnerability that has not been identified to the public or even to the creators of the targeted software, and the topic will be addressed further in Chapter 2. The presence of one or more zero-day exploits indicates sophisticated code that requires time and money to construct; analysts estimate that Stuxnet’s development cost “in the low double-digit millions of dollars.”70 Whereas many examples of malware, such as the mysterious Conficker worm that infected six million computers in 2008, are designed to facilitate their own propagation, analysts noted that Stuxnet was written to limit its own propagation, “going after a specific industrial controller, manufactured by Siemens, configured to run a series of nuclear centrifuges” in “the exact setup at the Natanz nuclear facility.” As a result, analysts concluded that the malware had been written with the policy objective of slowing Iran’s nuclear enrichment, which many experts believed was associated with that country’s ambition to develop a nuclear weapon. Initially limiting its own propagation and written so that its “sharp focus on the Iranian centrifuges rendered it harmless to other computer networks,”71 Stuxnet differed greatly from most malware operating as of the 2010s.

Since Stuxnet stands as both the sole example of a kinetic-effect cyberweapon and as an exception to many of the ruling patterns regarding cyberweapons, the urge to extrapolate from its example carries uncertainty. This has not prevented comment about the future of cyberwar, based on the experiences and interpretations of Stuxnet. Important aspects remain unclear a decade after the malware’s first discovery in the spring of 2010. Different sources disagree even about the spread of the malware’s final version. Some analysts have described Stuxnet as “a well-designed cyber weapon that did not cause global effects,” and others used it as evidence that “a cyberweapon could be kept on a short leash.”72 In contrast, one New York Times article in the months after the malware’s discovery declared that “Stuxnet . . . was splattered on thousands of computer systems around the world, and much of its impact has been on those systems.” Symantec reported that the initial infections spread to reach a total of 100,000 devices located in more than 100 different countries. An executive at Microsoft reflected in 2018 that “we essentially had to end up defending over a billion customers in a response to what ultimately was meant to be a highly targeted, highly precise” action.73

Of course, different vantage points can shape diverging views of what constitutes the impact of an attack. Whether it is an inert piece of code half the size of a single camera phone image, on one hand, or the shadow of a cyberweapon, on the other, depends in part on perspective. Libicki seems to conclude that uncertainty may prove to form a pattern with cyberattacks; in contrast to the tools for battle damage assessment following a kinetic strike, cyberweapons “may leave even the victim scratching his head about what the damage was.” Rid noted that the slow and erratic trend of eroding system efficiency helped preserve the malware’s secret, as the malware was believed to have run for two years before its eventual discovery. Other researchers have suggested that its discovery was delayed by the same secretive and hermetic approach to security-related issues that had made the enrichment technologies such an otherwise inaccessible target in the first place.74

Despite important and lingering ambiguities, Stuxnet has inspired some notable predictions about the future of struggle in the cyber domain. Kello identified several “doctrinal quandaries” about making the malware, including issues of “tactical viability,” the potential of the technology to be adopted and repurposed by other entities, and “anxieties over the dangerous precedent that the operation would set.” Other experts predicted that a mere six months would elapse before copycat attacks were conducted by criminal groups eager to leverage sophisticated code.75 Credible accounts of such an occurrence with respect to this code have not appeared in over ten years, although analogous allegations have been made regarding repurposed weaponization of other reportedly leaked software.

Contrasting statements have noted that the complexity of computerized supervisory control and data acquisition (SCADA) systems reduces the likelihood of anti-SCADA malware being something eligible for rapid development or easy leverage against a large number of targets. This does not allay others’ concern about long-term espionage exploring and potentially seeding complex systems with malware, to be triggered at a moment of the adversary’s choosing.76 Still, other predictions argue that “clearly, future cyber attacks will target systems in a more sophisticated manner than Stuxnet.”77 Pronouncements such as this carry a palpable air of certainty—and an element of a subconscious presumption that more sophisticated weapons are more “modern” and more effective. While the development and use of cyberweapons may possibly reveal a trend toward increasingly complex and sophisticated forms of malware, there is no deterministic set of forces fating them to do so. Cyberweapons are technologies—and technologies are made, adopted, and adapted to address perceived needs in acceptable ways. Trends in the development and use of cyberweapons will be contoured by ideas about policy goals and also by decisions about the strategic, operational, and tactical approaches that will support the pursuit of those objectives. Understanding that fact facilitates a meaningful understanding of warfare. Studying conflict in cyberspace means questioning commonplace wisdom about cyberweapons and examining operations through cyberspace more deeply.

CHAPTER 2

Temporal Mythologies of Cyberwar

INSTANTANEOUS SPEED

Perhaps the most important of the temporal mythologies regarding cyberwar deals with time itself. The actions targeting Georgia in 2008, coinciding with the kinetic combat between that country and Russian military units, were, for example, dubbed by some “the internet version of blitzkrieg.”1 This in itself is a significant characterization. “Blitzkrieg,” the infamous use of “lightning war” by Nazi forces against hapless Allied powers, evokes such visceral imagery that it has become almost a cliché. And in a sense, this illustrates the danger of mythologies. What journalists from neutral nations saw in 1939 and 1940 and called “blitzkrieg” was, for the actual practitioners, something different. Certainly, it was a war of movement. It was a “quick and lively” approach to campaigning that sought to leverage speed in order to sow confusion within the enemy and facilitate an attacker’s exploitation of opportunities.2 But the instantaneous speed of German advances in World War II was itself an impression, a reaction—and a mythology.

Figures who toil to alert the public or decision makers to serious opportunities or dangers frequently come to feel that their exertions go insufficiently rewarded. These figures can respond by raising their cry still louder and in more extreme tones. “Cyber war happens at the speed of light,” former cyber-czar Richard Clarke has declared. Since “photons of the attack packets stream down fiber-optic cable, the time between the launch of an attack and its effect is barely measurable,” bringing “risks for crisis.”3 Others have posited that electronic interconnectivity has crucially altered counterterrorism, since “time and distance have shortened while the speed of operations has increased dramatically in the Cyber Age.”4

Conflicts in the first decades of the 21st century have already frequently produced scenes that appear to reinforce the idea of cyberattacks hitting targets with instantaneous speed. The Syrian Electronic Army’s activities in support of Bashar al-Assad’s regime during the civil war that began in 2011 included abrupt defacements of more than 120 media outlets and other websites, including leading entities such as CNN, Huffington Post, Forbes, the Washington Post, and Facebook; Shamoon malware wrought $15 million in damage when it made tens of thousands of Saudi Aramco’s computers inoperable and wiped out data that had not been appropriately backed up as a result of the coinciding Ramadan holiday festivities.5

Among would-be targets, a potential corollary to the concern about the instantaneous speed of cyberattacks is that a defender is left with two alternatives in response. One is to succumb in the face of blindingly rapid aggression, either consciously or after a futile attempt to resist with outpaced actions from a bygone era. Popular impressions of the 1940 campaign in Western Europe nest closely with this picture. Such impressions enjoy a degree of validation, in the same way that stereotypes and generalizations tend to receive adequate reinforcement to forestall their being more seriously examined and questioned. Thoughts of cybergeddon are reinforced by cinema plots fed by a movie industry that both shapes and reflects popular assumptions and fears, and such pictures run surprisingly parallel to grainy images of burnt Dutch towns or of German panzers rolling through France. Some authors argue that the forlorn implications of the first alternative illustrate the need for a second alternative: a dramatic acceleration in the speed with which the defender can act. Indeed, for some attacks, such as DoS attacks, a persuasive case might be made that responses could and should be mounted at a much more rapid pace. Automation might permit straightforward, rapidly launched, and scalable responses to specific (usually rudimentary) forms of cyberattack.6
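
A minimal sketch can make the automation point concrete. Token-bucket rate limiting is one common building block for automatically throttling rudimentary floods; the parameters below are illustrative rather than tuned recommendations:

```python
# Minimal token-bucket rate limiter: one simple, automatable response to
# rudimentary flooding. Each client is allowed a sustained rate plus a
# modest burst; excess requests are dropped or challenged automatically.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # request rejected without human action

bucket = TokenBucket(rate=10, capacity=20)  # ~10 requests/second per client
print(bucket.allow())  # True until the burst allowance is exhausted
```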

The problem with adopting an overly broad-brush approach to thinking about aggression in or through the cyber domain is not unique to the domain: generalizations can obscure potentially nontrivial details and contexts. For example, some events can be created rapidly, while others can only be launched rapidly but require other factors to come into place before this is possible. These issues become even more nuanced when arguments begin about what kinds of unfriendly activities constitute a cyber “attack,” let alone cyber “war.” The covert nature of espionage obviously means that it is impossible to know its full scope and volume, but experts openly concede that an enormous amount of unfriendly activity occurs as cyber espionage. Spying is perhaps as old as war itself, and it is closely bound to strategic issues and to war, but in the physical domains spying is not considered to be “war,” unless perhaps it is spying conducted during an actual armed conflict. Scholars seeking to bring order and legal consensus to the cyber domain have observed that “cyber espionage can differ in both speed and volume from more traditional methods of espionage,” including opening the door to remote spying.7 In some sense, an East German-sponsored online spying gambit played midwife to the birth of a wider awareness of the need for cybersecurity.8 Cyber espionage has since matured and enabled monumentally larger volumes of materials to be accessed and exfiltrated. One former intelligence figure has noted that “the speed, the volume, the velocity, your inability to determine the veracity of this information, the variety of different ways in which it comes in, that’s the different part of the [spy] game now” due to the cyber domain.9 The former head of the NSA, Hayden, has called the infamous actions of Edward Snowden “the worst hemorrhaging of legitimate American secrets in the history of the American republic,”10 although, of course, the veracity of reportedly leaked documents is impossible to verify by legitimate means.

An obvious inference to be drawn from such assertions relates the massive volume of exfiltrated materials (and the opportunities of abrupt large-scale disclosure online) to the purportedly instantaneous speed with which untoward activities can be undertaken online. It is valuable to dig more deeply in the face of these assertions, interrogating ideas to see how widely and how realistically they are applicable. Rid, a scholar renowned for his opposition to the notion that cyberwar will occur, has pointed out that many landmark espionage malware cases, including Stuxnet, Duqu, Flame, and Gauss, share a common trait: they were likely to have been “clandestinely operating for years before security researchers in private companies detected them.”11 One of the small number of acknowledged cyberattacks to wreak physical damage occurred in 2014, when an undisclosed German steel mill was hacked and major equipment was brought to a point of destructive malfunction. SANS researchers reported that the attack required “establishing a foothold on the network” by compromising some of the workstations, and that follow-on reconnaissance “would have provided access of credentials or unsecured systems and connections.”12 Such efforts take time, suggesting perhaps an adjustment to the mythology of instantaneously effective cyberattacks: while some activities can be created quickly and others can be launched abruptly, many attacks (and especially many of the complex and tailored attacks) cannot be created without significant reconnaissance and preparation to better ensure that intended effects will occur. That kind of work flies in the face of assumptions about cyberweapons always striking at the speed of light simply because they are transmitted via electronics. “The visible strike of an attack in the cyber domain is fast and the effects can be sudden; however, the planning and organization needed to create a precision effect demand significant time and resources.”13 And, at other times, although development time is needed, the “visible strike” itself is not even instantaneous.

A realistic appraisal of the myth that cyberattacks possess instantaneous speed, therefore, demands an examination of some of the different kinds of hostile activities. While many other forms of attacks exist, further consideration of distributed denial of service (DDoS) attacks and of advanced persistent threat (APT) activities can help shed useful light on the issue of the presumptively instantaneous speed of cyberattacks.

DISTRIBUTED DENIAL OF SERVICE

Denial of service (DoS) attacks represent possibly the most common example of an easily established, swiftly striking cyberattack. Different approaches exist, but the essence of a DoS attack is that attacking devices flood a target with either deliberately incomplete or spoofed signals. In the first case, the target is overwhelmed because it interprets and reacts to the attack as if it were legitimate incoming traffic encountering some technical hurdle; in the second, it is overwhelmed by the sheer deluge of spoofed pings.14 DoS attacks can be distributed (thus DDoS), and although they are commonly launched to enunciate some political statement, the threat of DDoS attacks has also been used by criminals to extort money.15 This points to a recurring theme: intersections between crime, espionage, and conflict in the cyber domain are frequent, and the distinctions can be either obscure or even dubious.
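
The “distributed” element is what makes the harder-to-filter variant recognizable: the flood arrives from many unrelated sources at once. A toy detector can suggest how defenders spot that pattern; the thresholds and the request log it expects are invented for illustration:

```python
# Toy DDoS indicator: a surge of requests arriving from many distinct
# sources within a short window suggests a botnet rather than one noisy
# host. Thresholds here are invented, not operational guidance.

from collections import defaultdict

WINDOW = 10            # seconds per bucket
MAX_REQUESTS = 1000    # request volume considered abnormal per window
MIN_SOURCES = 200      # distinct IPs suggesting distributed origins

def flag_windows(requests):
    """requests: iterable of (timestamp_seconds, source_ip) pairs."""
    windows = defaultdict(list)
    for ts, ip in requests:
        windows[int(ts) // WINDOW].append(ip)
    return [w for w, ips in windows.items()
            if len(ips) > MAX_REQUESTS and len(set(ips)) > MIN_SOURCES]
```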

Although Mafiaboy’s DDoS attacks against Yahoo and other websites in 2000 helped usher in the DDoS class of attacks, one of the early politicized uses of DDoS occurred in Estonia in 2007, when the small but electronically connected Baltic country moved to rid the capital of one of the Soviet-era landmarks imposed on it by its occupiers after World War II. Estonian actions were interpreted by Russian nationalists as hostile to Estonia’s much larger eastern neighbor, and successive waves of cyberattacks befell Estonia: first the DDoS targeting of government servers and media portals from April 26 to 28, and then a sustained flood of attacks from April 30 to May 18 against the country’s information infrastructure, including its Domain Name System servers’ backbone routers. Scholars called this second phase “sophisticated, massive, and well coordinated,” as artificial traffic sent via 178 countries simulated traffic “400 times higher than its normal rate.”16 It was, in fact, the Estonia cyber struggle that triggered the first use of the term “cyberwar” in the mainstream press such as the New York Times.17 The “invocation of ‘warfare’” terminology in the media did not coincide with North Atlantic Treaty Organization (NATO) agreement that the activities triggered the Article 5 provisions for collective defense.18 However, steps in response to these attacks included the establishment of the NATO Cooperative Cyber Defence Centre of Excellence the following year and the ostentatious siting of its headquarters in Tallinn, Estonia. Although mixed results can be seen at the tactical level from the Estonian 2007 episode, at the strategic level the awakening of NATO to cybersecurity issues was likely a setback for the attacker.

Fourteen months later, another small country on Russia’s borders suffered DDoS attacks but with more successful results for the attacker. The status of separatist regions within Georgia represented one item in a list of tensions between Georgia and Russia, as the former country sought closer ties with Europe and the United States and the latter held suspicions about countries in its “near abroad” building relationships with other powerful states. DDoS attacks interrupted access to Georgia’s governmental websites in July 200819 and continued into August, coinciding with kinetic fighting between Georgian and Russian soldiers. Georgia’s government was effectively gagged on the world stage, while its forces were pressed back into the interior. In addition to DDoS attacks to eliminate accessibility of Georgian government sites, other relatively common attack types like SQL injections were used to deface Georgian websites, such as by conflating images of the country’s then-president Mikheil Saakashvili with photos of Nazi dictator Adolf Hitler.20 The pairing of internet interruption attacks like DDoS with simple website defacements reinforced the point that in terms of messaging and diplomacy, Georgia was cut off from the larger world as it already was geographically.

Political leveraging through DDoS attacks is not limited to Russian geopolitical objectives. Contemporary to the Georgia war in 2008, targets as geographically disparate as Burmese political dissidents and Ukrainian newspaper websites were temporarily muzzled by DDoS attacks.21 As will be discussed later with respect to attribution, determining who might be coordinating or directing a cyberattack is a complex process. Experts in the wake of the DDoS attacks against Estonia noted that “sources of the attack were worldwide rather than concentrated in a few locations,” and as a result some suggested that they were “the product of spontaneous anger from a loose federation of separate attackers,”22 whereas others have seen the fingerprint of more centralized coordination and the use of bot devices that are centrally controlled, although the devices’ own users are unaware. Whether DDoS attacks are launched by individual conscious users, by networks controlled for strategic purposes, or by botnets rented out on the Dark Web, they do represent an easy and quick-acting approach in which “you don’t need a lot of skillsets to do this. But you can do a lot of damage with this.”23 Attackers’ capabilities advance steadily: Mirai malware assembled a botnet encompassing an estimated 400,000 devices, and the heaviest DDoS attacks had already by the fall of 2016 exceeded a peak of 1.1 terabytes per second.24 This does not mean that DDoS attacks pose insurmountable problems, however. Libicki noted not only that “DDoS attacks are . . . not all that powerful” but also that “their military uses are modest,” since they interrupt access to data but neither destroy nor manipulate data.

What DDoS can accomplish is to keep sites temporarily from being accessible and thereby “restrict the ‘freedom of maneuver’ in cyberspace” that a target might otherwise possess.25 Others have pointed out that appropriate equipment, knowledge, and information bandwidths can overcome a would-be DoS, and that as a result “DDoS attacks aren’t the breathless military assaults that the press makes them out to be.”26

ADVANCED PERSISTENT THREATS ESPIONAGE

The ramifications of long-standing espionage activities cause enduring problems long after a DDoS has ceased interrupting access to one or another website. Pernicious espionage efforts by advanced persistent threats, or APTs, so called because of their technical sophistication and the longevity of their efforts, cast a long strategic shadow—and yet, because such espionage does not typically cause destruction, no real consensus has emerged in favor of necessarily treating such activities as “attacks” per se. Spying is understood as something done generally by countries rather than being something unique to particular states or regimes or political philosophies. Indeed, cybersecurity writer Brian Mazanec observed that “the preeminent cyber actors . . . [have] more to gain from engaging in cyber warfare than from significantly restricting it or giving it up entirely,”27 and if such is the case with cyberwarfare, it is all the truer still regarding espionage.

Rather than following a haphazard approach to confirm whether common vulnerabilities leave a target undefended before continuing on toward perfunctory screening of another potential target, APTs carefully select targets that are suspected to possess data of some value, economic or otherwise. A reconnaissance team may prepare for months by watching for patterns in a target’s communication habits, in order to eventually launch a spear phishing attack against a target and frame a message as plausibly as possible to appear legitimate and trustworthy. Successful attempts deploy malware against a target by means of bogus links that a targeted user might follow or malware-laden attachments that a targeted user trusts.28
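
Because spear phishing depends on a message looking almost right, one common defensive screen is to flag sender domains that nearly, but not exactly, match a trusted one. The sketch below illustrates that heuristic; the edit-distance routine is standard, and the trusted domains are invented examples:

```python
# Small spear phishing screen: flag sender domains within a small edit
# distance of a trusted domain (e.g., "examp1e-corp.com" imitating
# "example-corp.com"). Trusted domains below are invented examples.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"example-corp.com", "example-bank.com"}

def suspicious(domain: str) -> bool:
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(suspicious("examp1e-corp.com"))  # True: one character substituted
print(suspicious("example-corp.com"))  # False: exact match is trusted
```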

From this point, the espionage team reinforces and expands its ability to access and navigate a system. This can include ensuring new backdoor routes through which the APT can regain entry if the target were to recognize and seek to end the intrusion. The risk of the target’s own realization is minimized, however, by the APT unobtrusively elevating the access privileges of users it has compromised, or else by compromising further users within a system. One eerie and infamous espionage technique is to activate the microphones and webcams that have been almost ubiquitous on computers and phones for more than a decade.29 Another reported tool is a keystroke logger that records and compromises what is typed onto a device. APTs can operate, navigate, and collect for months or even years without detection.

Collection can proceed quietly for an extended timeframe, compiling staggering troves of data. In fact, the enormous scale of the data that may be exfiltrated is reputed to be one of the more serious potential challenges confronting an APT. The abrupt exfiltration of a torrent of data could spell a telltale sign of illicit presence within a network, prompting APTs to take steps that obscure the flow of data and potentially the paths that data takes as it flows back to the APT. This reportedly offers a side benefit, by complicating the process of following the data flows as a means of achieving attribution.
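
That telltale sign, outbound volume abruptly departing from a host’s historical baseline, is simple enough to sketch. The daily byte counts below are invented, and a real monitoring system would watch many more signals than this:

```python
# Sketch of exfiltration detection by volume anomaly: alert when a host's
# outbound traffic far exceeds its historical baseline. Figures invented.

from statistics import mean, stdev

history = [52_000, 48_500, 50_200, 47_800, 51_100, 49_600, 50_900]  # bytes/day
today = 4_800_000  # a sudden torrent of outbound data

baseline, spread = mean(history), stdev(history)
if today > baseline + 5 * spread:
    print(f"alert: {today:,} outbound bytes vs. baseline of {baseline:,.0f}")
```

It is exactly this kind of simple threshold that a patient APT defeats by trickling data out slowly or disguising it within routine flows, which is why the passage above treats slow, obscured exfiltration as the mark of sophistication.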

The term “advanced persistent threat” (APT) reportedly dates to 2006, when the U.S. Air Force was trying to describe at the unclassified level the patient and persistent cyber espionage identified as originating in China.30 A few years later, engineers at Google noticed signs of the kind of sophisticated, well-resourced espionage that would come to define APTs. In order to first gain authorization to operate in China, Google had already agreed by 2005 to strictures that would not offend the regime, whose interest in “social harmony” encompassed elimination of inquiry and discussion topics deemed controversial, ranging from pornography to the regime’s handling of the Tiananmen Square demonstrations in June 1989. Soon Google learned that its mapping services were also running afoul of the regime’s preferences.31 Operation Aurora was a turning point, however. By January 2010, forensics showed that hackers had been aggressively but covertly seeking a range of information from Google. One notable target was the source code for Google’s search engine, something of clear economic value to any cyber thief. Other targets, such as the Google accounts of human rights activists, supporters of the Dalai Lama, and Google’s own records about what court orders had been delivered to the company regarding potential foreign agents, pointed distinctly to the People’s Republic of China as the operation’s instigator. Google chairman Eric Schmidt reflected to a journalist later that “the Aurora attacks had pretty much ‘ended the debate inside the company about what our future was on the mainland’” of Asia. After uncovering the operation, Google threatened to stop cooperating with its earlier agreements to restrict Chinese online searches, and within months Google was leaving China.32 Simultaneously, other intricate espionage projects were occurring against a range of other targets. Espionage traced to IP addresses in China is reported to have operated from 2007 to 2009 and to have targeted defense contractors developing the F-35, the stealthy fifth-generation multirole jet fighter undertaken by the United States.33 Some have alluded to a connection between these activities and the 2011 first flight of China’s J20, its own introduction to the fifth-generation class of fighters.34

What makes the People’s Republic of China garner the special attention of cybersecurity experts is not that it is alone in spying but that forensics points to widespread efforts by concentrated teams, coupled with self-serving interpretations of international norms. Specifically, targets include both strategically relevant entities (which it apparently defines broadly) and a range of economic players and research institutions as well. Richard Clarke’s vivid description of “every interesting lab, company, and research facility in the US being systematically vacuum cleaned by some foreign entities,” who “never leave any marks that they were there, except when they want you to know,”35 might hyperbolize, but it does make the point that while APTs dig deep, they also mine for a wide range of information across many economic sectors. The first identified APT was found to have been involved in espionage against organizations related to twenty distinct industries and economic sectors.36 Other cybersecurity figures have suggested that “economically motivated cyberespionage” exerts only limited economic impact on countries like the United States whose intellectual property is being sacked (to a degree that is impossible to know, given the stealthy character of cyber espionage).37 This argumentation overlooks the fact that not only the scale but also, to a considerable degree, the character of the information being exfiltrated cannot be known. The motivations behind the seizures are also unknowable. As a result, presuming that intellectual property is being violated “only” to answer economic aims could potentially be an overly simplistic conclusion that misses part of the strategic picture.

Chinese diplomats have rebuffed accusations about cyber espionage by insisting that their own country is a principal victim of hacking.38 It is worth noting here that Chinese officials consider some unrestricted web searches and blog postings to constitute “hacking,” and while foreign corporations’ secrets are regularly infiltrated, Chinese companies are linked to the regime in ways that allow the regime to consider them inviolate.39 Analysts can point to a wealth of information leading them to conclude Chinese culpability. One example is that espionage was conducted “almost always” during “office hours (China time) and not [during] national holidays (in China).”40 Although others could certainly run false flag activities so as to implicate China, this illustration is accompanied by the targeting of a range of industries and sectors that had been identified by the Chinese government as priority areas for development. Other incidents, such as the temporary redirection of 10% of the world’s routers to send a disproportionate segment of the world’s internet traffic through Chinese telecommunication networks in April 2010,41 raised some concern among those following cybersecurity news.

Mandiant’s APT1 Report in 2013, which officially identified People’s Liberation Army Unit 61398 in Shanghai, formed a landmark event in the process of understanding and confronting the issue of sophisticated espionage groups. Citing activity that began in 2006, when the Air Force was struggling to describe the then-new operations, APT1 had in the following seven years targeted at least 141 different organizations, the overwhelming majority of which (98.6%) were English-speaking and the vast majority of which (81.5%) were located in the United States.42 The case made by Mandiant reputedly created a diplomatic space in which nation-states (specifically U.S. policy makers in the Barack Obama administration) could more comfortably follow on and raise the topic of APTs; this thinking provides that a nation-state would be expected to proffer a more exhaustive amount of forensic proof of Chinese culpability than would be demanded of a private company specializing in cybersecurity issues.43

By no stretch of the imagination has the APT been a creature solely of Chinese making, although Chinese APTs’ activities do constitute a plurality of those identified to date by cybersecurity researchers. A European Union study in 2018 identified seventeen APTs by name, just over half of which were attributed to the People’s Republic of China. Various APTs specialize in particular sectors. APT18 (attributed to China) pursues aerospace, defense, and engineering targets, although it has also been associated with telecommunications and medicine. APT15, also attributed to China, targets European government ministries; China’s APT16 focuses on high tech, government, and financial areas in Japan and Taiwan. From Russia, APT28’s targets include NATO and various European militaries, while APT29 targets Western European governments and was connected to the hacks of the Democratic National Committee during the U.S. 2016 elections; analysts believe the APT “Snake” is a Russian entity dedicated to targeting the German government. North Korea’s identified APTs focus on South Korea, Japan, and Vietnam, as well as various industries such as manufacturing and aerospace that are seen as valuable by the regime. Iran’s APTs have been associated with espionage against the United States as well as countries in the Middle East region and beyond, several industrial sectors including energy, telecommunications, and finance, and also a range of individual targets working in academia, journalism, and human rights.44

One high-profile and recent example of persistent and politically motivated espionage involves Russian activities meant to disrupt dialog and the democratic process. Russian actors have been connected with disruption campaigns across several countries, of which the 2016 U.S. elections are the most widely identified. An exhaustive two-year investigation “did not establish . . . coordinat[ion] or conspir[acy]” that had been alleged between Russian state-sponsored “election-interference activities” and the campaign of future President Donald Trump. Extensive evidence of Russia’s investment in sowing discontent among U.S. voters was identified, however.

In terms of espionage, this included work by Units 26165 and 74455 of the Main Directorate of the General Staff (GRU). The former appears to have led efforts in espionage initiated through spear phishing, while the latter “assisted in the release of documents stolen by Unit 26165” and the propagation of controversy fueled by the revelations.45 Methods would predictably differ in the espionage and extraction of data, on one hand, and the utilization and exploitation of exfiltrated data, on the other.

The common threads among APTs fundamentally deal with strategic issues. Different countries will define strategic priorities—both their threats and their opportunities—differently. This accounts for the potentially broad range of targets that can be found in the crosshairs of different APTs working at the behest of various governments. Because these entities focus on strategic priority areas, they are likely to be well-resourced, including with regard to skilled personnel and sophisticated equipment and techniques. It is important to remember that strategic priorities seldom change overnight. As a result, when priority areas are identified, powerful entities are likely to work in a way that borrows on the old Medici slogan: “Make haste, slowly.” This describes the modus operandi of this type of sophisticated threat, and the relentless and systematic approach taken by APTs. It also illustrates how some of the most important long-term threats in the cyber domain defy, by their very nature, the simple assumption that cyberattacks (defined broadly) strike instantaneously.

THE IRRELEVANCE OF DISTANCE

Whether interpreted as an opportunity or as a danger, the point that cyberattacks can be launched by far-off adversaries has given rise to an assumption that distance has now become an irrelevance in warfare—or at least in the forms of conflict related to cyberattack. While there is ample reason to conclude that the cyber domain is reordering how distance matters and how distances might even be defined, geography has not been made quite so irrelevant as many assertions would imply. Nor is the changed relevance of distance and geography due to communications technology brand new. One of Britain’s first actions in World War I was the cutting of German transatlantic communications cables. While these carried telegraphy rather than internet traffic, the effect was fundamentally to interrupt Berlin’s communications with the disparate holdings of its empire, and its ability to communicate with neutrals outside Europe. The United States responded by allowing German diplomatic traffic to flow on neutral U.S. cables so that at least German embassies could keep abreast of instructions from their capital.

Ironically, British espionage targeting this route of assistance provided the first intelligence information on Germany’s subsequent gambit to negate a possible U.S. entry into the war, and the Zimmermann Telegram became one of the major factors catalyzing the entry that German planners had intended to mitigate.46

Although parallels to the infamous Zimmermann Telegram have not been encountered, analogous examples of international assistance for accessibility and cybersecurity to small aggrieved targets of cyber aggression have occurred. Estonia’s plight in 2007 prompted foreign cybersecurity assistance, and the following year Georgia received help when some of its capabilities were restored by being relocated to U.S. servers, both in California and, somewhat ironically, in the state of Georgia. “While Estonia experienced a cyberattack in 2007, it essentially defended in place; Georgia, on the other hand, maneuvered” by restoring key cyber capabilities on overseas servers and by using applications like Google blog services for its communications.47 Regarding Georgia, “maneuver” refers both to the physical rerouting of the targeted traffic and also to the revived ability to adopt a more proactive posture because that traffic is restored.

But the term “maneuver” itself is adjusting to the impact of the cyber domain, and this transition can be identified in the term’s usage in doctrinal manuals. For example, the U.S. joint publication JP 3-0, Joint Operations, states, “Maneuver is the employment of forces in the O[perational] A[rea] through movement in combination with fires and information to gain a position of advantage in respect to the enemy.”48 This reflects a more spatial focus. A contemporary Army publication alerted readers, however, that “it is important to recognize that maneuver and seizing the initiative occur in more than just the land domain,” because attaining favorable conditions “requires the effective integration of information operations, cyberspace operations, and electronic warfare capabilities.”49 Other thoughtful and unofficial works on cyberwar also grapple with questions about how maneuver and terrain factor into the cyber domain. Former Army cyber officers Conti and Raymond, for example, argue that “technology has always changed the way we maneuver” but that maneuver’s essential purpose of keeping an enemy off-balance and vulnerable is entirely applicable to the cyber domain. Unique manifestations exist and matter, such as a defender’s “temporary use of a virtual machine that is discarded after a desired period, at which point a duplicate system is started from a pristine master copy.”50

Although maneuver in the cyber domain can take on logical as well as geographic features, concepts such as maneuver and distance do appear to retain relevance. Nonetheless, assertions about the global reach of cyberweapons continue, and the potential to interpret this as the abolition of distances or the annulment of geography persists. Clarke’s point that “cyber war is global,” when adjacent to his argument that “cyber war skips the battlefield,”51 risks giving an impression analogous to the pictures painted by the early and overly simplistic advocates of strategic airpower between the world wars. In a sense, also, the idea that cyberwar “skips” the battlefield misses an opportunity to explain how attacks in the cyber domain could do something much more like transcending or extending the battlefield than simply skipping it. The latter imagery more closely matches the historical pattern in which weapons that possess a potential for longer reach are leveraged not only at the most extended ranges but are instead used across the spectrum of their reach. The development of long-range strategic airpower did not preclude the extensive use of combat planes at the operational level or in close air support roles across battlefields from World War I onward. Thus, although cyber advocates may argue that “what makes ‘cyber attacks’ seem revolutionary is that they can strike from thousands of miles away,”52 it should be remembered that the geographic range of an adversary is less pressing than the logical range, and that the adversary’s distance and physical location may be difficult to discern. In this way, advances in cyberwar capabilities constitute the emergence of a fighting domain wherein logical geography may trump physical geography.

One upshot of this is the implications it brings for strange bedfellows and unusual coalitions, although in the past geographically widespread conflicts have already shown the wartime alignment of diverse interests: Western democracies shared little in common with the Soviet Union other than the threat posed by Hitler’s Nazi regime, and rebellious Sepoys garrisoned in Singapore had no more common cause with Germany in 1914 than Faisal bin Hussein’s tribesmen had in common with the British Empire two years later. Demchak’s observation that “cyberspace enables long-distance access” but also increases the chances of “multiparty alliances and coalitions for attacks” fits in that context.53 The capacity for attackers to “set up anywhere where internet cafes are found” or to “initiate attacks from almost anywhere on the globe instantaneously” is a shocking thought when long ranges of attack are considered wholly from a geographic standpoint.54 The map of logical terrain, which is artificial and continually subject to flux, overlays the geographic map. Furthermore, its implications will grow to seem more natural—they will be viewed as “intuitive” once they become familiar. In a similar vein, transportation technologies have long been taken for granted, and for that reason few would be surprised to imagine that a woman in the Tokyo airport is effectively closer to New York City than is a man on foot standing in Buffalo: we have become accustomed to understanding the implications of transportation technologies and their impact on the time factor, even though the former is a distance of 6,740 miles and the latter is 134.5 miles, just 2% as far in physical terms.

It is precisely this distance factor that enables attackers to establish and deploy bots for their attacks and to direct bots from various physical locations against a common target in an attack such as a DDoS.55 The other side of the coin is shown when physical distances do not matter and yet other manifestations of distance arise. One historical example was when Cliff Stoll traced mysterious hackers as they navigated Berkeley’s computers in 1986. Authorities ultimately discovered that this was a remote hacking operation conducted at the behest of communist East German intelligence services. Thanks to the possibilities of the cyber domain, the spies were unaffected by the physical gulf of Western Europe, the Atlantic Ocean, and the American continental shelf. But Stoll mused that another kind of distance hindered them: the unfamiliarity of the operating system.56 Prior to the rise and ubiquity of Microsoft software, this posed a recurring challenge, and the quest for greater interoperability stands at the core of the systems that were developed to create some of the early interconnections between computers that bridged between the ARPANET research project and the internet in its manifestations of the mid-1990s.

The plasticity of cyberspace means that opportunities for mischief or hostility change continually. Reconfigurations can mean that exploits that had worked a day earlier might be moot, whereas new vulnerabilities might have been created. The distinction relative to the physical domains is less this characteristic itself than the speed with which the “landscape” can be changed. Overlaying that ever-changing landscape atop the context of the physical domains in which people also already live and operate carries significant implications. Observers have noted that, rather than being “national” or “international,” cyberspace is transnational and global. This means not only that the actions of software giants can be “larger than most countries or national governments” but also that the giants of the internet world can exert influences on the planet that are felt in different ways—and that these same giants must potentially react globally to circumstances that were once more regional in their contexts. And regional strife quickly exerts global impact. NotPetya “was eating Ukraine’s computers alive” in June 2017, but it also wrought serious problems for the shipping giant Maersk, as well as pharmaceutical, manufacturing, construction, and food production companies across several countries. When “national borders have no meaning . . . every barbarian is already at every gate.”57 This does not quite mean that “distance is no defense” or that distance has no meaning, but it does point to a need to reconceptualize what ideas like range, distance, and terrain mean—both in the cyber domain and in a cybered world.
has been identified as a “best practice” for cybersecurity regarding critical systems.58 Computer science professor and cybersecurity pioneer Gene Spafford of Purdue University has gone much further: “The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards—and even then I have my doubts.”59 When this statement was made over three decades ago, computers were far less ubiquitous and less interconnected than they are today. There is no practical way to follow that precept today, even if one wanted to. Interconnection stands at the center of what modern devices are intended to do. Air-gapping is a standard compromise for protecting devices whose operation is of critical importance but whose functionality is not crucially impacted by electronic isolation. The literature reflects debate about the degrees of effectiveness of partial air-gapping. It has been suggested that time-sensitive attacks like temporarily impeding an air defense system might be foiled or at least frustrated by “intermittently air-gap[ping a] system.” Cybersecurity writers have described how the introduction of an air gap not only complicates the journey of malware to a target but also obscures the effect of the malware from observation by attackers, who are left to extrapolate or guess rather than survey the targeted system, the effects, and the defender’s own response.60 Personnel at the U.S. Naval War College identified cyberattacks in 2006, and as these activities continued unabated, officials opted essentially to air-gap the entire institution by deliberately taking the facility offline. Similar steps were taken the following year at the Pentagon when cyberattacks triggered the decision “to temporarily disconnect part of the unclassified network [NIPRNET] from the Internet.”61 Although temporary or intermittent air-gapping is not an uncommon practice in the face of an identified and episodic cyberattack threat to systems that can survive a period of disconnection, it is an impractical option for protecting systems that must always be operating and connected—and periodic air-gapping is also of little use in the face of attacks that aim for quasi-randomized results. Forensics research has pointed to Stuxnet malware arriving at targeted machines perhaps because “someone carr[ied] the infection from one machine to another via a USB flash drive.” Once introduced to a device on a network, malware is, so to speak, “inside the wire” and can propagate to other machines. The introduction of espionage malware to U.S. NIPRNET machines, discovered in 2008 and addressed by actions dubbed Buckshot Yankee, required the reimaging of a reported seven million Department of Defense (DoD) electronic devices.62 Cybersecurity experts have noted both that “air-gapping has to be complete to be totally effective” and also that even constant and diligent air-gapping may not be sufficient. “If an infected but air-gapped machine sits close enough to an infected Internet-connected machine, the two
may be able to exchange signals over low-power radio or low-frequency sound waves.”63 This point also highlights the fact that machines that are designed to connect can be difficult to keep separated. The fact that air gaps can be foiled by malicious use of machines’ embedded and almost ubiquitous microphones shows that even when the logical distances have been consciously designed to establish isolation for specific critical systems, the default configuration of devices can be leveraged to subvert that effort. Furthermore, it is worth noticing that physical distance can continue to play a role in such attacks. Rather than blithely conclude that the cyber domain has brought an end to the significance of geography and physical distances, it is more useful to consider how the cyber domain impacts the way that distance, geography, and maneuver can be conceptualized.

CHEAP AND WIDELY AVAILABLE

The price and availability of cyberattack tools range as widely as their potential effects do. As a result, the widespread cliché that cyberweapons are rampantly available and virtually free is simultaneously true and misleading, depending on what kind of target and what sort of cyberattack is being imagined. Forensics analysts conclude that high-end cyberweapons are tools that demand extensive development and investment. Much as building is more challenging than destroying, destroying in a precisely specified way also sets a high bar. To reliably wreak specific outcomes, testing is required, and this translates into costs that are manifested in time and in money. Additionally, the potentially global physical reach of a weapon means that attack sponsors must determine how they balance precision with cost-effectiveness. Minimizing the possible impact on bystander machines—or on one’s own devices—raises the question of still further investment. The Slammer Worm that spread in 2003 was just 376 bytes long, which actually facilitated its propagation even as it froze up computer networks.64 Cybersecurity experts state that while Stuxnet was a 500-kilobyte piece of malware that eventually leveraged four zero-day exploits, the Flame malware that is reported to have helped support and prepare for Stuxnet’s use was a 20-megabyte piece of malware constituting 650,000 lines of code. Experts “estimated it would have taken a team of half a dozen programmers at least three years to code it all.”65
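Those cited figures permit a quick back-of-envelope check, a simple illustration that uses only the numbers quoted above:

    # Back-of-envelope check of the Flame estimate quoted above,
    # using only the figures cited in the text.
    lines_of_code = 650_000
    programmers = 6
    years = 3

    per_programmer_year = lines_of_code / (programmers * years)
    print(f"{per_programmer_year:,.0f} lines per programmer-year")
    # ~36,111 lines per programmer-year, or well over a hundred lines
    # of finished, tested code per working day.

That is a demanding pace for carefully engineered malware, which is one reason the three-year figure was offered as a minimum rather than a likely duration.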
These concerns point to a conclusion highlighted in a study conducted in the early 2010s, indicating that kinetic-effect cyberattacks would be poor choices from a cost-effectiveness standpoint. Calculating the materials costs of the 1995 Oklahoma City terrorist bombing, the preparation for the September 11 terrorist attacks, and estimates about the costs and effects of cyberattacks subverting a regional hydroelectric dam or air traffic control system, analysts have pointed to steadily diminishing returns: $30 per lethality in Oklahoma City, $160 per lethality on September 11, and between $8,000 and $24,000 per lethality for the hypothetical cyberattack.66 Although this is correct from an analytical standpoint, it does not address the question of psychological impact providing a multiplier effect; historian Alex Roland has aptly noted that psychological effects can be surprisingly useful in combat. Furthermore, psychological impact lies at the heart of terrorists’ objectives to a degree that it does not for a nation-state. A reasonable conclusion might be that although nation-states are more likely to possess the resources needed for a spectacular cyberattack with kinetic effects, a nonstate entity such as a terrorist group would be more likely to desire one. A decade ago, analysts explained that “terrorist groups in general do not have the expertise to conduct advanced or complex cyber attacks” and therefore had for the time being been “limited to exploiting the same basic vulnerabilities that are constantly being targeted by thousands of hackers around the world.”67 Perhaps this explains statements such as that of the U.S. Navy deputy for information warfare, that although today we face the threats from both individual lone wolf kind of actors out there [and] nation-states . . . it’s really the nation-state actors that keep me awake at night and that we have to be most concerned about. . . . In spite of the fact that all that crimeware and commodity malware is out there and potentially hitting our networks every day . . . the big issues that we have to worry about are coming from the nation-state actors: Russia, China, Iran.68

That said, while the consequences of a nightmare kinetic-effect nation-state cyberattack beggar the imagination and represent a legitimate topic of concern, they currently remain hypothetical. While this can invite some imaginations to run wild, it invites others to discount the chances of such an event. The vast majority of noticeable cyberattacks to date have been comparatively simple to launch, and many of them have been irritants whose effects can frequently be described as “reversible.” While “we definitely see the volume [of threats] increase,” one cybersecurity executive has explained, the more troubling piece is that “we see the sophistication of those threats grow exponentially. So, this is no longer your teenager in a hoodie in the garage. This is sophisticated cybercrime with talented and very trained individuals,” resulting in companies like Cisco Systems employing more than 250 personnel to deal with a range of cyberattacks that, when broadly defined, number more than 2,500,000 per second.69 Thus, the rising sophistication of a large number of annoyance threats is also a point of concern. When excluding kinetic-effect cyberattacks and carefully tailored malware meant for precision targeting, “cyber capabilities can  .  .  . be fielded rapidly with minimal acquisition cost compared
to traditional weapons systems.”70 That dynamic played out in Estonia and Georgia when botnets and minimally skilled “script kiddies” were directed to attack targets on Russia’s near-abroad; websites like StopGeorgia.ru even provided information and advice to novice hackers about tools and tactics valuable for launching DDoS attacks.71 Given the near-impossibility of effectively prohibiting attack code and the staggeringly inexpensive rates at which botnets can be rented out, even the simplest harassment attacks can be expected to gain in force and keep pace with defenders’ abilities to overcome their effects.72 The swift sharing of tactics, techniques, and procedures among hostile nation-states, cybercriminals, hackers, and other malcontents is among the prospects that concern cybersecurity researchers: standardized methods, “modular design” in malware development, and information exchange among such adversary groups could put increasingly effective cyberweapons into the hands of “non-technical parties.”73 An even starker specter is the leveraging of hand-me-down sophisticated malware by adversaries who adopt it in the wild but who were incapable of building such tools for themselves. Following the disclosure of the Duqu malware in 2011, “exploits attacking the same font-rendering vulnerability that it attacked showed up in various readymade toolkits sold in the criminal underground.”74 There is truth to the idea that cyberweapons are cheap and widely available, although context should be kept in mind to ensure that the resulting conclusions are accurate ones. The availability of cyberweapons touches on the issue of zero-day vulnerabilities and exploits, and also on the mythology about cyberweapons being single-use tools.

ZERO-DAYS

Most unfriendly activities in cyberspace use old tricks rather than newly created malware. The old tricks are tried in a pattern reminiscent of a chain letter: most chain letters may be thrown away, but a handful will be sent forward, and thus scaling up the number sent out increases the chances that some will succeed. Defenses like anti-virus suites look for previously identified malware and block these attacks. For an attacker to increase the chances of a particular target falling prey to an attack, the attack could be developed in a way that skirts recognition. But developing new malware costs time and money, and developing new malware that takes advantage of a software vulnerability that has not yet been identified requires even more time—or money to purchase such a previously unknown vulnerability, or “zero-day,” from a researcher who has quietly discovered it.
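That chain-letter arithmetic is worth making explicit: if a recycled attack succeeds against any single target with probability p, then sending it to n targets succeeds at least once with probability 1 − (1 − p)^n. A minimal sketch in Python (the success rate and volumes are hypothetical illustrations, not figures drawn from the sources cited in this chapter):

    # Probability that a stale, widely recognized attack succeeds at
    # least once when sent to n targets, each falling victim with
    # probability p. Both numbers are hypothetical.
    p = 0.001
    for n in (1_000, 10_000, 100_000):
        at_least_one = 1 - (1 - p) ** n
        print(f"n={n:>7,}: P(at least one success) = {at_least_one:.5f}")
    # n=1,000 already yields ~0.632; by n=10,000 success is effectively
    # certain. Scale substitutes for sophistication.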
Access to the source code of popular or other strategically significant software can provide a leg up either in mimicking and pirating software or for hunting down zero-day vulnerabilities, and the latter alternative is one further reason that cybersecurity analysts have expressed discomfort with China’s requiring foreign companies seeking business in the People’s Republic to make it possible to obtain source code.75 Zero-day vulnerabilities have a market value: with entities like white-hat software companies that developed the software and want to patch newly discovered holes, with grey-hat groups like intelligence services that acquire such vulnerabilities for purposes such as cyber espionage through creating zero-day exploits, and with black-hat groups who intend to use zero-day exploits to perpetrate cybercrime. The market for zero-day vulnerabilities depends enormously on factors such as what the vulnerability could allow to happen, how many copies of the software exist to be attacked or patched, and which people or devices are using that software. Conventionally, “bug bounties” offered by the white-hat institutions for patching their own products’ vulnerabilities fall far short of the money available to a researcher willing to sell to a black-hat like a criminal syndicate or a grey-hat like a state intelligence organization. Whereas Google has offered bug bounties ranging from $500 to $20,000 for vulnerabilities related to its browser, a single usable vulnerability could garner tens or hundreds of thousands of dollars from black-hat or grey-hat buyers; reportedly the market among both of the latter groups has grown in recent years, and the organizations that research and sell zero-days often sell to more than one buyer.76 The logical result, decried by cybersecurity and cyberprivacy advocates, is that zero-days can be stockpiled for future covert or overt use and that the timeframe in which these exist unpatched constantly increases the risk of yet other actors discovering and deploying them.77 Because software itself has a shelf-life before it is replaced or updated, even “the best cyberattacks have a limited ‘shelf life’” according to Libicki.78 A U.S. cybersecurity company estimated that during the mid-2000s, the average zero-day exploit existed “in the wild” for 348 days before its eventual discovery.79 The combination of several factors, including the increasing ubiquity of devices and applications, the complexity of software, the general inaccessibility for ordinary users to source code, and the active economic drivers maintaining a market for zero-day vulnerabilities and exploits, touches on one of the mythologies of cyberwar’s effects: namely, the degree to which participants and potential targets are realistically capable of providing for their own defense. The applicability of zero-day exploits is also importantly impacted by another related topic: whether cyberattack tools are necessarily single-use weapons.

ONE-USE WEAPONS?

When it comes to the reusability of cyberweapons, the fact seems to be that the landscape is resoundingly nuanced—but that many of the
assertions are comparatively sweeping. Earlier sections of this chapter have already demonstrated that some of the low-sophistication scourges of the domain, such as DDoS attacks, are ready for repeated use. Since DDoS attacks take advantage of finite capacities for communication, it is reasonable to expect harassment actions regularly to keep pace with defenders, even as communication capabilities progress. Some pundits define cyberattacks more narrowly and specifically refer solely to malware that leverages something like a software vulnerability on a target’s devices. As a result, authoritative voices like Libicki argue that “operational cyberwar is tailor-made for surprise attack and a poor choice for repeated attacks,” because “it is difficult to surprise the same sysadmin twice in the same way.” Similarly, Stiennon asserts that “once used, a cyber weapon quickly becomes ineffective for future use as the victim discovers the targeted vulnerability and deploys corrections.” A cybersecurity panel found that either the use or the divulgence of vulnerabilities has the effect of “rendering the capability useless in the future” because “cyber weapon capabilities generally depend on the exploitation of unknown target vulnerabilities.”80 Strictly speaking, it is correct that a sysadmin will be more likely to identify the same attack the second time it comes. However, applying this logic more generally invites problematic oversights. One is that the “repeat” may resemble rather than duplicate its predecessor. Even before Stuxnet was uncovered, cyber analysts noticed the dangers resulting from “attackers increasingly rely[ing] on exploit and bot code that morphs itself” to reduce the ability of defenders to identify similarities between attacks.81 As mentioned earlier, after identification of the Duqu malware, reminiscent aspects of its exploit soon began to be seen in crimeware, indicating that cybercriminals had leveraged the newly discovered not-quite-zero-day exploits and were turning them on their own victims before patches had been propagated.
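The fragility that such morphing exploits is easy to demonstrate. Many signature defenses match exact fingerprints, such as cryptographic hashes of a file; altering even a single byte produces a completely different fingerprint. A minimal sketch (the byte strings are harmless placeholders standing in for an original tool and a trivially morphed variant):

    import hashlib

    # Two "payloads" differing by one byte stand in for an attack tool
    # and a morphed variant; no actual malware is involved.
    original = b"example payload 00"
    variant = b"example payload 01"

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(variant).hexdigest())
    # The digests share nothing recognizable, so a blocklist of known-bad
    # hashes misses the variant, which is why defenders supplement exact
    # signatures with heuristic and behavioral detection.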
As the examples of what forensics authorities conclude was a suite of malware surrounding Stuxnet emerged, Microsoft engineers rapidly revised their estimate of the amount of time that would be required to adapt such malware for use as crimeware, finding that it might take only a quarter as much time as had previously been thought.82 In the decade that followed the discovery of Stuxnet, the world was spared the onslaught of damaging waves of derivative criminal malware, which some cybersecurity experts had feared would come. Rumors did, however, arise with the launching of the WannaCry ransomware in 2017, temporarily rekindling some of the discussion about the wisdom of stockpiling zero-day vulnerabilities and exploits that could be illicitly leveraged by third parties. Data sharing among defenders, coupled with the adoption of better cyber hygiene and other best practices, can help make many cyberattacks less effective. Heuristic detection can assist in this process; the Trusted Automated eXchange of Indicator Information (TAXII), established by the U.S. Department of Homeland Security and operated by the nonprofit MITRE Corporation, is a notable institutional tool.
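TAXII is, at bottom, an HTTP-based protocol for publishing and polling collections of threat indicators. A minimal sketch of the polling side, assuming a hypothetical TAXII 2.1 server (the URL is invented for illustration, and production servers normally also require authentication):

    import requests

    # Discover a hypothetical TAXII 2.1 server and list its collections.
    # The Accept header carries the standard TAXII 2.1 media type.
    BASE = "https://taxii.example.org/taxii2/"
    HEADERS = {"Accept": "application/taxii+json;version=2.1"}

    discovery = requests.get(BASE, headers=HEADERS, timeout=10).json()
    print(discovery.get("title"))

    for api_root in discovery.get("api_roots", []):
        # API root URLs conventionally end with a slash.
        resp = requests.get(api_root + "collections/", headers=HEADERS, timeout=10)
        for collection in resp.json().get("collections", []):
            print(collection["id"], collection.get("title"))

Each collection can then be polled for indicator objects, such as file hashes and malicious domains, which defenders feed into their own detection tooling.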

Even simple network changes, which may or may not even be motivated by cybersecurity concerns, can alter the attack surface in a way that might make a painstakingly and expensively constructed cyberweapon “instantly become obsolete.”83 These factors do not fully address the second of the problematic oversights about the finite utility of a particular cyberweapon, however. As P. W. Singer and Allan Friedman observed in 2013, “many users don’t always pay attention to . . . security updates and leave the vulnerabilities unpatched,” leading software giants to respond by transitioning from a notification system regarding updates to actively forcing devices to undergo automatic downloads that incorporate patches.84 This does, however, give rise to irritating and potentially unexpected downtimes of the devices that exert a real cost in terms of end-user functionality. At a basic level, many of the problems have become disconsolately old hat for cybersecurity professionals. The Chief Technology Officer for Microsoft in Canada, John Weigelt, related at a forum in the fall of 2017:

When I look at the [cybersecurity] space, I get bored. I’ve been doing this for 25 years. . . . I look at some of the news scape articles out there. So when you look at the news article, double click on it, and you find out that someone used a USB stick [and got hacked]. Oh. Someone has a weak password [and got hacked]. Oh. Someone hasn’t updated [and got hacked]. Oh. These are the same things happening over and over and over again. . . . We keep doing the same old thing: ‘Here’s another guide—250 pages. Do this. Here’s another compliance thing, do that.’ We need to change out the way that we’re talking about things. We need to get out of this perimeter defense model. We have to assume breach.85

P. T. Barnum famously remarked that a new sucker is born every minute. The majority of cyberweapons are not, in fact, single-use, because not all defenders are protected in the wake of the first successful attack. Imposing patch-laden updates on end-users is itself, at a strategic level, something that merely patches over an issue that grows in parallel with an attack surface expanding as the number and role of interconnected devices increases. Among the places where patching is difficult and system failure can be devastating is critical infrastructure. These, and many other industrial systems, are controlled by Supervisory Control and Data Acquisition (SCADA) systems.

PATCHING SUPERVISORY CONTROL AND DATA ACQUISITION

The literature reflects a relative consensus about cybersecurity and SCADA, although differences in emphasis are evident: some analysts highlight
the dangers of an attack while others identify the complex prerequisites before an effective attack could be launched with confidence. These positions do not seem mutually contradictory but instead are further illustrative of a tendency already in evidence when more deeply studying the other temporal clichés about cyberattacks. Rid has pointed out that the “rather specific” configurations present in many SCADA networks mean that “a widely generic attack seems to be unlikely.”86 A cybersecurity official for the Airbus corporation opined in mid-2018 that: all too often, we hear people saying, ‘we can just take down a power grid.’ I’m not doubting you can. But you have to be inside an infrastructure like that for months or years to do the intelligence gathering you need to be able to design and craft a relevant attack method—relevant piece of malware that has that effect. . . . I cannot just launch and press a button and something happens.87

The speaker quickly followed up that this would not be possible, “till we get to A[rtificial] I[ntelligence].” Time will tell the ways in which artificial intelligence will play a game-changing role in peace and in conflict, but the main point that the neutralization of infrastructure via cyberattack directed at its SCADA systems requires extensive time for preparation and exploration holds. What does concern some authors in the field is how changes in the handling of SCADA open the door to adversaries achieving that dangerous access. The Obama-era initiative to make a Smart Electric Grid might have offered opportunities for streamlining, monitoring, and coordination in the context of peaceful operations, but bringing the core controls of critical systems closer to accessibility on the internet also increases the vulnerability of these parts of the national infrastructure.88 Libicki rightly concurs, stating that “eliminating all vulnerabilities from SCADA . . . will be a long, painful, and likely incomplete process.” Therefore, “mandating systems isolation may be a cost-effective hedge against a catastrophic attack on infrastructures that move physical goods,” such as electrical and other energy forms, water, and the transportation industry.89 Even Rid, whose writings show a conscious pride in not panicking or over-extrapolating about the likelihood or the catastrophic nature of a hypothesized cyberwar, has observed that three trends increase the vulnerability of SCADA systems. Rid listed the increased connectivity of SCADA online as the second of these factors, followed by their heightened visibility as a result of search technologies that make heretofore obscure or inaccessible items like device control manuals more easily accessed. But even before these factors, Rid highlighted the “standardization in communication protocols” that could accelerate the pace at which an adversary could translate access to a system into intelligence on how the system worked or how it could break. Although he pointed to other mitigating
factors that permit a degree of “security-by-obscurity” constituting “a tenuous security advantage for the defender,” Rid’s writing found its hope in balancing the data rather than in suggesting that one side of the argument lacked convincing evidence.90 Examples of cyberattacks tampering with SCADA targets now form a short but growing and potentially ominous list. To the Maroochy Shire sewage release orchestrated in 2000 by a disgruntled former contractor can be added the Aurora vulnerability awareness study done by the Idaho National Laboratory in 2007, the infamous Stuxnet case discovered in 2010, and physical damage reportedly wrought on a steel mill in Germany in 2014. Malfunctions at the small Bowman Avenue Dam in Rye, New York, have been attributed to Iranian hackers; reporter David Sanger suggested that “they must have had something [larger] like Hoover [Dam] in mind, and missed. Or maybe it was simply a demonstration of their powers,” in keeping with the hunch of the state’s senior senator.91 Such a demonstration would fly in the face of much conventional wisdom about sophisticated cyberattacks, given the thinking about the temporal and financial costs inherent in their construction; nonetheless, it is conceivable that this conventional wisdom is more within the conventions of friends than of adversaries, whose minds are likely not to be a mirror of our own. Recurring cyberattacks have targeted the electrical power grid of Ukraine since 2014, coinciding with combat between the country’s under-equipped military and pro-Russian separatists in the eastern part of Ukraine, along the Russian border. Sanger suggests that “the attack demonstrated . . . what the Russians . . . could get away with . . . as long as they used subtle, short-of-war tactics.”92 The antiquated technologies supporting Ukrainian infrastructure have been associated with the relative ease with which its grid could be threatened. Studies suggested, however, that although the more sophisticated and complex power grids of the United States and Western Europe constitute “much tougher targets,” the “bottom line” is that “aging systems made the Ukraine grid easier to hack but also easier to get back up in hours,” in contrast to the longer-lasting impact of a successful targeting of a sophisticated and tailored set of systems.93 Authors emphasizing the dangers of an attack indicate that, in a strategic sense, the risks connected to SCADA vulnerabilities are not only substantial but have been in evidence for years. During the Eligible Receiver 97 test, named for the year it took place, red team personnel from the National Security Agency “cracked the SCADA systems controlling electric power to US military bases and then overwhelmed local 911 emergency systems with a denial of service attack.”94 It should be remembered that the red team in Eligible Receiver was expressly forbidden to use any cyberattack tools that were not already freely available and unclassified, yet as of 1997 the team had shown an ability to infiltrate computers
managing infrastructure components. In recent years, researchers in the European Union have pointed to signs of APTs gaining long-standing access to a range of networks across various key industries and in several countries, and cybersecurity advocates have reportedly pondered whether such access might allow not only large-scale reconnaissance but also preparation of latent threats to infrastructure.95 The vulnerability studies at Idaho National Laboratory known as the Aurora vulnerability constituted an even more explicit wake-up call. Some writers declared that the test “lends credence to the suggestion that the manipulation of computer code can be just as effective in destroying critical infrastructure as a missile would,” while others argued that the vulnerability “extends” to several sectors in which SCADA systems operate.96 The validity of the last point may stand or fall to some degree based on the extent to which the standardization trend identified by Rid is adopted across different sectors. Thus, although experts note that tailored attacks against complicated and obscure systems require time for reconnaissance of the target and for preparation and testing of attack tools, an awareness of SCADA as a class of potential target has existed for the quarter century since Eligible Receiver 97; it was heightened further by Aurora more than a decade ago, in addition to the actual examples of attacks that have taken place. Forensics studies point to dramatically different degrees of sophistication and technical surprise with respect to cyberattacks, including those that have been deployed against SCADA systems. At the high end, the Stuxnet malware with its four discrete zero-day exploits, a decade after its discovery, continues to represent the apparent high-water mark of sophistication in a known anti-SCADA malware. The repeated attacks against Ukrainian electrical power infrastructure embody the other extreme in terms of technical surprise. The 2015 attack leveraged a vulnerability that had actually been identified a dozen years earlier. But the inherent criticality of critical infrastructure often precludes practices such as patching or updating the devices overseeing SCADA systems. As a result, the factors that Singer and Friedman noted had long retarded the patching of known vulnerabilities on so many personal computers worldwide are still more extreme for systems that must be “on” continually. In short, interrogating the temporal mythologies of cyberwar invites the conclusion that cyberattacks (broadly defined) are diverse enough to resist pat generalizations about the speed of cyberwar, about the supposed irrelevance of distance or geography, about their cost-effectiveness, and about their potential shelf-life as single-use weapons.

CHAPTER 3

Mythologies of Cyberwar Effects

REVERSIBILITY OF EFFECTS

A number of widespread assertions about cyberattacks concern their effects on a target. These include the common but not universal idea that cyberattacks produce only temporary and reversible effects, arguments that the victim of a cyberattack essentially deserves the blame for its occurrence, the predominant interpretation that attack is superior to defense in the cyber domain, and expectations that cyber hostilities may come to eclipse or displace fighting in the physical domains. The idea of reversible attacks may be the most useful starting point in an effort to unpack and consider these conceptions. An important point to note about the effects of cyberattacks is the relationship of both the attack tool and the target’s own character to the impact of an attack. Such a relationship exists in the physical domains as well—when a bullet is fired at a combatant on a battlefield, the mass and velocity of the projectile matter, as does the presence or condition of factors influenced by the target, such as wearable body armor. Analysts have noted that in the cyber domain the target’s own condition matters more than in the physical domains, where the kinetic effects of a bullet or grenade or bomb follow the laws of physics and the impact on a target can be mitigated by armor or cover but cannot wholly be negated. Indeed, some have gone so far as to argue that “the effect [of a cyberweapon] is more dependent on the target than on the weapon,”1 in addition to being more dependent on the target than is fighting in the physical domains. This thinking can be interpreted to touch on notions of the target’s potential culpability (an issue discussed later in the chapter) but also suggests that cyber effects can range unpredictably.

Many of the voices that spurn “cybergeddon” imagery choose to underscore their view that cyberattacks’ effects are limited, temporary, or reversible. This may be why Thomas Rid has suggested that cyber “war” “has more metaphorical than descriptive value” and, specifically: “no cyber offen[s]e has ever caused the loss of human life. No cyber offense has ever injured a person. No cyber attack has ever seriously damaged a building.”2 Strategy writer Colin Gray has embraced three of the mythologies in a single sentence, arguing that “cyber offense is swift, but it is not likely to be deadly, and it should not work twice.”3 The swiftness of cyberattack has already been shown to be dependent on the kind of attack launched, and although perhaps the same attack should not work twice, records show that attacks frequently do, in fact, succeed repeatedly after first appearing in the wild. Gray’s second assertion may prove to be true, but it is at least flanked by characterizations that require substantial contextualization. In a real sense, the picture drawn by Gray lacks internal cohesion. Many of the cyberattacks that are swift are also basic tools like DDoS, which are the most prone to repetition. To date, these also seem to be the attacks least likely to inflict damage or draw blood directly—although they may be used in concert with other actions, as the 2008 DDoS attacks against Georgia coincided with combat between Georgian and Russian military personnel. Syria’s civil war, which raged essentially from 2011 to 2017, triggered the creation of a “Syrian Electronic Army” (SEA) sympathetic to the governing regime; SEA concentrated on simple (and reversible) attacks such as website defacements.4 It can be suspected, but not confirmed, that these attacks were reversible because they were easier for an unevenly skilled band of cyber militia to prosecute against targets than a more damaging and less reversible series of attacks might have been. One of the principal reasons that cyberattacks are so frequently considered to be reversible stems from an enduring generalization enunciated by Singer and Friedman in 2014 that “the good news is that there are only three things you can do to a computer: steal its data, misuse credentials, and hijack resources.”5 The implications that come from tampering in these areas can be more fully explored in the final chapter, but suffice it to say that the insinuation of computers into a range of either convenient or essential processes means that theft, misuse, and hijacking can pose less finite dilemmas than may at first be apparent. Analysts specializing in cyberwar have observed that although attacks (including via cyber as well as kinetic) can conceivably destroy physical hardware, “data and code can be restored at the point of the latest uncorrupted backup,” and “infinite copies of information may be made at near zero cost,” which is a core factor in an information age economy and which also “means that producing identical copies of cyber weapons at scale is effectively free.”6
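The phrase “latest uncorrupted backup” carries an operational burden, however: a restore is only as trustworthy as the defender’s ability to verify that a backup has not itself been tampered with. A minimal sketch of one common verification approach, comparing a cryptographic digest recorded at backup time against the file at restore time (the file name is hypothetical, and the recorded digest must be stored out of an attacker’s reach, for example on offline media):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(path: Path, recorded_digest: str) -> bool:
        """True only if the backup still matches the digest recorded
        when the backup was made."""
        return sha256_of(path) == recorded_digest

    # Hypothetical usage: restore from "config-backup.tar" only if
    # verify_backup(Path("config-backup.tar"), recorded_digest) is True;
    # otherwise fall back to an older copy.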
Some kinds of impact resist blithe concepts of “reversibility” even though they do not directly trigger physical destruction. The countries of Russia’s “near abroad” have been subjected to several kinds of cyberspace abuse. The generally low-sophistication actions against Estonia (in 2007) and Georgia (in 2008) are the most widely known. In early April 2019, just days ahead of Finland’s upcoming elections, Finnish police were “probing a cyber attack on a web service that publishes vote tallies.” The cyberattacks could not wreak physical damage, nor could they actually tamper with the vote counts themselves, since Finland does not employ electronic voting, but it was feared that an opportunely timed take-down of the vote reporting “could seriously hamper the media’s access to election results and undermine public trust in the elections.”7 Those intending to sow discord do not necessarily need to change an election outcome. Undermining trust and encouraging domestic feuds can bring tangible results in the physical realm—results that carry geopolitical consequences. Democratic processes can also be upset indirectly through nonphysical (but hardly “reversible”) disinformation operations, something already demonstrated via social media. While Russia’s artificial and false-flag social media activities in 2016 have been the subject of extensive controversy in the United States, they are neither the only nor the most recent example. As is discussed later in this book, social media platforms have begun to respond to the public and political outcry by (belatedly) taking a more vigilant stance on accounts meant to advance information operations. But, as can be expected in conflict, adversaries respond to each other, and the defender’s reaction to aggression is apt to be met by the aggressor’s own counterreaction. Facebook now watches for politically active accounts emanating from St. Petersburg (reported to be the nexus of Russian interference activity). Therefore, in targeting Ukraine’s March 31, 2019, elections, Russian intelligence agents adopted a different approach in pursuit of an older goal. Analysts identified Russian actors contacting intermediaries who could “find people in Ukraine on Facebook who wanted to sell their accounts or temporarily rent them out” in order to “publish political ads or to plant fake articles.” As a Brookings Institution blog observed in connection with this ploy, “Russia has . . . used Ukraine as a test-lab for its arsenal of political warfare” and “Ukraine’s experience is thus a bellwether for assessing the Russian tactics that may be deployed against the West.”8 This activity does not cause physical destruction, but it likewise cannot be realistically dismissed as “reversible.” The acknowledgment that some cyberattacks have demonstrated an ability to exert physical effects is also important when determining the degree to which cyberattack should be thought of as having “reversible” effects. Likewise, the notion that data can be backed up and kept secure can mollify potential concerns about the abrupt compromise or deletion of data; Saudi Aramco’s loss of operability for 30,000 computers and its loss of data when an attack coincided with imperfect system backups due to
a Ramadan holiday warn against taking comfort too quickly, however. The opportunity to back up data touches on the target’s own role in determining the impact of an attack, although the opportunity also brings more somber questions about implications and responsibility. Finally, the image of reversible effects, which is continually reinforced by events like website defacements and DDoS attacks, undercuts demand for greater effort on some aspects of cybersecurity work, such as attribution.9

A MORALITY OF REVERSIBLE CYBERATTACK?

The conclusion that cyberattacks exert only temporary effects gives rise to a discussion among ethicists. This volume is not a work on cyberwarfare ethics, but the literature about cyber conflict includes advocacy of cyberattack on ethical grounds that are founded on the premise that cyberattack effects are less extensive and less persistent than those of conventional weapons of kinetic war. Jeremy Rabkin and John Yoo openly argue that “the most attractive aspect of cyber operations from a tactical standpoint is that they can be customized,” for the reason that “cyber weapons have far more value as a more precisely tuned means of coercion between nations, rather than as a weapon of mass destruction.”10 Both co-authors are law professors, and Yoo also served controversially as deputy assistant attorney general from 2001 to 2003; while their argument extends beyond cyberattacks to include unmanned vehicles and robotics, a recurring message is that precise targeting by militarized forms of high technology can increase the opportunity to wield legitimate coercion against rogues on the international stage and that this ability to coerce will keep wars small, limited, and (implicitly) rare. Reversibility of cyberattacks would fit neatly into this line of argumentation. But the idea that weaponized high technology will be used only by good guys for the purpose of coercing rogue entities and preserving peace is appallingly simplistic. If such an approach truly proved itself to be an effective tool for maximizing a favorable peace, adversaries would quickly seek ways to redress the new balance. This might entail the deployment of the same kinds of technologies in order to pursue very different kinds of objectives. Alternatively, it might involve the development of strategies that minimize the utility of the new high-tech weapons. The assumption that cyberweapons (and other high-tech tools) can be deployed in a tightly controlled way rejects historical precedent and rationality—not to mention running afoul of Rabkin and Yoo’s own characterizations of how international law has been used to assist rogue actors. Their justifiable objections to the artificial and selective deployment of international law illustrate how a deliberately constructed tool can nonetheless be repurposed for very different ends. Deep flaws are embedded within their argumentation,
casting a shadow over blithe notions of cyberweapons being used as controllable and finite tools for minimally destructive coercion of bad actors. But optimistic voices persevere, offering a “better state of war” and positing that “questions regarding the character and nature of war emerge when targets can be turned off and on rather than being destroyed.” One author suggests that the cyber domain offers “trustworthy methods to disarm combatants or negate defenses from great distances with less violence.”11 But the ability of ordinary people to participate in some (low-end) cyber actions raises the question of whether such people retain a “noncombatant” status. Fortunately, the logic runs, “citizens engaged in cyber warfare . . . may be targeted with commensurate cyber weapons instead of physical responses,” thereby eliminating violence and destruction from warfare. Citing the unofficial Tallinn Manual devised by legal experts, this perspective holds that “nonlethal cyber warfare options may conceivably be justified before more traditional forms of war if they help de-escalate a conflict.”12 Once again, however, the argumentation is premised on the power of nonlethal—and now even nondestructive—effects winning a conflict, while the adversary impotently and inertly concludes that the reversible effects of cyber operations have overcome his collective will or capacity to resist. No matter how welcome and desirable this outcome might be, no one should engage an enemy with an expectation that the results will be so straightforward. War is an interactive process and the enemy gets a vote—such observations across centuries remain accurate and cannot simply be switched off. This does not prevent other scholars from making implausibly bold assertions. For example, “cyberwarfare makes possible the realization of ideal warfare, a war waged in a way that is maximally discriminate and proportionate” as a result of cyberweapons being “programmed to communicate constantly with command and control servers” and thereby “kept on a short leash.”13 That writer meekly concedes at the very end that “all of this is dependent, of course, on the willingness and ability of states with powerful cybercapabilities to deploy cyberweapons in ways that are proportionate and discriminatory.”14 However, one of the prominent features of the cyber domain is the ever-increasing number of devices and the proportion of the global population that is connected. The lowering threshold for participation, and the widening range of players, is paralleled by factors discussed earlier that frequently extend the applicability of cyberattack tools beyond a “single use” or a short “shelf-life.” Thus, this proviso that the powerful states show restraint is demolished at a stroke, since cyber operations are absolutely not the preserve of powerful states. While Deibert has more accurately suggested that “many forms of current and future cyberwarfare will often not involve deaths and widespread
permanent physical destruction,”15 some questions remain about how problematic even so-called reversible effects would be when felt by bystanders. Tallinn Manual 2.0 reflects discomfiture with difficult-to-escape implications of even careful cyberattacks. In keeping with wider conventions, cyber infrastructure that is being used both by civilians and by the military “is a [legitimate] military objective” and yet “an attack on a military objective that is also used in part for civilian purposes is subject to the rule of proportionality,” leading to expectations that, prior to launching a cyberattack, a study would be undertaken to “determin[e] whether an attack would be lawful,” depending on the potential for harm befalling either civilians or “civilian objects.”16 Given that even an attack on dual-use cyber infrastructure would require the avoidance of impact on civilian people and objects before meeting with acceptance from Tallinn 2.0 scholars, the expectation seems to be that reversible effects would be the only practicable alternative that would involve engaging the target. Some reversible actions inadvertently pose problems for bystanders. In one case, an individual citizen combatted cybercriminals by deliberately overusing bandwidth temporarily to freeze thieves out of access to fake bank accounts they had earlier set up. However, deploying software that is designed to consume excessive bandwidth interrupts all traffic, including the traffic of targets (in this case cybercriminals) and that of other users attempting to conduct legitimate business. The configuration of traffic flows online means that such actions can easily disrupt neighboring traffic.17 The impositions that this inflicts on legitimate traffic, let alone on potentially legitimate entities that are being attacked, contribute to the rationale for DDoS attacks being deemed illegal. When “damage” is measured in terms of downtime and repair efforts rather than in terms of shattered buildings or fried computers, it can be notoriously more difficult to calculate. But can the former kinds of inconvenience (and the second-order effects of the lost opportunities that they represent) accurately be considered completely “reversible”? Analysts studying the impact of cyberattacks against Estonia in 2007 noted the substantial costs of remedial actions made necessary for the public sector, as well as the financial impact on the nation’s major banks. These were deemed to pale in comparison to the psychological toll on both national-level policy makers and the population as a whole. “Had the cyberattacks disabled the provision of vital services on a large scale, public trust in the government and digital infrastructure would have been seriously compromised.”18 In the midst of the attacks, of course, there was no way to anticipate realistically whether the scale of attacks or selection of targets might shift in exactly this direction. Likewise, compromises of Ukraine’s election committee in April and May 2014 might not have been physically destructive, but eradication of information on databases not
only complicated attribution of the attacks (which were claimed by the anti-democratic group CyberBerkut) but also inherently undercut confidence in the validity of democratic processes.19 Other kinds of actions may seem nonviolent in and of themselves, but they are harder to actually undo or they fit within a context of escalation that comes to include irreversible actions. The reported confrontation between the nebulous hacker collective Anonymous and the Mexican drug cartel Zetas in 2011 illustrates the point. When a potential member of Anonymous was reported kidnapped by the gang, fellow hackers declared that they had gained access to relevant databases and threatened to disclose the identities of officials who had been bribed or were cooperating with the Zetas gang. This threat to publicly identify or dox a vulnerable figure has become a part of hostile activity in cyberspace, as has the actual release or doxing of damning or personal identifying information. Although doxing does not directly cause violence, it would be difficult to credibly suggest that a victim of doxing had been dealt a “reversible” attack. The gang’s escalatory response of threatening to murder ten civilians for every doxing conducted reportedly brought an end to Anonymous’s threat.20 Even actions that are not directly destructive can involve irreversible effects.

NOT NECESSARILY SO REVERSIBLE

If ostensibly nonviolent activities in cyber are of questionable reversibility, the physical damage wrought by cyberattacks specifically tailored to attack particular kinds of systems such as SCADA is inherently irreversible. The insider threat attack on Maroochy Shire’s sewage treatment plant marked the beginning of infrastructure attacks just prior to the start of the 21st century. Vitek Boden, a contractor who had worked on the county’s sewage treatment control systems, was angered when he was not later given a government position overseeing the facilities he had helped build. Using a radio transmitter, a laptop, and his own knowledge of the control system, Boden arranged the improper release of hundreds of thousands of gallons of untreated sewage into public parks and waterways, resulting in both “an environmental disaster” and “a health hazard to the local population.”21 Despite being the most famous example to date of an attack on SCADA, and the one most commonly ascribed to potential political motivations, Stuxnet was neither the first nor the most recent example of SCADA being targeted in order to yield a kinetic effect. Whether because of the reported policy motivations or for other reasons, the impression has taken hold that Stuxnet was not only a notably sophisticated piece of malware but also
the cyberattack par excellence of the early 21st century. Michael Hayden, former chief of the NSA, reinforced this perspective in his memoir: Someone had just used a weapon composed of ones and zeros, during a time of peace, to destroy what another nation could only describe as critical infrastructure. When the fact of the attack became public . . . it felt to me a little bit like August 1945. Mankind had unsheathed a new kind of weapon. Someone had crossed the Rubicon. A legion was now permanently on the other side of the river. We were in a new military age.22

Forensics experts have argued that the malware was designed in ways that specifically increased the amount of tangible damage it inflicted.23 A tailored cyberattack struck a steel mill in Germany in 2014, resulting not only in physical damage but also in an apparent sense of strong discomfort about disclosing details of the attack. From what information has been released, the intrusion began with a spear phishing email by which attackers gained access to networks. The attack that was ultimately launched involved multiple control system component failures that cumulatively prevented operation of emergency shutdown features that might have contained and mitigated the damage. SANS analysts noted that the German government’s Federal Office for Information Security released a report that remained mute not only about possible motives of the attackers but also “does not provide further detail on the type (blast furnace, basic oxygen furnace, arc furnace) and therefore does not provide enough information to determine the type of damage that could have been caused by mis-operating” of the systems.24 Brian Mazanec has suggested that “cyber weapons are well suited for attacks on logistical networks, reinforcements, and command-and-control facilities,” and such episodes indicate that skilled teams, resourced by motivated and patient sponsors, can deliver irreversible physical effects, even if the majority of cyberattacks do not bring spectacular and vivid images of burst or shorn or burnt physical artifacts.25 Evidence suggests that a range of cyberattacks, including but not limited to the physically destructive, can bring chaos to a targeted society.

CHAOS

Elizabeth Dubois of the University of Ottawa succinctly notes that “chaos changes the balance of power internationally, and so if you don’t like the amount of power you currently have in the global system, creating a chaotic system is going to help you, potentially, maneuver to a position where you have more power.”26 In short, actors pursuing a revisionist agenda will likely seek to introduce discord or uncertainty into a
status quo in hopes of then achieving a fait accompli. An infamous modern example involves Russia, especially regarding the controversy about the nature and extent of its activity in the lead-up to the 2016 elections in the United States (although it was by no means the only election scenario in which allegations of cyber-induced chaos have appeared). According to one former intelligence professional, Russian president Vladimir Putin “wanted . . . to muck up that system and he did a beautiful job of it,” although “we have also now recognized that” and potentially awakened to the threat of psychological tampering via the internet. Journalist David Sanger wrote bluntly, “if the Russian goal was simply to trigger chaos, it worked.”27 Kello, as an international relations scholar, calls cyberspace “a perfect breeding ground for political disorder and strategic instability,”28 and the calculated spreading of misinformation certainly illustrates that dynamic. Tweeted rumors of Ebola cases in the continental United States or chemical explosions on the Gulf Coast29 can spread quickly and illustrate how a lie can circle the Earth before a correction can be enunciated. The first years of the 21st century saw U.S. political and military efforts focused largely, and understandably, on confronting jihadi terrorism. One government study from this era warned of the “growing impact of persuasive technologies to manipulate our abstraction of reality and consequently the truth,” adding that the common tendency to seek news from already trusted venues risked information being presented “devoid of necessary context,” perhaps heightening the chances of “nonstate actors to directly impact” perceptions and attitudes.30 Subsequent years brought popular disaffection with aspects of the campaigns against terrorist organizations and also signs of renewed alertness to the approaches and capabilities of other nation-states, but the conclusions identified in 2005 remain relevant. Analysts note that some cultures and organizations, such as in China, interpret the U.S. military concept of the OODA (observe, orient, decide, act) loop differently from U.S. strategists. The concept was developed to illustrate aerial combat, in which a comparative speed advantage in the OODA process provides a substantial advantage for one combatant over an opponent. However, whereas many thinkers may seek to gain a speed advantage in OODA by accelerating the pace of each step, researchers suggest that Chinese strategists see opportunity in using information warfare to slow the adversary’s progress through each step, thereby gaining a comparative advantage without requiring a wholesale revision of one’s own OODA processes.31 Inducing chaos would be an effective method of slowing an adversary’s OODA steps, and the cyber domain offers different avenues through which to accomplish this.
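The arithmetic behind that asymmetry is simple: what matters is the ratio of the two sides’ cycle times, so lengthening an adversary’s loop can pay the same dividend as shortening one’s own. A toy model (all cycle times are hypothetical, not drawn from the sources cited here):

    # Toy model of OODA-loop tempo: the side completing more decision
    # cycles per unit time holds the advantage, captured here as the
    # ratio of the adversary's cycle time to one's own.
    own_cycle = 10.0        # hours per complete OODA cycle
    adversary_cycle = 10.0

    baseline = adversary_cycle / own_cycle                 # 1.00: parity
    # Option 1: accelerate one's own loop by 20 percent.
    accelerate = adversary_cycle / (own_cycle * 0.8)       # 1.25
    # Option 2: slow the adversary's loop by 25 percent.
    slow_adversary = (adversary_cycle * 1.25) / own_cycle  # 1.25

    print(baseline, accelerate, slow_adversary)
    # Both options yield the same 1.25 tempo advantage; the second
    # requires no revision of one's own processes, which is the
    # attraction described above.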
If cyberattacks can, in Sanger’s words, “be used to fray the civic thread that holds together democracy itself,”32 chaos can also be induced through actual cyberattacks. A senior U.S. Navy information warfare official stated that the disruption that natural disasters create through damaging infrastructure illustrates part of the danger, but she added that “the good news with hurricanes is that at least there is some warning” through weather prediction, in contrast to the potential for abrupt surprise effects being meted out in an attack. Cybersecurity writers have voiced concerns about disruptions “in the event of cybered conflict” launched by an entity with APT capabilities, which might have spent years reconnoitering targets and preparing attacks.33 This prediction must be contextualized alongside other experts’ observations about how patches, other software upgrades, or network reconfigurations potentially raise the bar for actually achieving the kind of surprise referred to in such a prediction. Analysts have pondered the consequences of an aggressor launching such a cyberattack. On one hand, Libicki argues that “detonating cyberattacks in clusters has its benefits” because “confusion may be created when many things go wrong unexpectedly at once.” However, he has conceded that the rapid use of cyberweapons (he assumes that repeated use leads to diminishing effect) also quickly depletes the stockpile of remaining available cyberattack tools.34 Researcher Heather Roff has observed that cyberweapons are “notoriously . . . difficult to control once released,” and this adds to the challenge of determining the effects a cyberattack has had.35 While Libicki is on record musing that in the cyber domain an attacker may actually have better situational awareness and understanding about the effect of a cyberattack, he has noted that “the prospects for improving battle damage prediction . . . may prove largely illusory” and cyber test ranges “can yield widely varying results” when a cyberattack is gamed out. Nonetheless, he has written that understanding the effect of a launched cyberattack is as important as the actual preparation and launch itself, and that without such an understanding a cyberattack “would be no better than the proverbial shot in the dark.”36 Thus, delving into cyber conflict means to a real extent leaping into the dark, not only imposing chaos on a target but also inviting uncertainty for oneself.

THE DOGMA OF THE OFFENSIVE

The idea that cyber struggle inherently favors the offense has been repeated so frequently that it has become cliché. Authors P. W. Singer and Allan Friedman pointedly compared this idea of the inherent superiority of offensive actions in the cyber domain to the cult of the offensive mentality that was in vogue with several militaries in the years preceding World War I.37 Numerous rationales have appeared for the purpose of explaining this widespread perspective that the offensive is naturally superior in the cyber domain, and although some researchers have


upheld the practicality of defensive action, the notion retains cachet. If correct, it means that the cyber domain is fundamentally different from the physical domains of warfare, which collectively reflect the Clausewitzian concept that offensive action is necessary to bring a war to a conclusion but that defensive actions carry natural advantages over offensive actions on the battlefield. If this Clausewitzian truth is invalidated in cyberspace or by struggle in the cyber domain, then intuitively many long-standing ideas about security affairs would give way to unnerving and dangerous chaos.

A prime concern about offensive action in cyberspace involves the fact that countries with relatively advanced economies tend to be capitalistic, democratic, and often electronically connected. What is an advantage each day in peacetime is transformed into a vast and vulnerable attack surface either during a conflict or in the sort of “grey zone” of ambiguous dynamics in which APTs and other sinister actors in cyberspace thrive. In these circumstances, “information networks are the Achilles’ heels of the United States,”38 and cybersecurity advocates like the former cyber czar Richard Clarke add that “offensive prowess cannot make up for the weaknesses in our defensive position.”39 Other analysts contend that “if a country has valued assets but is militarily, economically, and politically weak or lacks the capacity or will to combine cyber responses with real-world kinetic sanctions, then it becomes a mighty inviting cybertarget” and that power projection is “tempered to a great degree by corresponding vulnerabilities” to cyberattack.40 This danger sharpens in view of an expanding attack surface and any examples of ingenuity in malware development. The result is cyber domain threats that “evolv[e] faster than our understanding of them,” according to one study by the Center for a New American Security. Schneier concurs: “we are designing systems that are getting more complex faster than our ability to secure them. Security is getting better, but complexity is getting faster faster and we are losing ground as we [work to] keep up.”41

The presumed cost-effectiveness of “mounting a continual barrage of cyber-attacks” is another rationale.42 Libicki has chimed in that a dollar of offensive cyber power buys more damage than a dollar of defensive cyber power can protect.43 Still other concerns involve the presumptively instantaneous speed of cyberattacks or the difficulties of tracing and countering attackers. “The instantaneous timescale of cyber operations becomes a vital asset on the offensive, especially when prosecuting time-sensitive targets,” while dual-use cyber infrastructure “provides cover and concealment for response actions.”44 A strong quibble might be raised about the need to distinguish between “instantaneous” effects and “abrupt” effects (and what this might mean about what kinds of cyberattacks would be involved or precluded from such a scenario), but the point holds water. Brantly points


out that “anonymity favors the attacker in nearly all situations within the cyber domain,”45 an issue that will be addressed further in the next chapter.

Many arguments thus seek to explicate the inherent superiority of offensive cyber over defensive cyber, and an extension of this perspective argues not only that offensive cyber is superior to its defensive counterpart but also that offensive cyber conducted by capable but non-first-tier powers may effectively trump the capabilities of the countries traditionally deemed the leading powers globally. Brantly enunciates this outlook by stating that “a small state wielding cyber weapons might have a greater degree of relative power than a large state as it has significant theoretical and demonstrated capabilities but few vulnerabilities.” He points specifically to North Korea, which “can develop an offensive cyber unit and ignore the need for a defensive cyber unit because it has little to no reliance on cyber technologies in its homeland”;46 the implicit conclusion is that under-connected countries with overdeveloped militaries will be able to punch well above their weight in hostilities in cyberspace. At present, this intriguing concept remains too hypothetical to explore comprehensively, although it carries a few passing similarities to notions that early 20th-century Italy, whose geopolitical stridence and martial posture far outpaced its actual industrial potential, would disproportionately enjoy the strategic benefits of military developments like the introduction of the air domain to warfare.

Hayden has stated: “I’m not worried about that mature nation-state actor conducting a cyberattack as a bolt from the blue. I’m beginning to worry about the mid-range, isolated rogue nation-state actor—the North Korea or in some circumstances the Iran—who, out of desperation, might actually go try to do something like that.”47 Author and journalist Gordon Corera concurs that because cyberweapons are easier to produce or to use than nuclear ones, they are “an attractive option for weaker states.”48 Robert Mandel argues that a less powerful country can use cyberattacks to inflict “pain at every turn,” even “despite the huge military power disparity” that might exist, such as is the case for Ukraine as it copes with pro-Russian separatist military activity and cyber domain pressure attributed to Russia.49 Conti and Raymond do not go so far as to suggest that smaller countries exceed the power of larger ones, but they do observe that reliance on sophisticated weapons systems can accentuate vulnerabilities to cyberattack.50

The dogma of the superior impact of the offensive has gained traction with some reason, and as with other generalizations, it can persevere as a dogma as long as it is continually fed by sufficient and seemingly plausible evidence that validates it. One additional factor that helps sustain the offensive dogma unfolds from the fact that a successful attack (particularly an overt one with abruptly visible effects) to some extent tips the hand of an attacker and draws attention to the event. A related aspect is


identified in some of the existing literature: “cyber weapon capabilities generally depend on the exploitation of unknown target vulnerabilities, so declaration of capabilities may identify those target vulnerabilities, thus rendering the capability useless in the future.” Reading the scenario in another way, a powerful leveraging of an exploit against a vulnerability could (ironically) “compromise” the future utility of the compromise, but in the meantime psychological advantage might be wrought by allowing awareness of the attack’s success to be socialized.51 Successful defenders are wise not to crow too loudly about their own successes, since an attacker needs to be right only once, whereas a defender has to be right every time; furthermore, failed attacks are far more easily hidden from a global public than can be the case with compromised defenses—unless an intrusion is undertaken for purposes that aim to preclude or delay disclosure, such as sophisticated espionage conducted by APTs.

THE DEFENDERS’ POSITION

Some factors mitigate what might appear to be an overwhelming advantage of offensive cyber, and different researchers have pointed alternatively to limitations of a cyberattack or to opportunities available to defenders. Two notable limitations of cyberattacks have been explained by Rid and Libicki, whose work frequently aims to dispel what they view as overblown fears or exaggerations about the domain. Relying on the point that the use of a cyberweapon betrays the existence of the exploit and prerequisite vulnerability, Rid has indicated that “when it comes to cyber weapons, the offensive has a shorter half-life than the defense.”52 This may be true, although the record shows many cyberattack tools being employed successfully long after their first use—sometimes many years after the moment that would theoretically have started the countdown on their relevance in cyber conflict. It also does not address the point (outlined by Libicki) that given adequate testing facilities an attacker can examine effects as part of a process of modifying and developing a cyberattack tool that becomes visible only upon deployment.53 This means that although an attack may indeed have a shorter half-life than a defense, the clock begins running for the defense as soon as it is released, while the clock on the attack starts only with the identified launching of an attack.

Seeking to dispel excessive notions of the power of the offensive in cyber, Libicki writes that “a cyberattack cannot disarm a target state’s cyberwarriors.”54 This also may be true, but it too should not be carried too far. Rather, it might be compared to when a bomb destroys a tank or a parked aircraft without injuring a dismounted crew; the bomb makes a weapon system inoperable but does not disarm those combatants. However, the combat power of various specialized forces is intrinsically related


to the equipment they are skilled in using. Put simply, tankers lacking tanks and pilots lacking planes represent vastly less combat power when they become foot-bound ersatz infantry; although they are not rendered helpless, their effectiveness is definitely impacted. Libicki is right to encourage readers not to leap to conclusions about cyberwar that are reminiscent of interwar strategic bombing advocates who insisted that airpower could induce fatal industrial paralysis by dropping high explosive bombs on factories. This pitfall can be avoided while still acknowledging that attacks will exert a potentially significant impact without being totally devastating.

Greater perspective about the limitations of a cyberattack can help mitigate an excessive faith or fear in the power of attack in cyberspace, and ideas about defenders’ own options and opportunities figure importantly as well. Planning and redundancy are two valuable tools: developing response plans puts into place steps to take in response to an event that will occur at the time chosen by the attacker, and infrastructure redundancy provides a cushion against some attacks and can help buy time for a defender to put its contingency reaction plans into action.55 International cooperation can be another useful, but difficult to orchestrate, tool. Cybersecurity advocates have observed that the NATO partners underprioritized cybersecurity threats before witnessing the evidently politically motivated and orchestrated DDoS attacks conducted against Estonia in 2007. Not only did cybersecurity experts provide assistance to Estonia during this high-profile watershed event, but NATO by the following year had reevaluated its estimation of the importance of cybersecurity; although it did not consider the cyberattacks to trigger the NATO charter’s Article 5 clause about collective defense, NATO established a new Cooperative Cyber Defence Centre of Excellence and selected the Estonian capital city of Tallinn as its headquarters in a pointed move meant to signal solidarity with the new NATO member that had been subjected to the wave of cyberattacks.56

For the future, Conti and Raymond predict an ongoing “arms race for advanced capabilities” in which defenders will continually “lag slightly behind” attackers. While this seems to predict a future that looks much like the present, they note that future research will likely lead to an exploration of opportunity areas for defenders that have so far been overlooked. For example, “deception isn’t only for the offense,”57 and a more conscious use of deception by defenders could help them withstand attacks and gain intelligence about attacking adversaries. This would turn on its head notions that an attacker would have better in-the-moment intelligence about an attack than would the defender. Nor is the concept without precedent, since the development of a honeypot on the Berkeley computer system played a key role in enabling researchers to trace the 1986 Cuckoo’s Egg hackers (the first infamous case of strategic online espionage) to their origins in West Germany.
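The mechanics are simple enough to sketch in a few lines of Python. The listener below is a toy, not a production honeypot (the decoy port and log format are assumptions chosen purely for illustration): it squats on an otherwise unused port, accepts whatever connects, and records the source address and first bytes received for later analysis. Real honeypots convincingly imitate genuine services and run on isolated hosts.

    import datetime
    import socket

    LISTEN_PORT = 2323          # hypothetical decoy port; any unused port works
    LOG_FILE = "honeypot.log"   # hypothetical log destination

    def run_honeypot():
        # Listen on all interfaces; a real deployment sits on an isolated host.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(5)
        while True:
            conn, (addr, port) = srv.accept()
            conn.settimeout(5)
            try:
                first_bytes = conn.recv(256)  # capture the visitor's opening bytes
            except socket.timeout:
                first_bytes = b""
            finally:
                conn.close()
            with open(LOG_FILE, "a") as log:
                log.write("%s %s:%d %r\n" % (
                    datetime.datetime.utcnow().isoformat(), addr, port, first_bytes))

    if __name__ == "__main__":
        run_honeypot()

Every connection to such a decoy is by definition unauthorized traffic, which is what makes even this trivial version useful as an intelligence tripwire.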


Unpacking the dynamic between defenders and attackers indicates that the offensive in the cyber domain does indeed include a number of advantages for the attacker. But analysis shows that defenders are not necessarily left universally hapless. This conclusion, whether seen holistically as more reassuring or foreboding, dovetails with another recurring assertion about cyberattacks: that the victim is at fault.

BLAMING THE VICTIM

Some cybersecurity authors ask whether, if defenders’ actions can substantially foil an attack or mitigate its impact, the defender should not therefore shoulder some actual degree of responsibility for a breach. Indeed, since many of the most lucrative targets in cyberspace are institutions that possess and utilize the data entrusted to them by vast numbers of people, the breach of an institution can seriously threaten the data integrity of individuals who had no input into the security measures undertaken for safeguarding their data—or even any acknowledged right to refuse the breached institution permission to collect the data in the first place. Questions of responsibility grow still more complex when cooperative or contracting arrangements trigger the connection of systems with different cybersecurity standards. The years 2013 and 2014 saw large-scale data breaches at Target and Home Depot, resulting in customer data compromises for millions of individuals. Following the breaches, forensics teams in both instances discovered that in-the-moment detection was impeded by attackers using valid vendor credentials, but that in the case of Target, the networks were arranged without controls to limit a user’s access once within the system. The hack of the Office of Personnel Management (OPM) was disclosed a year after the 2014 Home Depot hack, and it hit an organization charged with storing the data of current and past federal employees as well as of contractors associated with federal projects. Yet it was reported after the attack that OPM lacked even a list of its own computer servers and that no regular scanning was done to search for possible vulnerabilities.58 When the breach was first disclosed, OPM underestimated the number of victims by tenfold, only later conceding that information on twenty-two million people had apparently been exfiltrated; even then, the response resembled the reaction to a criminal attack despite reported evidence that the action was actually conducted for intelligence purposes.59

The understandable outrage of individuals whose information has been compromised by security lapses raises the question of whether the victim deserves blame for an attack’s success. Libicki argues that “cyberattacks require the victim’s complicity,” and, although “the responsibility for secondary impacts is always contentious,” it is “a system’s faults, as


much as or more than a hacker’s skill,” that “determine whether it can be exploited.”60 Provided that the end-user—who to date has no realistic input on cybersecurity policies nor an easy way to access information about their details—is not interpreted as the culpable party, this argument carries some important merit. Libicki certainly seems to intend his argument to apply to the institutions that mishandle data and facilitate intrusions, rather than to the individuals whose data is compromised. For this reason, Libicki decries trends in which the government “promis[es] to indemnify infrastructure providers for their failures if they follow rules and fill out their paperwork.” Cybersecurity practices would improve, he argues, if the breached entity were legally found responsible for the costs associated with the compromise.61 Admittedly, this line of argumentation boasts both a cogent logic and a visceral sense of justice when applied to institutions whose laxity betrays their clients, customers, or citizens. After all, “in theory, all computer mischief is ultimately the fault of the system’s owner.”62

Unfortunately, some threats pose challenges that cannot realistically be anticipated and overcome by a defender, particularly since even refinements in cyber defenses must race against newly discovered vulnerabilities, whose identification is reportedly accelerating. Libicki himself concedes that “in practice, none but the most skilled or well-heeled buyers can avoid purchasing commercial software,” meaning that at some point an outside vendor’s product will have to be trusted. This danger is paralleled by the prospect of supply-chain attacks, in which hardware components are tampered with at manufacture; although extremely damaging to the reputation of a supplier once discovered, these vectors of attack are also reportedly very hard to detect.63 Due diligence seems reasonable to expect—and demand—from institutions. However, it is more difficult to precisely define, particularly in a world in which “no matter how secure your facility is, it can be breached.” Some polemic writings have compared contemporary cybersecurity services to worthless and misrepresented snake oil,64 and although most leveraged weaknesses may indeed result from surprising levels of laxity that hark back to the notion of “complicity” that Libicki describes, researchers find that other attacks are so well researched and resourced that a defender cannot prevent them. Kevin Jones of the Airbus corporation highlights the dangers of extending the “victim’s complicity” logic all the way to the end-user: “Let’s remember links are meant to be clicked, PDFs are meant to be opened.” Even realistic levels of diligence and care can fail in the face of determined attackers; typosquatting on a very similar URL (he uses a hypothetical malicious link to “über.com” instead of “uber.com,” a trick sketched in code below) can mean that a user gets “owned by some APT.”65 Analysts have contended that Estonia might have more quickly identified the early phases of the 2007 DDoS attacks by noticing the uptick in automatic (and artificial) traffic that participated in the attack; others have hypothesized that Iran’s response to Stuxnet might have been more effective sooner had Iran been an open enough society to dialog with cybersecurity professionals rather than hermetically hunkering down until the malware’s later spread to computers outside the country finally brought the issue global attention.66 Other studies, however, have noted that in the latter case the target had evidently taken significant steps and yet still had been attacked.67
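A minimal sketch of a countermeasure to the lookalike-URL trick, using only Python’s standard unicodedata module (the short whitelist is an invented assumption for illustration), collapses accented lookalike domains to a plain skeleton before comparison. Production-grade confusable detection, such as the Unicode consortium’s TR39 skeleton algorithm, is far more thorough.

    import unicodedata

    TRUSTED = {"uber.com", "paypal.com"}  # illustrative whitelist only

    def skeleton(domain):
        # Crude "skeleton": strip accents and other combining marks,
        # so a visually similar domain collapses to its plain form.
        decomposed = unicodedata.normalize("NFKD", domain.lower())
        return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

    def looks_like_spoof(domain):
        # Flag a domain that is not trusted itself but mimics a trusted name.
        return domain not in TRUSTED and skeleton(domain) in TRUSTED

    print(looks_like_spoof("über.com"))  # True: a lookalike of uber.com
    print(looks_like_spoof("uber.com"))  # False: the genuine domain

Even this toy illustrates Jones’s point: the filtering has to happen somewhere in the infrastructure, because no reasonable amount of end-user vigilance reliably distinguishes the two strings by eye.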


CYBER STRUGGLE IN RELATION TO THE OTHER DOMAINS

Nesting a new domain into the larger context of armed struggle poses significant challenges—not only today regarding cyber but in the past with the domains that were introduced before. Space reconnaissance ushered in an intelligence data revolution during the Cold War. But two decades elapsed between the mid-1950s and the mid-1970s, during which the United States and the Soviet Union expressed interest in the space domain, began to explore it, began to utilize it for reconnaissance purposes, and came to a quasi-official acknowledgement of a legitimate reconnaissance role for space vehicles (and official disclosure of space reconnaissance activity did not occur for another two decades after that). Although the air domain was brought into intense and diverse combat applications barely a decade after the invention of the first controllable heavier-than-air vehicle in 1903, the following decades were marked by heated controversy about the most useful applications of airpower, appropriate organizational relationships, proper doctrinal concepts, and the fittest technological priorities for airpower. The history of naval warfare, which stretches back three millennia, is strewn with examples of combatants, from the Ancient Romans to the Imperial Spanish to the Revolutionary French, who struggled with the domain and sought ways to treat naval combat akin to land warfare.

Observations that “cyber warfare has become a necessary element in the modern conduct of war” may be accurate, but they do little to clarify the role it will play or the relationship it will exhibit to the other fighting domains. Even regarding coercion, arguably an area in which cyber power could be valuable, its impact remains open to some speculation.68 Controllability and predictability of effects remain areas of concern for authorities such as Mandel as well as for Conti and Raymond, the latter two warning that “we may think we are using a precise [equivalent of] a sniper rifle and instead end up spraying unintended effects around the planet” that inflict unintended damage. Mandel considers cyberattacks to be most effective when the attacker already possesses technical and intelligence advantages and relationships with sympathizers in a target society who could represent an insider threat to the defender, and


when the defender is hampered by inattentiveness to cybersecurity.69 But the point that cyberattacks could be useful to a combatant who already holds a number of strategically valuable cards is less a predictive assertion than an implicit consideration of cyberattacks as a supporting and enabling tool. The space, air, and naval domains each first proved their relevance by demonstrating their utility in supporting functions for the existing domains, and Conti and Raymond wisely argue that cyber operations “must be responsive to kinetic warfighters’ requests for support” in order for them “to be taken seriously by the kinetic warfighting community.” Friction may be unpleasant but it can be expected “until cyber operations become routine.”70 Suggestions about leveraging cyber for intelligence to support so-called virtual peacekeeping, conjecture about Chinese interest in using cyber in the context of information operations for psychological effects, speculation about the gulf between terrorists’ interest in wielding cyberweapons and their gaining an effective capability, and reports of various applications of the cyber domain to conflicts within Georgia or Sri Lanka or Ukraine all encourage further examination.71 They also point toward cooperative and ultimately supporting roles for cyber operations.

There is a sort of stigma in military organizations about being seen as a “supporting” partner, and this may inspire other voices to suggest a more co-equal role for cyber operations and operators. In 2007, the reported use of man-in-the-middle cyberattacks to spoof normal airspace conditions to Syrian air defense controllers while Israeli Air Force jets flew past to launch an unchallenged kinetic strike on Syria’s suspected North Korean-backed nuclear facility at al-Kibar illustrates how cyber activity in a supporting role can nonetheless factor impressively in influencing the outcome of a kinetic battlespace action.72 Discussion among Navy figures has included the suggestion that cyberattacks may be a locus of early escalation in an impending conflict73—and thus that cyber operations merit greater attention and resources, both for their place as the figurative canary in the coal mine and as a place where a conflict’s context could be shaped. Kello has noted that although “the capacity of cyber arsenals to augment military force is not . . . their main contribution . . . neither the goal nor the effects of a cyberattack need be contained in cyberspace,” a concept that has evidently spooked conscientious conservatives like Libicki enough to prompt his proposal of informal “Las Vegas Rules” for cyber conflict—that what happens in cyberspace should be kept in cyberspace.74 The idea that conflict can be corralled into a single and even artificial domain may seem reassuring. Whether that would be possible is another matter, particularly in scenarios wherein a cyberattack exerts kinetic repercussions such as property damage, human injury, or fatalities. When advocates of cyber operations wonder how “we can achieve through cyber what in the past we have had to achieve through a kinetic strike,”75


questions may be asked about whether in such a future those cyber actions (or indeed a range of cyber effects that might in the moment resemble them in the eyes of an alert defender) will still be seen as falling outside the threshold of escalatory retaliation. War does not occur in a vacuum, and changes in technology or tactics will trigger reactions among both friends and adversaries. Perceptions of what cyber activities mean for warfare will almost certainly be impacted by revelations about what cyberattacks can accomplish. Figures including Hayden have already stated as much when referring to the first kinetic-effect cyberattack attributed to political objectives as having been a crossing of the Rubicon. Conti and Raymond warn unambiguously that “cyberspace effects can be destructive and lethal” either as first-order effects or as second-order effects triggered by the malfunction of key systems, as shown by the fact that “even without malicious instructions, the lack of a computer to guide a process can cause physical destruction.”76 As with other formulations of limited war, the concept of a conflict contained to cyberspace is predicated on the assumption that the stakes are not sufficiently high for either side (or for a third party with an interest in the conflict’s outcome, or simply in stirring chaos as a means of promoting its own unrelated interests) to escalate, whether by kinetic actions or by cyberattacks intended to exert kinetic effects. A related argument appears from pundits like Richard Clarke, positing that inadequate defensive tools could accelerate escalation beyond the cyber domain.77

VISIONS OF CYBER AT THE FOREFRONT

The more optimistic appraisals nonetheless advance ideas that the cyber domain can reduce the destruction that has traditionally been an inherent part of warfare. These perspectives rest on a precondition that the parties to a conflict each seek to pursue their policy aims while also wanting to avoid bloodshed, and these arguments implicitly go on to assume that the avoidance of injury and destruction will trump other policy goals in the midst of a conflict. Such advocates point to the opportunities to limit conflict. For example, Rid’s famous book-length statement of the argument, Cyber War Will Not Take Place, appeared in 2013, contending that sabotage via cyber would be less physically destructive than its kinetic forebears, and thus that one of the most common forms of conflict in the cyber domain would mean attacks becoming “less violent, not more violent.”78 The same year, another analyst suggested that “cyber weapons have a versatility other ‘non-lethal’ technologies cannot match,” thus delivering a new level of practicality to non-lethal munitions that had been unavailable in past eras.79 Whether this humanitarian perspective is as applicable to combat scenarios as to


law enforcement situations or fulminating urban unrest is unclear, however. Impeding connectivity for protest organizers in the various states of the Arab Spring in 2011 had uneven results, and in several cases violence was not anathema either to those resisting the regime or to those seeking to eliminate dissent. Other writings, including well after the Arab Spring, continued to indicate that cyberwar “may expand the pool of options available to strategists striving to resolve hostilities” by means of “precision, discrimination, and the ability to produce effects without the normal trapping of lethality found in other domains.”80 Some ethicists have loudly heralded the potential of cyberwarfare, even arguing that “cyberwarriors . . . should define their profession by way of the potential of cyberwar to be a near-bloodless mode of warfare,” that “cyberoperations may save a nation from the ravages of war,” and that the avoidance of bloodshed enables “the realization of ideal warfare . . . waged in a way that is maximally discriminate and proportionate.”81 Indeed, Rabkin and Yoo make much the same point in Striking Power: the precision effects of cyberattack (and armed robotics and space weapons) would “provide states with the means to exert pressure at lower levels of destruction and casualties which provides great powers in a crisis more opportunities to divert from escalation to settlement.”82 The logic of this argument is seriously (even fatally) shaken when one realizes that the opportunity to coerce an opponent is related to the seriousness with which that adversary treasures its policy aims. The more strongly such objectives are valued, the less likely the adversary will abidingly retreat from them. In such situations, the power to coerce erodes.

The less optimistic but more measured analysis points to how cyber operations might make conflicts harder to quickly identify, rather than necessarily rarer or less destructive. In fact, George R. Lucas has asked whether “relatively nondestructive cyberweapons serve to lower the threshold against preventive war” and thereby facilitate the eruption of new conflicts.83 Libicki has hypothesized that unacknowledged “sub rosa warfare” may arise and usher into realization activities that would not gain approval or survive scrutiny in a war that garnered greater attention.84 Analysts have considered whether such attacks and acts of sabotage might not proliferate and whether relatively smaller powers might even be disproportionately interested in launching cyberattacks as part of this form of covert conflict.85 The cyber domain cannot be expected ipso facto to unlock a more peaceful or even a less casualty-laden world.

The widespread assumption that the cyber domain favors the offense is itself not unanimously held. Conti and Raymond suggest that at the tactical level, cyber offense indeed holds the advantage, since an attack is revealed at a time of its choosing while the defense is visible from the time software is deployed or a network is


brought into operation. However, the offensive advantage is concentrated at this local level (local in the sense of the technical logic of computers, rather than in a geographic sense). At the operational level of war, the relationship between offense and defense is “more even,” and at the strategic level, which side possesses an advantage “appears to be an open question.”86

Analysis of the actual effects of cyberwar draws into significant question, then, many of the mythologies that are commonly brought forward. The effects of cyberattacks have frequently been reversible (if the opportunity costs incumbent in artificially imposed downtimes and in recovery efforts are, somewhat unrealistically, ignored), but several cases have already shown lasting and damaging effects stemming from cyberattacks. The seeming flood of accounts about cyberattacks has fed a belief in the inherent advantage, and perhaps even an invincibility, of the offensive in the cyber domain, in stark contrast to long-held ideas and extensive evidence of the defense’s advantage in the physical combat domains. That dominance may be less sweeping than is commonly imagined, and it may be more concentrated in the lower levels of war than in the higher strategic and policy levels. The extent to which a defender can shape its vulnerability as a target raises the question of the culpability of victims, and if victims are culpable then questions would arise about how to consider providers, hosts, vendors, and end-users, not to mention the relationships among them. Different voices have proclaimed disparate ideas about how cyberwar will impact conflict in the other, existing, domains. Here, too, simple answers are not forthcoming, although (perhaps unfortunately) the optimists who cherish notions of cyberwar banishing bloodshed from geopolitical struggle advance a set of expectations that fail to credibly survive critical analysis. Many questions about the whos and the whys of cyber conflict remain for consideration, and several will be examined in the following chapter.

CHAPTER 4

The Attribution Paradox and Organizations’ Impact on Cyberwar

ATTRIBUTION CHALLENGES

Debate within the field about the feasibility (and requirements) of attribution dovetails with perspectives about the relative advantage of the offense in the cyber domain. Absent a path to meaningful attribution, those launching cyberattacks can theoretically do so without repercussions, and ineffective or mistaken attribution efforts could even draw innocent parties into the middle of a confrontation in cyberspace or beyond it. For this reason, scholars like Brantly and Mandel argue that attribution is “fundamental” to defenders in the cyber domain because ambiguity poses “debilitating” challenges to staunching further breaches and thus “anonymity is a powerful attribute of [offensive] cyber operations.” Pundits have observed with alarm how figures such as Russian president Vladimir Putin, widely held to be connected to cyber espionage chicanery on the eve of the 2016 elections in the United States, brazenly shrug off the very topic of attribution and point instead to the leaked materials as the sole topic of relevance.1 “A multitude of obfuscation techniques” pose potentially daunting challenges to analysts seeking to link sinister activities in the cyber domain to the people responsible.2 Carefully crafted privacy tools such as The Onion Router (TOR) were deliberately made widely available decades ago with the intention of facilitating democratic speech even in authoritarian countries where such sympathies are dangerous to hold; once at hand, however, privacy tools can be used by a panoply of actors online for a wide range of purposes, including facilitating criminal activity by hiding perpetrators or activities from the easy view of legal authorities.
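The layered-encryption idea behind TOR can be caricatured briefly: a message is wrapped in one encryption layer per relay, and each relay can peel only its own layer, learning its immediate neighbors but never the full path. The sketch below uses the third-party Python cryptography package and pre-shared keys, which is an illustrative simplification; the actual TOR protocol negotiates keys per circuit and is far more elaborate.

    from cryptography.fernet import Fernet  # pip install cryptography

    # One key per relay; invented names, pre-shared only for the demonstration.
    relay_keys = {name: Fernet.generate_key() for name in ("entry", "middle", "exit")}

    def wrap(message: bytes, path):
        # Encrypt the innermost layer first, so the first relay peels the outermost.
        for name in reversed(path):
            message = Fernet(relay_keys[name]).encrypt(message)
        return message

    def peel(blob: bytes, name):
        # Each relay can remove only the layer locked with its own key.
        return Fernet(relay_keys[name]).decrypt(blob)

    onion = wrap(b"dissident speech", ["entry", "middle", "exit"])
    for hop in ("entry", "middle", "exit"):
        onion = peel(onion, hop)
    print(onion)  # b'dissident speech'

The same property that shields a dissident’s message from an authoritarian censor shields a criminal’s traffic from an investigator, which is precisely the dual-use dilemma described above.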


The use of botnets allows relatively simple actions like DDoS attacks to be conducted but also potentially obscures the origins of an attack, since the “attacking” machines are in fact geographically distributed and will be owned and operated by users who are unaffiliated with the attack and likely unaware that their machines are participating in one in the background. Deliberately routing traffic in circuitous ways can facilitate an APT in its exfiltration of data while obscuring the location and identity of the collector, and a dispersal of such routing can even impact the rate at which exfiltration traffic occurs and reportedly obscure the very act of exfiltration itself. This illustrates how Corera’s suggestion that “proving . . . material was stolen is easy” but “proving it went to China is harder” and “proving who received it in China is almost impossible”3 can make the first part of the process sound easier than it may necessarily be. Some scholars have compared the dynamics to privateering before the mid-19th century, and one noted that “it is far easier to hide a government’s shadowy hand in cyberspace” than to maintain deniability about the privateers preying on gold-laden galleons in the time of Francis Drake.4

One important and recurring problem for defenders is to determine appropriate reactions and responses to an attack. But conventional wisdom holds that the answers to these questions are situation-dependent—that a cyberattack launched by a nation-state is a different kind of threat than one conducted by organized cybercriminals or one undertaken by hacktivists or others. This has to do with how the identity of the attackers may influence their motivation and how their motivation translates to what they want to accomplish: Is the ultimate intent espionage? Reconnaissance for later physical destruction? Compromise of systems or data for subsequent ransomware attacks? Doxing or public release of sensitive information? Still other purposes? Schneier explains, “You really can’t tell the difference between the governments and the criminals and the activists. It is their motivation. It might be what they do after the break in. But they can all use the same attack tools to accomplish their goals.” Additionally, countries that desire cyber surveillance or other effects but lack massive budgets for institutions armed with their own expertise “will just use criminal hacking tools to pretty decent effect. And they’re using it for state/governmental purposes even though it’s the same tools.”5

Since the tools and methods are identical and the differences between attackers mostly involve what happens once the intruder is inside the system, not only is the task of distinguishing between a criminal and a state incursion difficult, but there is also an opportunity for overlap and cooperation among different classes of adversaries. The European Union’s Institute for Security Studies reported that Russia augmented its existing institutional cyber domain expertise during the 2000s by quietly employing nonstate actors in order to “hide behind the mask of ‘plausible deniability.’”6 For years, cybercrime was facilitated by the so-called


Russian Business Network, which in the first decade of the 2000s formed a sanctuary for a number of criminal cyber activities. For countries lacking qualms about coopting criminal organizations, sluggish and incomplete law enforcement actions against cybercriminals who target foreign victims help passively nurture the development of hacking talent that can later be economically used against a target chosen by a state. This does not preclude the same state choosing at other times to enforce laws and pursue cybercriminals.7

Unattributed attacks clearly unnerve cybersecurity professionals and experts. Attackers who work in anonymity are reportedly more confident and more persistent in their efforts, emboldened either by the knowledge or the impression of their immunity.8 Meanwhile, disclosure of information about an attacker’s methods or other clues means the defender tips its hand to some degree about what it knows happened, bringing concerns about the costs as well as the opportunities of sharing incomplete clues related to attribution. Persistent reluctance can be seen in the limited information provided by Germany’s federal cybersecurity office regarding the kinetic-effect hack against undefined portions of an unidentified steel mill somewhere in that country in 2014. Outside cybersecurity researchers have noted that, in the absence of any official word about the attackers’ possible motivation, several theories such as “industrial sabotage for competing contracts or national interests, environmental extremists, or an individual or group testing out capabilities and tactics whether the physical damage was intended or not” could each be considered viable and plausible.9 After more than half a dozen years, the event is still shrouded in considerable obscurity.

ATTRIBUTION AND TIME

Recent additions to the literature offer challenges to an earlier mantra that attribution is difficult or impossible to accomplish swiftly. Counterinsurgency operations following the terrorist attacks of September 11, 2001, attracted focus onto nonstate threats, and the effects can be seen in the conclusions drawn regarding cyberwar as well as kinetic combat; the global and apparently instantaneous reach of actors via cyber propelled a sensed need for rapid attribution, tracking, and targeting of adversaries.10 Subsequent discussions would witness assertions such as “attribution after the fact is not sufficient—you need to know attribution in near real-time,”11 implicitly because the value of a defender’s reaction would rest on its ability to counter the intrusion, perhaps by halting a breach or by punishing an attacker. Sometimes, even in cases lasting weeks such as the 2007 DDoS attacks against Estonia, the time required for attribution extended even beyond “the duration of the war.”12 The arguably hyperbolic use of the term “war” aside, the desirability of more prompt attribution carries an


intuitive persuasiveness. No would-be target would want to be attacked and attain attribution data only to discover it to be without value because the forensics had been overtaken by events and a diplomatic or strategic window for effective response had closed. Some quite recent cybersecurity writing continues to argue for the importance of rapid attribution, at least in “instances . . . where time is a critical factor—for example if a hack-back needs to be conducted.”13

The value and opportunity afforded by rapid attribution may not be as extensive as has earlier been presumed. Risk management analyst Clement Guitton argues that attribution may be useful in the moment for foiling a DoS attack, since blocking traffic from the offending parties could frustrate such an attack on accessibility coming from known origins. Timely attribution could be valuable for law enforcement entities seeking to capture cyberattackers and bring them to justice, although Guitton has indicated that sufficiently timely attribution would not need to be instantaneous and that accurate attribution over time could be completely sufficient from a law enforcement standpoint. A similar message has also been heard from Iain Lobban, who ran the Government Communications Headquarters (a British counterpart to the National Security Agency) from 2008 to 2014. Even instantaneous attribution would not unlock a panacea solution to the problem of bad actors operating in the cyber domain, most of all when nation-states are involved. Even the swiftest attribution from a technical standpoint, immediately identifying what machines and what people are conducting an attack, would probably not reveal the sponsors of an attack.14 Knowing who is attacking—but not why—leaves critical questions unanswered and does little to help defenders decide how to deal with an attack that might be for espionage, for preparing a leak, for perpetrating ransom, or for paving the way for kinetic effects. The defender would still be in the unenviable and nearly hapless position of watching and waiting, or of guessing the motivation of the attackers and chancing the consequences of a given response.

ATTRIBUTION OPPORTUNITIES

A cadre of attribution advocates have insisted that “attribution in cyberspace is complicated, but it is not impossible, as is often portrayed.”15 In fact, given the expressly covert intent of those conducting cyber espionage, and the sometimes very intentionally “loud” and noticeable impact of actions that fit more readily into the category of “cyberwar,” it may be more possible to attribute actors in cyberwarfare than their spying counterparts. Some analysts have been able to trace some malicious activities through technical forensics, as for example when the DDoS attacks against Georgia in 2008 were found to use a codebase that had been seen solely in Russian-language botnets;16 the geopolitical context corroborated


this technologically based hypothesis. Studying the events surrounding the Stuxnet malware’s impact, Kim Zetter wrote that “a new world order” was catalyzed by the use of cybersecurity professionals to analyze malware that was not at that time yet suspected to be a politically propelled cyberweapon.17 Honeypots, used in human form as a recurring tool throughout the Cold War and in use among cyberspace defenders since the 1980s, remain useful for deception, both in distracting intruders from exploring legitimate data and especially as a tool for defenders to identify and trace unauthorized traffic. Georgia successfully utilized honeypots in its work to trace cyber hostility to Russia in 2008. Other honeypots, designed to resemble SCADA systems, were reportedly employed during the tracing of hackers ultimately identified as part of People’s Liberation Army Unit 61398, also known as APT1.18

Technical forensics can provide valuable data, but as Guitton has forcefully indicated, attribution is not simply a technical question, nor is the issue of attribution most meaningfully understood as a “problem.” Rather, in Guitton’s analysis, attribution should be understood as a process: rather than existing in a sort of binary limbo in which the world has either no idea who launched an attack or is totally certain of who did, analysts accumulate evidence that brings greater clarity about who perpetrated an attack and why. Even at this point, however, attribution is not simply accomplished; the ascribing of responsibility is a political decision rather than an automatic technology-based conclusion.19

Panayotis A. Yannakogeorgos, an authority on cyberwarfare and a determined advocate of establishing attribution norms through a particular brand of political decision, has frequently pointed out that the internet is descended from structures that had been designed as closed systems for communication and research among trusted participants. As a result, little thought was given—or, at the outset, required—regarding security. Yet the expanded connectivity has led to a far greater interest in leveraging its economic potential than to a recognition of the changed security context. He suggests that the responsibility for hostile actions in the cyber domain be associated with the entities whose resources make them possible. As a result, he argues that “the only way forward . . . is to hold states accountable for malicious activities either originating from or transiting their territories,” and that even “I[nternet] S[ervice] P[roviders] should be held responsible for malicious activities that occur within their systems.”20 Such a path would fundamentally reframe the character of the connected world. Presumably, this approach to attribution would mean that in a hypothetical recurrence of the DDoS attacks that were launched against Estonia in 2007, the culpability would extend not only to the people who planned the attack but also to the 178 countries in which


attacking devices were located. The vast majority of these are believed to have belonged to botnets, meaning that the devices’ users were not aware that their machines were participating in a DDoS. Nevertheless, the question would arise as to whether those 178 countries and the individual ISPs serving those users were to be held responsible. What repercussions would be deemed acceptable or appropriate? To a real extent, the perspective of blaming the unknowing proxies, the unintentional “host” countries, and the ISPs shares something in common with the argument explored in the previous chapter, that the victim is culpable when an attack occurs. Yannakogeorgos’s intent is that ISPs and countries around the world would respond by taking appropriate steps to exert meaningful control over whether attacks were perpetrated using their telecommunications infrastructure. Here again, the logic runs parallel with blaming the victim for cyberattacks: the costs of assigned culpability would come to convince the cyberspace players who are inadequately secured but lacking in malice to defend themselves better and bring about a greater security for the commons. The long-range objective is valuable, even laudable. The short- and medium-term consequences seem, arguably, harrowing in both cases, however. One-quarter of the DDoS bombardment against Estonia in 2007 reportedly came via devices in the United States. Would this approach to the attribution problem then identify the United States as one of the leading attackers against Estonia? How would such logic not come to that conclusion, despite the incongruity of such a concept?

Attribution can also involve more qualitative consideration of evidence, in concert with the deeply technical analysis and the political declarations discussed earlier. The playfully dubbed “Agatha Christie principle,” which counsels brushing aside confusing information that may be planted for distracting effect and focusing instead on the logical suspects, “probably is enough to counter cyber subterfuge and take care of the attribution problem,” according to Lucas.21 Analysis of openly accessible information represents another attribution option, particularly regarding hostile actions that are themselves conducted outside the cyber realm but are signaled online, thanks to the rampant popularity of social media during the early 21st century. Various claims by the different combatting sides in Syria’s civil war have been analyzed and debunked in this way, and the destruction of flight MH-17, a Malaysia Airlines Boeing 777 downed by a Russian-made surface-to-air missile deployed in support of pro-Russian separatists in eastern Ukraine, was confirmed through crowdsourced online analysis of social media postings by Russian and separatist personnel who did not realize how effectively disparate data points could be assembled to support attribution efforts.22 When patience is politically permissible, the technical and qualitative methods appear to offer a constructive opportunity for attribution. This is predicated, however, on a geopolitical patience that is willing to wait for evidence to be collected and assembled and is still willing to undertake relevant action when conclusions can be reached.
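Guitton’s “process, not binary” framing can be caricatured in code: indicators accumulate, each nudging confidence in candidate actors, and the final ascription remains a human, political judgment. Every indicator, actor name, and weight below is invented purely for illustration.

    # Toy evidence accumulator: attribution as a gradual process, not an oracle.
    indicators = [
        ("codebase previously seen in Russian-language botnets", {"Actor A": 0.4}),
        ("compile times match a UTC+3 business day",             {"Actor A": 0.2}),
        ("geopolitical context fits",                            {"Actor A": 0.2, "Actor B": 0.1}),
        ("TTPs mimic a known North Korean group",                {"Actor B": 0.5}),  # possible false flag
    ]

    def accumulate(evidence):
        scores = {}
        for _description, weights in evidence:
            for actor, weight in weights.items():
                scores[actor] = scores.get(actor, 0.0) + weight
        return scores

    for actor, score in sorted(accumulate(indicators).items(), key=lambda kv: -kv[1]):
        print(f"{actor}: cumulative weight {score:.1f}")

No threshold in such a tally turns a score into a public accusation; as this chapter argues, that last step is a political decision.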


ATTRIBUTION IS NOT A PANACEA

Throughout the discussions about attribution in cyberspace, the presumption is almost universal that achieving more reliable, faster, and more comprehensive attribution would represent a sort of panacea for the “problem” of attribution challenges. And this is the result of understandable reasoning; identification of bad actors and eradication of their activities would intuitively make the cyber domain a safer place. Indeed, the voices who raise concerns about increased attribution capabilities often tend to come from the margins, such as the “antisec” communities that express overall discomfort with any perceived threat to privacy, regardless of the benefits it might provide in terms of security. Two decades into the 21st century, improved attribution seems to offer benefits to potential victims of attack. Will that necessarily remain true indefinitely?

Although myriad bad actors have leveraged cyberspace for malicious purposes, clandestine users of cyberspace are not always those antithetical to democratic ideals. Indeed, although TOR anonymity technology today is frequently misused by criminals seeking privacy in cyberspace, TOR descends from onion-routing work begun two decades ago at the U.S. Naval Research Laboratory and supported by the Defense Advanced Research Projects Agency, with an eye toward enabling the use of cyberspace by individuals whose democratic sympathies would trigger their being targeted by autocratic regimes. Often cyberattackers are criminals seeking ransoms or spies stealing data, and terrorists including Abu Musab al-Zarqawi and Anwar al-Awlaki were traced and killed by counterterrorism forces because their frequent use of digital media led to carelessness that enabled allied forces to geolocate them. But attribution has been used by adversaries also. The clandestine Syrians who reported on ISIS atrocities through the citizen journalist group Raqqa Is Being Slaughtered Silently were traced when file-sharing practices allowed tech-savvy ISIS members to determine their IP addresses; the citizen-reporters were then brutally murdered.23

Even anti-security authors have opined that attribution is “why soldiers wear marked uniforms and battleships fly flags,” so that “a nation under attack” can know who is coercing it; how otherwise can it “make the necessary concessions if they don’t know who is attacking them?”24 Indeed, broadly speaking, the use of standardized uniforms is as old in Western warfare as the adoption of flintlock weapons or bayonets on small arms. From a historical standpoint, however, it is useful to pull the curtain back one more layer on this issue of attribution. Armies had several simultaneously aligning reasons to consider adopting uniforms from around the 17th and 18th centuries. Identification on a battlefield was one; this


related to problems of obscuration by the white smoke clouds from black powder firearms, but it also related to the possibility in some conflicts of personnel swapping sides rather than endure the confinement and health problems then common for prisoners of war. The widespread adoption of firearms impacted military training, state budgets, and the size of armies, and adopting uniforms helped address the potential problems that arose. Over time, battlefield conditions, technologies, and tactics all inspired shifts in the patterns of uniforms, from colorful and frequently ostentatious designs to consciously sedate colors with attention to ergonomics and camouflage. It is worth remembering that the period roughly aligning with the adoption of uniforms, from about 1700 to the present day, saw a rising arc in the number of personnel mobilized for wars and in the casualties suffered in major conflicts from the French Revolutionary era through the world wars. In short, uniforms did not make war “better.” Nor was the development of uniforms simply for the purpose of telling hapless defenders to whom they should offer obeisance. Rather, uniforms came to address a number of important needs of combatant organizations, and as these needs evolved, so did the physical technologies used to answer them. Therefore, the impulse to wish for an answer to the attribution problem may be understandable, but one cannot realistically presume that the “solution” to the attribution problem will necessarily simplify the challenges involved in cybersecurity.

An interesting trend has been developing with regard to the attribution of various forms of cyber misbehavior to Russia. Analysts have pointed to repeated and various forms of online chicanery, directed within the country and beyond its borders. These activities have mounted to the point that “the Kremlin is now suspected of being behind any major cyberattack that takes place in the West,” a sign to some journalists that “Russia [has] overused plausible deniability.”25 Others disagree, contending that perhaps “the Russians are becoming easier to see . . . because they want to be seen by those prepared to see,” deeming the coercive effect to be a valuable part of “their broader information operations strategy.”26

Still another point to consider involves false flag operations. Observers a decade ago noted that “because virtual attacks can be routed through computer servers anywhere in the world, it is almost impossible to attribute any hack with total certainty.” Indeed, Libicki points out that crafty actors might work to mimic known and recurrent bad actors as a means of tempting forensics research to misconnect dots, seeing tactics, techniques, and procedures reminiscent of a chronic aggressor and circumstantially concluding a new act to be yet another in the list of its transgressions. Thus, Libicki suggests, “better attribution by one may lead to worse attribution by another.” Mimicking the style of another frequent attacker could simultaneously allow cyberattacks to escape direct


consequence but could also cast suspicion and potential retaliation at parties that were, for once, innocent of launching an attack.27 Analysts have identified one case in which Russia has been suspected of launching an operation called “Olympic Destroyer” in such a way as to simulate North Korean cyberattack methods; in this way, the actual attackers could carry out their attack and also allegedly either foment greater tensions in the Korean Peninsula and distract U.S. policy makers from the rest of the world stage, or alternatively, if forensics experts did point to Russia, that government could question and “undermine public confidence in attribution” practices.28 One might conclude that attribution is simply not simple. Although it appears to be less impossible than was once commonly imagined, and although the need for swift attribution may be debatable, the challenges associated with attribution do not constitute a single “problem,” nor would the establishment of a clear path to attribution guarantee a safer security landscape.

HACKTIVISTS

The term “hacktivist” for a hacker-activist is not new, and examples of hacktivists date at least to the last days of the Cold War, with computer-savvy members of the antinuclear movement. The mindset of hackers feeds into the traits seen in hacktivism. One cybersecurity expert has explained that, for hackers, “there is a shared ethical and moral set of values; there is a shared almost historic culture of things that they find funny, and other people would not find funny.” Making the computer the portal for all pursuits, spanning work and play and socialization, connects with a “transcending of boundaries and becoming an iconoclast and applying that discipline of the iconoclast to your everyday life.”29 If this is the case for hackers, one might suggest that among hacktivists these characteristics are combined with some common sense that hacking should be used to pursue a shared vision of the world that aligns with those iconoclastic ideas and boundarylessness. Holding these views (which may vary too much to smoothly apply a word like “sharing”) is the long pole in the tent for hacktivism; considerable variations in actual hacking skills may exist among members. For hacktivists, the motivation is the commonality and the expertise is a variable.

Easily the most commonly identified hacktivist “group” is Anonymous, and it illustrates these dynamics of hacktivism. To a large extent, the political campaigns conducted by Anonymous grew from a zeitgeist focused on crude and often cruel “lulz” humor directed at individuals, societies, and institutions that Anonymous members deemed anathema to the world they wanted online.30 Anonymous targeted a range of entities as distinct as cybersecurity entrepreneur Aaron Barr, PayPal and Visa, the Church of

In part, this reflects a bewildering patchwork of antipathies for those deemed to have somehow run afoul of Anonymous’s disdain for censorship or tyranny—concepts it defines very broadly and in arguably contradictory ways. Attacks on Barr constituted retaliation against his work to demonstrate infiltration of the group in order to advertise his skills as a cybersecurity expert; PayPal and Visa cooperated with government requests to hinder the funding of the group WikiLeaks, which had been dumping hundreds of thousands of allegedly classified materials online for public view; the Church of Scientology had worked to quash a leaked public relations video that reflected negatively on the organization. High-level corruption in Zimbabwe was linked to the blood diamond trade, whereas YouTube’s content policies curbing the posting of copyrighted music and U.S. governmental policies for security were interpreted as censorship. Efforts to dox allies of the Zetas gang were connected with an attempt to secure a hostage’s release, and bloodthirsty activities in lands controlled by ISIS inspired Anonymous to mount an anti-ISIS campaign that included an “ISIS Trolling Day” to propagate ridicule and mockery of the jihadi quasi-state in December 2015. Journalist Parmy Olson, in her examination of Anonymous, explained that a driving motivation for Anonymous participants was “a desire, intensified by group psychology, to bully someone who seemed to deserve it.”31

Sometimes ridiculed by outsiders as possessing a “hive mind” or comprising a single individual named “Anonymous,” the entity’s behavior wandered nomadically among targets and fluctuated in intensity. Key players found that organizing a campaign was far more likely to happen successfully if a campaign was announced as an imminently established fact, rather than as a proposal. While proposing and discussing particular target candidates might intuitively seem to be a more open and “anti-structure” approach in keeping with a hacktivist culture, in fact, most members’ actions reflected an interest in being a part of something important, and Anonymous participants met suggestions or questions about proposed targets with unrestrained hostility and mockery directed against offending participants.32 Likewise, proposed campaigns to be directed against an individual’s personal enemies were frequently rejected as an attempt to coopt and misuse other people’s time. Those participants who wanted to collaborate before unveiling an attack found that organizing through Internet Relay Chat (IRC) was a crucial step, so that planning could be done discreetly in the IRC and a plan could then be announced through the more highly trafficked boards, rather than be quashed through ridicule in its early phases.33

Hacktivists are by definition self-motivated, and this represents a double-edged sword. Provided that hacktivists are sustained by some source of motivation (and provided that their material needs are at least in the short term met somehow), they can operate without being provisioned by the kinds of external material support that might be required for cyberattackers who work in the employ of some formal or quasi-formal institution. But, as a “nebulous force” seemingly with “a life of its own,” such a collective could be directed only in a short-term, limited, and narrow sense.

Persuading clusters of individuals to imbibe any specific, shared, and enduring sense of purpose can pose a challenge. This helps explain observations such as the “fundamental divide between those who believed in Anonymous’s roots in fun and lulz, and the new, activist direction” promoted by participants who wanted the schadenfreude mockery of targets to serve some larger purpose. Because membership in such an amorphous organization cannot be formalized, the entity’s population included numbers of “cyber punks” interested in leveraging an existing brand to suit their own diverse agendas.34

Anonymous participants have been quoted saying things such as, “sometimes I care so much about something, but the next minute I don’t,” or “sometimes something will happen and then you suddenly care about it. . . . It matters for thirty seconds.” This phenomenon parallels the impact of modern news cycles,35 which present a succession of rumored or semi-relevant topics whose significance disappears as successor topics are continually presented to replace them. Before the coining of the term “fake news,” this pattern was identified and dubbed “junk news,” alluding to the way it simulated news while constituting the information equivalent of junk food’s empty calories.36 Strategies are hard to build or sustain in this environment.

For those aiming to mete out damaging effects against targets, even on a short-term basis, harnessing the latent power of participants could seem valuable. Likewise, being a part of something larger and victorious carries natural cachet, particularly for people who feel an absence of purpose. Participation in Anonymous could lead to eventual disillusionment, but it also offered “a gateway to political activism” that some participants embraced.

Participants in Anonymous sometimes marveled at the inaccurate ways in which media outlets and authors have tried to describe the “group” as “hackers,” when its community resists such precise terms: it does not possess the organizational trappings implicit in a “group,” and while some participants have been skilled hackers, many of its “hacker” members appear to have been little more than undertrained script-kiddies.37

Tools such as the “Low Orbit Ion Cannon” (LOIC) provided a technical way for interested users to become participants in an Anonymous campaign. Free to acquire and easy to access, it could be deployed either in a manual mode, in which participants actively “fired off” junk packets at targets, or in an automatic mode requiring less engagement from an individual user. The real hitting power of Anonymous’s DDoS attacks, however, came not from the various sympathizers clicking buttons that read “IMMA CHARGIN MAH LAZAR” but instead from a small number of large-scale botnets, from which individual key players could vector DDoS attacks from 50,000 to 75,000 machines owned by people completely unconnected to Anonymous. Because “eager volunteers did not want to believe that botnets had more firepower than their collective efforts,” those Anonymous participants who took a hand in the entity’s public affairs often worked to invert the apparent impact of their attacks and insist that it was the “hive” of users rather than the botnets that was having the greatest impact, when it appears that the opposite was actually the case.38
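The disparity is easy to see in rough numbers. The sketch below is a back-of-envelope comparison, not data from the attacks themselves: the volunteer head count and per-machine request rates are illustrative assumptions, and only the 50,000–75,000 machine botnet figures come from the account above.

```python
# Hypothetical comparison of aggregate DDoS throughput.
# All rates and the volunteer count are assumptions for illustration;
# only the botnet sizes (50,000-75,000 machines) come from the text above.

volunteers = 5_000            # assumed LOIC participants online at once
requests_per_volunteer = 10   # assumed junk requests per second per copy of LOIC

botnet_low, botnet_high = 50_000, 75_000  # machine counts reported above
requests_per_bot = 10         # assume each bot matches a volunteer's rate

hive_rate = volunteers * requests_per_volunteer
botnet_rate_low = botnet_low * requests_per_bot
botnet_rate_high = botnet_high * requests_per_bot

print(f"Volunteer 'hive':  {hive_rate:,} requests/sec")
print(f"Botnet (low end):  {botnet_rate_low:,} requests/sec")
print(f"Botnet (high end): {botnet_rate_high:,} requests/sec")
# Even granting identical per-machine rates, a single key player directing
# a botnet outweighs the volunteer hive by an order of magnitude or more.
```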

The shortcomings inherent to such collectives do not prevent them from triggering concern within the institutions they sometimes target. Hayden has noted that, in addition to second-tier nation-states that lack kinetic weapons but might be interested in launching “a cyberattack as a bolt from the blue . . . the other group I’m really concerned about” is “the disaffected: the LulzSec, the Anonymous of the world.” The concern seems to stem from the idea that their aggregate cyberattack capabilities will increase over time and that hacktivist groups cannot be deterred or coerced in the same ways that might be available when dealing with a peer adversary.39 Nation-states possess vital and frequently identifiable assets, and these must be defended from attack. A threat to the so-called center of gravity, the lynchpin of society or government or armed forces, is a long-standing concern in the Western conception of warfare; but since hacktivist entities lack clear parallels to these centers of gravity, finding key people or assets to target becomes complicated.

PATRIOTIC HACKING

Patriotic hacking denotes politically motivated hacktivism aligned with particular nation-states. The turn of the millennium witnessed the first cases of patriotic hacking, which has grown to become a part of modern conflicts. A clear opportunity for nation-states is that patriotic hackers frequently advance the nation’s agenda and do so essentially without requiring much in the way of support, while at the same time providing a fig leaf of political deniability in the event of international backlash at their actions. The costs associated with patriotic hacking are that although the self-styled patriots are cheap (from the state’s perspective) to operate, they are less controllable than personnel working formally under governmental control.

From the start, patriotic hacking has included cases in which the participants’ patriotic fealty has not necessarily corresponded with geography. In the fall of 2000, teenagers located in the United States launched DDoS attacks against half a dozen websites affiliated with the Hezbollah and Hamas organizations, which had for years exchanged kinetic attacks with Israeli forces in the Middle East; Palestinian groups and their sympathizers reacted by calling for an “e-jihad” in retaliation. The cyberattacks that followed marked a “horizontal escalation” in the conflict as it broadened to include a new domain. Cybersecurity author Demchak observed that “the ease with which cyberspace enables long-distance access also makes organizing multiparty alliances and coalitions for attacks more likely.”40

Scholars have noted that, particularly in countries where political dissent or nonconformist behavior is interpreted as a serious threat to the state, hackers can direct their activities in unofficially authorized patriotic directions. Patriotic hacking (e.g., in Iran) provides evidence of loyalty to a government because nonconformist behavior is inflicted on targets that the host state disdains anyway. A state may even facilitate and encourage participation. Russia refused to cooperate in investigations into the cyberattacks against Estonia in 2007, and an official belonging to the governing United Russia political party publicly declared, “about the cyber attack on Estonia . . . don’t worry, that attack was carried out by my assistant.” Websites with Russian domain names appeared in the midst of cyberattacks against Georgia in 2008, and a Belorussian journalist writing for Slate reportedly described “how easy it was for an average Russian-speaking internet user to quickly acquire the tools necessary to throw an ‘e-Molotov Cocktail.’”41

In turn, a state may present patriotic hacking avenues as an alternative to punishment for nonconformists facing the legal ramifications of other domestic crimes. “Russian hackers convicted of cyber crimes are oftentimes given a choice to work for the intelligence services instead of going to prison,”42 and it is a short leap from this sort of effort to turn criminals into inside sources or official intelligence personnel to coopting criminal individuals and organizations for unofficial purposes.

Efforts at website defacement, DDoS attacks, and similar harassment and subversion are believed to be much more likely to be undertaken by patriotic hackers than by trained cyberwarriors.43 The reason is entirely practical: systematically trained cyberwarriors are skilled professionals who, in the event of a conflict with a cybered component, would be needed for more intricate—and arguably much more impactful—activities.

In recent years, scholars have pondered the status and legitimacy of groups such as patriotic hackers. The academics behind Tallinn Manual 2.0 have concluded that “inhabitants of unoccupied territory who engage in cyber operations as part of a levée en masse enjoy combatant immunity and prisoner of war status” in the event of capture, but consensus seemed to erode as to whether such a levée en masse could be deemed possible when cyberattack skills remain confined to “the cyber-capable members of [a] population” rather than being held more broadly.44 Perhaps facilitating tools, such as the Russian websites providing directions about participating in a DDoS against Georgia, or a state-sanctioned counterpart to Anonymous’s LOIC, widen the aperture of who might be computer literate enough to participate in such a cyber militia, and perhaps such considerations would impact the views of scholars examining combatant immunity issues. Combatant status, however, does mean that earlier claims of immunity as a noncombatant (assuming that they would have been respected earlier) could disappear.45

Inchoate distinctions among bystanders, sympathizers, patriotic hackers, hacktivists, and others are further obscured when the same people transition between categories. For example, before becoming the Anonymous participant known as Sabu, Hector Xavier Monsegur participated in a cyber counterattack against the People’s Republic of China in 2001 when Chinese patriotic hackers struck U.S. websites; following his capture, he reportedly cooperated with U.S. law enforcement.46 Patriotic hacking can include third-party sympathizers who may involve themselves in counterattacks against a state under cyberattack, especially since hackers’ greater sense of “borderlessness” can enable alternative and even overlapping loyalties. Nor would targeted states necessarily object in the short term to third parties assisting in their defense.47

U.S. officials, such as former NSA director and USCYBERCOM commander Michael S. Rogers, have enunciated a dim view of such ad hoc counterattacks. Although conceding the existence of historical precedents such as privateering, in which private citizens were authorized to engage in kinetic operations on behalf of a state, Rogers reported himself “very leery” of such a move in the cyber domain.48 Both Rogers and then-president Barack Obama alluded to such an environment resembling the Wild West. Certainly, the use of cyber militias, potentially seeded with cybercriminals who have been turned so that they do a state’s bidding while preserving a plausible deniability, interests some states.49

Simultaneous with the horizontal escalation in Israeli-Palestinian tension in 2000, patriotic hackers in India and Pakistan defaced each other’s websites. Chinese patriotic hackers’ actions triggered intermittent retaliation from patriotic hackers in the United States. As will be explored later in this book, a large number of parallel activities in the social media sphere have been witnessed in the past two decades as well. Rather than specifically invert international affairs, patriotic hacking (and its social media counterpart, which might be dubbed “patriotic trolling”) is adding layers of complexity.50

The Honkers Union of China (HUC) responded to the unexpected NATO bombing of the Chinese embassy in Belgrade in 1999 by more centrally organizing its patriotic hackers so that harassment attacks could be launched in retaliation against the United States. A mid-air collision in early 2001, between a Chinese jet fighter and a U.S. reconnaissance plane over international waters adjacent to China, triggered another wave of HUC-directed activity against the United States. Jason Healey has observed that although HUC “played the most powerful role in coordinating China’s cyber offensive” and the pace of cyberattacks “suggested that Chinese hackers generally respected the HUC’s proposed timeline,” even HUC was “ultimately unable to exert full control over Chinese hackers.”51

The regime has reportedly been alarmed by the fact that “the hive [in China] no longer roils at foreigners alone, but also at Chinese government actions that fall short of the most stridently patriotic standards.”52 There is no such thing as a free lunch, and the use of patriotic hacking is no exception. In view of these trends, and in light of metaphors about the Wild West, there may be good reason for reluctance regarding the use of unhired guns.

DOES STEM ALONE BLOSSOM INTO CYBER?

An increasingly holistic interpretation of the requirements and implications of cyberwarfare is changing ideas about who can be groomed into being an effective “cyberwarrior.” This is not simply a product of the increased availability of easy-to-access and easy-to-use cyberattack tools that are so often employed by hacktivists, patriotic hackers, cybercriminals, or others bent on defacement and vandalism for its own sake. Instead, researchers are suggesting that nation-states can make use of personnel whose formal background does not involve intimate work with the STEM fields of science, technology, engineering, and mathematics. “Other disciplines such as systems engineering, psychology, operations research . . . history, [and] foreign languages . . . are important for a well-rounded team.”53

There is an urgent need for more personnel. A study in 2016 reported an estimated deficit of 30,000 cybersecurity personnel in the U.S. government and a global workforce shortage of approximately one million. Furthermore, the pipeline of newly trained computer scientists suffered an enduring impact that was triggered at the same time as the dot-com crash at the turn of the millennium; peak U.S. graduation of undergraduate computer science degrees occurred in 2004. This leaves many organizations seeking to recruit and retain skilled personnel from within a too-shallow pool of talent and to provide additional training to personnel who have other forms of computer science expertise.54

However, some have recognized that the need for non-STEM cyberwarriors is not merely a consequence of a deficit of experts on the techniques of cyberattack or cyber defense. Rather, people from various disciplines bring different skills to the table. Conti and Raymond rightly note that “cyber forces . . . will flounder in a hegemony run by myopic kinetic operators,” and a diversity of perspectives and expertise areas will help hedge against a different form of floundering under a different set of myopic vantage points. Although “the intersection is messy,” cyber operations involve technology and intelligence as well as ramifications for kinetics. “Cyber conflict is inherently multi-disciplinary.”55

Panayotis A. Yannakogeorgos and John P. Geis II agree that “placing too much value on educational background could create a barrier to entry in the cyber operations field” by turning away “autodidactic” personnel interested in developing creative solutions to problems. “The range of aptitudes for cyber-related skills is broad.”56 The insinuation of computers into ever-increasing aspects of civilian life and security-related affairs will continually reinforce this trend, even as it requires the ordinary “non-expert” to grow more familiar with the use of devices and more comfortable with their ubiquitous presence in daily life. They assert that “a STEM background is not necessarily indicative of a good cyber operator,” since “creative thinking skills differentiate the best from the rest in cyber operations.”57 These traits may well coincide in the same individuals, but the point remains that one cannot be accepted at face value as guaranteeing the other. Conti and Raymond seem to agree, offering that “technology is important, but at the end of the day, it is the people who make cyber operations possible.”58

Indeed, the multidisciplinary character of cyber operations and its relationship with other aspects of conflict point to the value of ensuring that cyber operators encompass personnel with a range of skill sets, and that the wider world of military professionals keeps cyberwarfare and cybersecurity topics in its field of vision. For this reason, military professional journals such as Strategic Studies Quarterly, Air and Space Power Journal, and Parameters, among others, regularly consider cyber domain topics, and organizations have been set up in the 2010s to foster interdisciplinary examinations of areas relating to the cyber domain and its relationship with security affairs.59

Practice and familiarity can be encouraged through the inclusion of realistic cyberwarfare components in training programs, such as those at the National Training Center (NTC) at Fort Irwin, California. Integrating simulated cyber effects into the most realistic training scenarios can help nurture the development—and continual revision and refinement—of “a playbook . . . of common target types, ranges of potential effects, common delivery mechanisms, and resultant T[actics], T[echniques], and P[rocedures] that operational forces can rehearse in training and execute decisively when called upon.”60 In keeping with the historical example of combined arms warfare, and in keeping with past examples of the marrying of cyberattacks with effects in the physical domains, such thinkers see cyber operations offering more potential as part of an interdisciplinary and combined-arms context, and this points to nation-states using their resources to ensure that their cyberwarriors possess a combination of skill sets.

THE WESTPHALIAN RESURGENCE

Authors have observed that “the internet creates transnational communities.”61 But the world is still governed predominantly through a dynamic defined by nation-states and inhabited also by various intergovernmental or other organizations whose actual authorities remain deeply impacted by the will of nation-states.

The emergence and expansion of electronic interconnectedness has precipitated several reactions, which essentially fall into the following families: the interconnection imposes dangers and opportunities through the invasion or exportation of diverse ideas and ideals; the interconnection itself can or should be controlled in some way to conform to a Westphalian dynamic (or even to reinforce it); electronic interconnection represents an inexorable force that demands cooperation among geopolitical players. Although these represent quite different perspectives, states arguably behave in ways that reflect an affinity with more than one of these viewpoints.

If the internet catalyzes the fast exchange of potentially distant and different ideas, then it represents a tool with which to potentially reshape the world stage. This prospect can be alluring or terrifying—or both—depending on one’s standpoint. Democratic countries generally favor a freer exchange of ideas, and matured capitalist societies support commerce that values concepts like intellectual property; Singer and Friedman have noted that while the United States deems online crime “the Wild West behavior” in cyberspace, countries like Russia and China “view the Wild West behavior as the Western democracies trying to export their wild values.”62 Other authors have emphasized how a chaotic internet “can be politically threatening and easily exploited,”63 and since different countries maintain very different political, social, and economic values, even defining an acceptable status quo poses a formidably complicated challenge.

Determining a pecking order in the digital landscape can be a complex process, and while it overlaps with traditionally recognized concepts about which countries constitute major powers, it does not match neatly. Since U.S. researchers laid the technological and organizational cornerstones of the connections that became the internet, and since the United States is generally recognized as possessing the most formidable military and by most measures still constitutes the largest economy, one might naturally presume that it would be accepted as the most powerful nation-state in the cyber domain as well. Not every study reaches this conclusion, however, as there are several different ways to consider cyber power. Even in terms of connectivity, the United States has been identified by its own policy makers as standing behind geographically smaller but highly connected countries such as South Korea.64 From a warfighting perspective, some analysts have pointed out that leveraging key technologies inherently involves a degree of reliance on those technologies and thereby “bakes in” certain weaknesses. Ignoring the vulnerabilities of such reliance poses a further risk, whereas addressing reliance through efforts to ensure redundancy or other resilience can mitigate these problems.

Estimates about relative cyber power vary greatly. One divides countries among six unequal categories and announces the United States to occupy the highest tier without peer, followed by “a very small group” that includes the United Kingdom, Israel, Russia, and China.65 However, another recent examination places the United States ninth globally in terms of cyber power. According to this study, which emphasizes “that a state’s power is scaled by its vulnerabilities,” Germany, Japan, and the United Kingdom hold the first, second, and third place positions. South Korea, Canada, and France are found to form a close cluster beneath these, followed by Australia and Estonia, while the United States and Israel constitute another tier of power below them. These are followed by Italy, Turkey, Brazil, Russia, Saudi Arabia, and Argentina. Perhaps surprisingly, the People’s Republic of China lags in 18th place by this measure.66 North Korea, despite its bouts of aggressive activity in the cyber domain, such as the attacks against Sony in 2014, and despite the tiny attack surface presented as a result of the country’s extremely limited degree of connectivity, does not appear among the top two dozen nations.

Factoring cyber into a geopolitical power dynamic brings chaotic rearrangements. For example, the lists of leading economies and of countries with the largest military expenditures show a strong correlation. Among the world’s top 10 economies, 7 are among the top 10 countries in terms of military strength, and vice versa. Ten of the top dozen economies are among the 12 leading militaries. But introducing cyber power into the mix means that the list of common leaders among the three categories can shrink sharply, to 8 from the top dozen and just 5 from the top 10.67 Leaders who are interested in reshaping geopolitical dynamics are likely to interpret the resulting ambiguity in power relationships as an opportunity to redefine the status quo.

To some extent, disgruntled players who had formerly stood toward the top of the heap might be accused of holding “a helpless nostalgia for the easy pleasures of comparative advantage,”68 but to shrug off all unease about this upheaval misses a relevant point. In an ambiguous environment, “bad actors” in the eyes of one state can easily be “white hats” in their own homeland.69 Prosecution of illicit activities perpetrated via cyberspace but at a geographic distance can wither in the face of another nation-state’s noncooperation, and while governments seek to comprehend and curtail bad actions, norms of behavior serve as the thin reed to which nation-states must cling in the absence of universally respected laws.70

Governments, equipped with a traditional monopoly on the use of legitimate force, are also confronted by assertive or even aggressive action by nonstate entities through the cyber domain and by the specter of nonstate actors’ opportunities growing in the future. Even as early as the turn of the millennium, some scholars had noticed that a dismantling of concepts such as the American “tribal kin” formed from a panoply of ethnographic nationalities was being contemplated in order to “help usher in the Infosphere Age.”71 A frequent theme in Jason Healey’s examinations of cybersecurity involves the fact that “non-state actors, not governments . . . typically are decisive in cyber defense.”72

Thus nation-states confront a number of changes simultaneously, including that their own position relative to other nation-states may be shifting, that the extent of these interstate shifts is if anything ambiguous and dynamic rather than simple or clear, and that the stage is increasingly shared with nonstate actors who could formerly have been written off, unstated, as nonfactors.

WESTPHALIAN RESURGENCE FOR THE DIGITAL REALM

Unsurprisingly, chaos triggers conflict. Paul Rosenzweig has aptly noted that “the Westphalian image is one of conflict, rather than cooperation,”73 alluding to the tradition of respect among states for each dealing unmolested with its own internal affairs. This tradition was established with the Peace of Westphalia in 1648. It emerged because the alternative of encroaching on each other’s internal politics (at least nominally regarding religious confession) had sucked Europe into three decades of blood-soaked conflict. Arguably, the 2010s witnessed a resurgence of Westphalian sentiment in response to the perceived shortcomings and failures of an extended post–Cold War era that had been marked by internationalist efforts at cooperation toward establishing an enduring and prosperous global commons. Signs of Westphalianism in cyberspace have been identified by scholars Chris Demchak and Peter Dombrowski since at least 2010,74 and the intervening decade has witnessed continuing interest in establishing sovereignty in cyberspace. They have since predicted that “the process of establishing cyber borders and thus states’ sovereignty will be nonlinear, dangerous, and lengthy.”75

Several factors feed the hunger for cyberspace to take on a more Westphalian flavor. One is the need for security and the justifiable concern that security cannot be guaranteed through measures that cannot be verified. An international agreement to prohibit cyberweapons, for example, could be difficult to establish, due to controversy about what even constitutes a “weapon” in addition to concerns about how a ban would impact the obsolescence of whatever cybersecurity measures particular states might already have undertaken. Some scholars have attempted to address ambiguities by advocating agreements that would prohibit all cyberweapons except those that would be attributable and impose only reversible effects.76 To some extent, this type of advocacy merely replaces one set of obstacles with another, since the problems of attaining assent to a comprehensive ban would be replaced by challenges defining what “attributability” entails or what attacks are to be universally interpreted as having “reversible” effects. Even if terms could be satisfactorily agreed and a treaty signed, such an agreement would be expected to lack the kind of opportunities to “trust but verify” that had characterized the nuclear arms draw-downs during the bilateral breakthrough agreements between the United States and the Soviet Union in the last years of the Cold War. With no international agreement in place, and little serious prospect that one could be confirmed or enforced, both pundits and diplomats have questioned the value of such accords.77

Internet traffic speeds around the world, but users and devices have physical locations. Chokepoints exist as well, and in spite of visions of the Cloud transcending petty factors like physicality, “every data storage facility is located somewhere.”78 Countries that have been targeted by cyberattacks launched in concert with interstate political disputes might be expected to guard their infrastructure and to interpret sovereignty differently than states that have not suffered politically driven cyberattacks. DDoS attacks against Estonia precipitated not only international attention to the politicized and weaponized uses of cyberspace but also the formation of that country’s cyber defense units, composed of information technology volunteers under the auspices of the Estonian Defense League. The Iranian regime’s efforts to extirpate dissent from the internet discourse of its nationals illustrate how sovereignty can intersect with censorship,79 and the literature shows that Iran is not a unique case.

The People’s Republic of China began working early to establish a sovereignty-oriented set of regulations about proper electronic communications. In December 1997, its Ministry of Public Security banned information that the government deemed to “incite resistance” to the country’s laws or the socialist system, that prompts “hatred or . . . harms . . . unity,” that propagates rumors or “destroys social order,” or that advances a broad range of criminal activities. Subsequent years brought instructions that internet service providers must enforce government policies in order to remain in operation, and restrictions that blocked internet users in the country from accessing globally popular sites such as YouTube and Facebook, as counterpart sites tailored to serve Chinese users were developed.80 Although foreign companies are required to provide sensitive and ordinarily protected information to authorities as a precondition of conducting business in China, Chinese businesses are connected to the government apparatus so that their own corporate details can reportedly be withheld as state secrets.81

In a globally connected landscape, products are rarely “sourced” in a single country. Instead, they are very likely to be assembled from components built in another country, from materials that may in turn have been imported, and the assembled product may have been programmed with software designed by still other companies that may each have an international presence. The diverse origins of a single product can mean that several entities may have an opportunity to manipulate the functionality or fidelity of the end product. For military systems or critical infrastructure, this has clear security implications.
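One narrow mitigation suggested by this supply-chain picture is integrity verification of delivered components. The following is a minimal sketch under stated assumptions, not a description of any fielded system: the file names and digest values are hypothetical placeholders, and a real deployment would need the manifest itself delivered over a trusted, signed channel.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of expected SHA-256 digests for delivered components.
# The names and values below are placeholders, not real firmware hashes.
EXPECTED_DIGESTS = {
    "router_firmware.bin": "placeholder_digest_published_by_vendor_1",
    "plc_controller.img": "placeholder_digest_published_by_vendor_2",
}

def verify_component(path: Path, expected: str) -> bool:
    """Recompute the component's SHA-256 digest and compare it to the manifest."""
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return actual == expected

if __name__ == "__main__":
    for name, expected in EXPECTED_DIGESTS.items():
        p = Path(name)
        if not p.exists():
            print(f"{name}: missing")
        elif verify_component(p, expected):
            print(f"{name}: digest matches manifest")
        else:
            print(f"{name}: MISMATCH - possible tampering en route")
```

A check of this kind addresses only one link in the chain: it can flag a component altered in transit, but it offers no protection if the manipulation occurred before the reference digest was published.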

This awareness sparks a Westphalian impulse. It is evident in statements by successive Chinese presidents. In 2007, Hu Jintao announced that “the development of socialist culture, the security of information, and the stability of the state” were key issues that were each bound to “whether we can cope with the Internet.”82 In 2018, Xi Jinping told the 19th Chinese Academy of Sciences conference that “self-innovation is the only way for us to climb the world’s technological peaks” and that the country must nurture “a group of innovative leading companies with outstanding core technological capabilities and integrated innovation capabilities” to advance “the struggle of the Chinese nation.”83

Arguably, examples of foreign sovereignty efforts raise the pressure to respond in kind. When Obama administration officials reportedly sought to derail this trend by quietly alerting Chinese counterparts to upcoming doctrine, the results were disappointing: “without any guarantee of reciprocation, the briefing turned out to be a one-way exchange” in the spring of 2014.84 Oft-cited cases of industrial and economic espionage attributed to China have led to concerns that products ranging from components to software originating overseas could harbor anything from monitoring tools to instruments of sabotage. And Chinese practices, which are overtly aimed at maintaining “harmony” within China and ensuring that the regime shapes public discourse, have apparently not managed to prevent strident patriotic hackers from sometimes biting the hand that feeds them, by egging on Beijing to more assertive geopolitical actions than the regime plans. Other challenges to China’s “harmony” have also been suggested in relation to the burdens imposed by cybercrime,85 although this too is a term that carries different connotations in different countries. The Westphalian approach to cybersecurity has at times encouraged its own appetites, and perhaps for this reason, officials in China and elsewhere have simultaneously sounded other notes as well.

CONNECTING THE WORLD

Experts have explained why Chinese statements sometimes conflict with the Westphalian outlook that otherwise characterizes that country’s posture so well. Chinese activity via the United Nations “tempers the negative image of China as a hacking state.”86 An expansive (and ironically Westphalian) interpretation of hostile cyber activity and “hacking” is also promoted by Chinese officials when they seek to deflect accusations of their country’s APT espionage by pointing at everything from DDoS attacks on Chinese sites to TOR software used by pro-democracy dissidents as examples of China being the target of “cyberattacks.” At least one Chinese official is reported to have discreetly stated that openly opposing the multi-stakeholder tradition in internet governance is politically damaging. Shaping the discourse about multi-stakeholder concepts can be more advantageous.87

While scholars note that “weaponization of the cyber realm is already undermining international cyber stability,” some of the states to which notable cyberattacks have been attributed have been at the forefront of efforts to nominally prohibit aggressive acts in cyberspace. Its suspected cyberattacks against countries on its borders and evidence of its information operations notwithstanding, Russia’s calls for banning cyberweapons ironically span the past two decades, and Russia entered into the first-ever bilateral cyber agreement in 2013, when it and the United States vowed to exchange information between their Computer Emergency Response Teams. Russia has reportedly followed up on this agreement with interest in developing bilateral cyberspace agreements with countries in Western Europe, with Israel, and in East Asia.88

Concern about the danger of cyberspace attacks, including those waged by strange bedfellows who share enemies more than values, set the stage for interest in greater cooperation. NATO’s philosophy on cyberspace in the early 2000s has been characterized as “laissez-faire,” although the 2007 cyberattacks against newly minted NATO member Estonia jarred the alliance into greater attention and activity. Although the cyberattacks were not deemed to constitute hostility sufficient to trigger NATO’s collective security provision, NATO did quickly establish its Cooperative Cyber Defence Centre of Excellence, and in 2012 and 2014 it enunciated that the alliance considered international law to be applicable to cyberspace. Some scholars have added that cooperation among government, business, and civil society is just as vital as cooperation between governments and militaries in responding to cybersecurity concerns.89

Although skeptical voices have suggested that information-sharing regimes that are simple to establish are unlikely to bring meaningful benefits, other scholars have pointed to the value of information-sharing projects like the Structured Threat Information eXpression (STIX) and the Trusted Automated eXchange of Indicator Information (TAXII), which are meant to facilitate the interpretation of threat information and the sharing of this information.90 In a competitive and multipolar dynamic with myriad attackers and defenders (and the prospect that players may at different times embody either role), cooperative information-sharing efforts will be propelled by a shared sense of vulnerability to exploitation, while encountering potential obstacles where information-sharing may be interpreted as foreclosing some members’ subsequent options.

Although substantial U.S. government investment was a prerequisite to the work that would eventually bring about the modern internet by the 1990s, officials showed an early interest in divesting the government of much of its authoritative relationship to the internet’s governance.

The internet’s ancestor ARPANET “initially took shape thanks to well-managed state subsidies and collaborative research environments,” a fact that has led one student of the Soviets’ contemporary and failed computer networking programs to write that “the capitalists behaved like socialists while the socialists behaved like capitalists,” the Soviet effort having been undone by “unregulated competition among self-interested institutions” and apparatchiks.91 This interpretation confronts a common perception of the internet standing as the crowning achievement and proof of the capacity of a free society, and of the implicit theme that free societies out-innovate closed or autocratic ones. That interpretation is summarized and advanced by Alexander Klimburg:

Only liberal democratic societies were able to give birth and nurture the Internet into its young adulthood, and . . . the ascent of cyberspace and the current governance of the Internet is one of the most perfect proofs that political freedoms really do exist, and that they make a meaningful difference in the most practical and measurable of ways.92

The complex arena of internet governance invites various interpretations.93 It is quite possible to understand the emergence of the internet as reflecting U.S. support for research yielding progress toward a functioning interconnection of the various “internets” into an overarching “Internet,” and to see U.S. government activities in the 1990s generally moving toward a degree of privatization that divested the government of its previous authority over cyberspace. This ran parallel to widespread popular expectations, evident in government policy and even appearing in academia, that the new millennium would reveal a quasi-utopian democratic-capitalist era, in which globalism would advance hand-in-hand with ever-wider access to information. Shifts from direct U.S. government authority toward a multi-stakeholder system in which the U.S. government maintained an indirect presence can arguably be understood as both an attempt to short-circuit the prospect of a strong precedent of national sovereignty in cyberspace and also a signaling of the innovative benefits of a U.S. system that yielded the internet and opened it to the world. Indeed, instilling a cooperative zeitgeist in potential partners is likely to presuppose the modeling of these same values.94

SOVEREIGNTY, CENSORSHIP, AND A TRANSNATIONAL LANDSCAPE

The internet has persuasively been likened to “a gigantic, globally distributed, always-on copying machine,” and digital systems “collapse the distinction between transmitting, copying, and using information.”95 One might consider whether a similar collapse might be envisioned among the tracking of data, its potential exploitation, and its censorship. In a transnational environment, another question quickly follows: whether such tracking or censorship activities can be kept to one’s own sandbox, and what role sovereignty concepts have when sovereignty is asserted for the purpose of protecting populations and places, or of preventing activities from occurring to them.

China’s pursuit of social harmony has resulted in programs like the Golden Shield and its keyword filtering. This is consistent with predictions at the outset of the 21st century that “it will be some time . . . before the Internet becomes a political threat in China. In the near term, the Internet may in fact strengthen the [ruling communist] party.”96 This effect has not been accidental. Between its internet users and online minders, a cat-and-mouse dynamic has reportedly emerged, in which, for example, blocked terms such as “censorship” can be simulated through an ironic use of the word “harmony,” whereupon the newly coded word becomes at risk of blocking and yet another term is required in turn (a cycle illustrated in the short sketch at the end of this chapter). Conversations cease to be private when mobile phones are switched on to become eavesdropping devices. The regime’s system of automated monitoring to establish and track the “social credit” of citizens was begun in 2015, and although its full deployment appears to be lagging behind schedule,97 the emergence of a system for watching citizens and determining their political and societal reliability from the perspective of the governing party exemplifies how sovereignty can be interpreted to intersect with censorship.

Nor does the philosophy of surveillance necessarily stop at the nation’s borders, because effective tools are prone to find new uses—and new users. China’s espionage against Google is reputed to have included scooping up information about whether other nation-states had identified Chinese spies working overseas.98 According to Singer and Brooking, “the internet has not loosened the grip of authoritarian regimes. Instead, it has become a new tool for maintaining their power.” They cite specifically how Russian president Vladimir Putin reacted to successful Chinese censorship by arranging for Chinese experts to train Russian counterparts in devising similar programs.99 Moves ranging from the simple enforcing of an internet outage in Burma during its 2007 “Saffron Revolution” to work on the walling off of a “halal Internet” in some Islamic countries point to an interest in expressing sovereignty through sequestration.100

The rise of the giant internet corporations has added another set of players to the game of monitoring and moderating content. In keeping with the perspective of Jason Healey that nonstate actors have disproportionate capabilities and play an unusually prominent role in security affairs with respect to the cyber domain, Singer and Brooking suggest that “the entities best positioned to police the viral spread of hate and violence are not legislatures, but social media companies.”101 Indeed, the first decades of the 21st century have coincided with the birth and expansion of social media. Predictably, traffic has led not only to congestion but also to conflict.

Larger numbers of people posting—and viewing—vast amounts of content have rapidly fed the issue of online content moderation. Whereas countries such as China have imposed the moderation burden on internet service providers, in Western countries content has remained far less fettered. This has brought content controversy to the doorstep of the platforms themselves, and companies such as Facebook, Google, and Twitter have suddenly had to respond to a cycle of controversial content (and the controversial task of defining unacceptable content) until a two-layered system recently emerged. Ordinary users flag content they deem questionable, and paid personnel then sort through the mountains of potentially problematic content, ruling on each case. The majority of these personnel are reportedly not direct employees of the internet giants but are instead subcontractors living in India and the Philippines.102

On the other end of the spectrum from censorship is data monitoring, which can be profoundly lucrative. Conti has recognized that security-driven self-censorship “constrains the power that search [technology] offers,” and untoward activity either by an internet company or by a governmental entity “is equally concerning.”103 But whereas some governments have demonstrated an active interest in curtailing or corralling dialog about particular topics or access to certain websites, companies that collect and monetize data have an opposing set of still problematic motivations. The more information users provide, whether through posts on social media or as queries via search engines, the more companies know (or presume to know) about them; this information is then made available to third parties at a profit for the online company. Companies even tailor what they show users in order to respond to their conclusions about users’ tastes and interests. Thus, what is theoretically a wide world is potentially skewed by a shrinking aperture set by companies that try to “respond” to desires that their responses help shape. “We relinquish control over our destinies, one query at a time,” Conti notes.104 Perhaps even worse, the collection of that data, like the collection of any information online, becomes another resource awaiting exploitation not only by its collectors or its clients but by anyone able to pick a lock in cyberspace.

The policing of content can be thought of as having devolved from a state enterprise to a corporate one, and once in corporate hands it has been funneled toward economically expedient solutions. Determining the appropriate role of the state in regulating internet activity may depend a great deal on perspective and on which state is considered. The transferal—or rather the sharing—of these responsibilities with corporations complicates rather than simplifies questions about monitoring and censorship. What is monitored can be collected, what is collected can be stolen, and what is stolen can be used in pursuit of strategic goals. The rampant use of social media has increased the amount of data that enters this cycle, and its utilization for hostile purposes is already coming to prominence.
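The filter-and-evade cycle described earlier in this chapter, in which a blocked term resurfaces under a coded substitute such as “harmony” and the substitute is then blocked in turn, can be made concrete with a minimal sketch. The word lists and matching logic below are illustrative assumptions, not a description of the Golden Shield’s actual implementation.

```python
# Round 1: the filter knows only the literal term.
blocklist = {"censorship"}

def is_blocked(post: str) -> bool:
    """Naive keyword filter: flag a post if any blocklisted term appears."""
    text = post.lower()
    return any(term in text for term in blocklist)

print(is_blocked("this is censorship"))       # True: literal term caught
print(is_blocked("we have been harmonized"))  # False: coded substitute slips past

# Round 2: censors learn the code word and extend the blocklist,
# forcing users to coin yet another substitute in turn.
blocklist.add("harmonized")
print(is_blocked("we have been harmonized"))  # True: the code word is now caught
```

The sketch also shows why the cycle never terminates: a static list can only ever chase the vocabulary users have already abandoned.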

CHAPTER 5

Form and Function of Social Media

FACEBOOK

Perhaps the most iconic of the social media giants is Facebook. Begun in 2004 by a small circle of Harvard University undergraduates, it has grown rapidly, reaching a user population of 2.2 billion by the start of 2018. Facebook is the leading social media outlet across all of North America, as well as most of South America, Europe, the Middle East, South Asia, the Pacific Rim, and portions of Africa. “In countries like Thailand and the Philippines, Facebook literally is the internet.”1 Its early policies made the application available to an ever-growing pool of potential users: first to Harvard students, then to students from other colleges, and ultimately to any computer user claiming to be at least thirteen years old. But still more important for Facebook’s rise was its apparent ability to meet a human yearning for connection.

Seeming to answer human needs for connection forms the bedrock of a social media app’s power to attract users. Arguably, the multitude of social media apps provides linkage rather than connection. As a result, these tools can beguile and addict, but they are unlikely to completely satisfy. The significance of this distinction lies in the fact that this dynamic offers both a short-term power of hollow attraction and a longer-term vulnerability, which is often glossed over through novelty. Changes in the app’s format refresh the cycle of interest and simulate new forms of connection. For Facebook, these adjustments have included making the status function more flexible (2007), adding a private chat function (2008), developing a patented newsfeed tool (2010), and introducing different emoticon-based categories of reaction to other users’ posted statements, images, and videos (2016).2

Yet relatively personal connection, compared to other social media alternatives, constituted the mantra of the rising giant. Among the original developers, Mark Zuckerberg quickly became the company’s face and leader. He told investors of the newly tradable Facebook that the app had been “built to accomplish a social mission, to make the world more open and connected.” Specifically, “our mission . . . starts . . . with the relationship between two people.”3 Research identifies users interpreting and employing the app in that way. Lisa Ellen Silvestri’s study of U.S. military use of Facebook noted that although Facebook is predominantly a one-to-many technology in which the user’s activities are visible to an array of friends and other users, when deployed soldiers and Marines wrote on the Facebook walls of loved ones’ pages, they tended to do so with the intent of talking directly with one specific person.4

The aura of cozy intimacy can be overestimated, but it seems to be an important intermediate objective from the perspective of Facebook’s creators. Zuckerberg has claimed that “we don’t build services to make money; we make money to build better services.”5 A more telling claim might be that providing ever-more attractive and magnetic services—all the while retaining the sense of cozy intimacy—helps ensure that large numbers of app users will continue or begin using the services, and that the data derived from this usage generates the wealth that allows a company to offer services to 2.2 billion people free of charge while simultaneously generating nearly $56 billion in revenue in a single year.6

The emphasis on openness and information necessarily, if quietly, prioritizes the company’s own access to user information as a precondition of its model. Openness and access by other app accountholders, or, as of 2009, by anyone online searching for basic data on an accountholder’s page, has simply been the more visible counterpart to a trend in which Facebook for years steadily eroded privacy settings and arranged for increasing amounts of access to user data.7 Discoveries about the company’s utilization of this access and about the power of its sensitive algorithms had inspired concern for many years before disclosures such as the 2016 leveraging of data by Cambridge Analytica. Facebook’s immediate predecessor application, Facemash, ran afoul of Harvard University’s administrative board for its disregard for the privacy of people whose information had been appropriated by the app.8 One report noted that Facebook had in 2014 “selectively manipulated the newsfeeds of 680,000 users, showing them either happier or sadder status updates” to confirm that happier news tended to cheer up users and that more solemn news topics had an opposite effect.9 This example parallels other online tools’ misuse of the power of their algorithms in order to manipulate users and monitor the effects.10 Such activities, notably but not exclusively conducted at Facebook, reflect an alertness to the opportunities for influence that social media can wield and a willingness to engage in influence, at least for the purpose of exploring its power.

REPURPOSED FACEBOOK FUNCTIONS

Traditionally, form follows function. Less-than-charitable interpretations include the possibility that the form allows the leveraging of data because that was a true purpose. Even accepting Facebook’s claims that the company and its website exist primarily to enable social connection, influence and monitoring by the app’s creators are at least examples of the function of social media apps being nested within their form. Many users also adopt and massage other functions to fit the contours of social media forms. Facebook is ubiquitous, seemingly free, easy to use, and almost globally accessible. These traits invite its use in the context of conflict and political struggle as well as in its originally envisioned environments of dialog. Internet access is not a precondition for politically motivated social mobilization, but it does facilitate it; Facebook has already been involved in a variety of geopolitical contexts, and it has already gone to war multiple times.

The war between jihadi terrorists and the U.S.-led effort to eradicate terrorism has coincided with the birth and adolescence of social media. Throughout most of the Iraq war and the postwar stability operations from 2003 to 2011, and for much of the period of U.S. military activity in Afghanistan, U.S. government policy on social media struggled with operational security and other concerns stemming from its use by deployed or stateside personnel, particularly in connection with incidental use of social media on the institution’s own devices. Facebook nonetheless offered an enticing opportunity to write to the folks back home, in a way that evoked soldiers’ letters home in previous wars but that also possessed an instantaneousness unprecedented in any earlier conflict.

In addition to serving as a platform for the deployed personnel of nation-states writing home to their families from the battle zones, Facebook hosted other accounts that promoted the recruitment and propaganda messaging of jihadist movements. These efforts appear in greater detail in the next chapter, but one of the figures who notably applied the app in these directions was Anwar al-Awlaki. A U.S.-born cleric with connections to several terrorists, including three of the September 11 hijackers, Awlaki relocated to Yemen and promoted terrorism until an unmanned aerial vehicle attack killed him in 2011. Creator of the English-language radical online journal Inspire, he also mobilized followers through an active Facebook page.11

Nor has the struggle between jihadist terrorist entities and the planet's moderate and democratic geopolitical blocs been the only space in which Facebook has been adapted for war. In 2014, Russian-backed separatist paramilitaries followed up on Russia's recent absorption of Ukraine's Crimean region by detaching Ukraine's eastern borderlands. This forced a difficult set of challenges on Kiev, whose principal sources of security were an underequipped military and a twenty-year-old agreement known as the Budapest Memorandum on Security Assurances. This pact had arranged for Ukraine and two other former Soviet republics to divest themselves of their Soviet-era nuclear arsenals in exchange for sovereignty guarantees from Russia, the United States, and the United Kingdom.

With the pact undercut by the eastern security breaches and the country's military reeling, citizens' groups within the country began to organize to provision fighters who would battle the separatists. As a platform tailored to communications between friends and built for one-to-one and especially one-to-many communications, Facebook proved easily adapted to the purpose. The app's global reach allows these groups' messages to spread beyond the country's frontiers, and the groups reportedly receive some donations from abroad. In a country where many people suspect that donations do not find their way to intended recipients, functionalities such as photo posting allay concerns that donated clothes or equipment might not reach the front: user-donors can actually see the boots or coats or equipment that they gave being worn and used by the nationalist troops. Facebook is used not to launch attacks directly but rather to sustain kinetic fighting, and soldiers in the fighting have described the organizers of the Facebook supply effort as analogous to combat medics serving on the battlefield in noncombat roles.12

According to journalist David Patrikarakos, in the case of militarily less powerful states, online mobilization "endows citizens with the ability and the means to act in dynamic ways their government cannot."13 Patrikarakos has argued that social media is changing the nature of warfare, and while it may be a stretch to assert that the nature of war is itself changing, the character of conflict is clearly being impacted by the ability to mobilize quickly and across considerable geographic distances. These opportunities for mobilization also raise questions about the relationship between soldier and civilian. Singer and Brooking have asserted that social media campaigns constitute a new form of "LikeWar," and that "every LikeWar is a battle for attention with a specific objective in mind," in which a command of "conveying narrative, emotion, and authenticity, melded with community-building" ensures virality and relevance and ultimately helps deliver victory.14 The relationship between evoking an emotional reaction and attaining the viral qualities that create (or, likely, simulate) relevance has been identified in studies and is actively pursued by groups seeking to bring geopolitical change.


Facebook's own study that manipulated the proportion of positive and negative news stories encountered by selected users was in fact anticipated a year earlier by research by Chinese data scientists. Their 2013 study found that far and away the most evocative emotion, the one most likely to carry and resonate, was anger. Contemporary research estimated that 5%–6% of Facebook accounts open at that time were themselves fake. Researchers have struggled to trace the leveraging of anger-inducing content to form viral and destabilizing messages; findings have included conclusions that more than 5% of the total Facebook user population was exposed to Russian disinformation related to the U.S. elections of 2016, and that nests of fake news creators have even metastasized into subcontracting and outsourcing of fake news stories, propagated by accounts run by both adults and children located not only in Russia but also in places like the small Balkan state of Macedonia.15

RESPONSE TO REPURPOSING

Eric Schmidt, serving as executive chairman of Google's retroactively constituted parent company Alphabet, explained that "it is inconceivable to any of us . . . that you would use Twitter, Facebook or YouTube for these videos that are designed to terrify. . . . We . . . did not understand that you could sufficiently terrorize people using the online media to recruit people" to terrorist movements.16 Facebook built an admirably robust and user-friendly platform whose user base to date rivals the total population of the People's Republic of China and of India combined. The expectation that no one would launch a serious effort to repurpose the platform for a strategic purpose certainly appears, in retrospect, utopian, even naïve.

Unfortunate as it may be, the launch and growth of social media giants like Facebook open new dimensions of opportunity for communication, and when people communicate, they do not necessarily say only the things that their masters want to hear. History indicates that new technologies will be used for strategic aims, barring extraordinary and conscious efforts to preclude their use. In fact, earlier technologies that had been heralded as bringing connection, abundance, and understanding to humanity have repeatedly been used for violence and political struggle. Radio technology that could connect societies was soon used to disseminate propaganda. Chemistry that alleviated hunger was quickly repurposed to facilitate explosives production in wartime. Even maps and cartography have a long history that is intertwined with warfighting.

No one wants to lose a war, because the price of defeat can be devastating. There is, as a result, a powerful impulse to use what is available to assist in times of struggle. Arms limitation efforts and law of armed conflict advocates face the uphill climb of combatting this impulse with respect to particular kinds of weapons or actions. But expecting social media not to become embroiled in conflict is to ignore how the impulse to avoid defeat has engulfed myriad technologies across history.


Nor was the weaponized utilization of Facebook entirely a bolt from the blue. An early and evocative misuse of social media involved the cyberbullying of thirteen-year-old Megan Meier on MySpace, which ended in her 2006 suicide and an enormous blow to the reputation of what had been a first-generation social media icon.17 Various regimes have exhibited concern that dialog on Facebook might constitute, from their own perspectives, dangerous speech; as a result, Facebook access has been blocked at various times in countries such as China and Turkey, and its use (alongside YouTube and Twitter) by discontented Tunisians in January 2011 helped galvanize what would become known as the Arab Spring. As the protests spread, Egyptian authorities, unable to filter network traffic and block Facebook, opted on January 27, 2011, to eliminate all internet activity. This has been described as "the first time in Internet history that a state purposely went offline to halt information flows."18

Historical perspective provides important reminders that social media may accelerate or magnify rather than ipso facto create seemingly revolutionary dynamics. Michael Neiberg, a noted scholar specializing in World War I, observed a century-old case that matched social media mobilization in its rapid ferocity. The Shandong Decision at the post–World War I Treaty of Versailles ceded Chinese territory that had earlier been controlled by Germany to Japan, which had captured it in the early phases of the war. The brazen quashing of Chinese self-determination preferences at a conference that ostensibly valued that tenet meant that the decision was "incredibly unpopular everywhere except Japan." Thanks to rapid communication by wireless telegraphy, people in China learned within hours of a decision reached just west of Paris, and violent protests known as the "May 4th movement" ensued, while Chinese expatriates in Paris conducted simultaneous nonviolent demonstrations. As Neiberg observes, this visceral and mass-organized reaction by Chinese in Paris and in China happened "without WiFi, without Facebook, without #May4th. And it worked."19 Social media can profoundly impact social mobilization, but such mobilization occurred before online platforms appeared and can happen without them.

Silicon Valley magnates perhaps had little sympathy for the tottering autocratic regimes awash with youthful networked protesters, but a more earnest examination of the Arab Spring might have highlighted that social media could be used to support political unrest and that future targets might not always be so anathema in the eyes of social media's own moguls. Facebook itself was apparently caught by surprise by the complexity involved in something even as deceptively straightforward as defining and moderating inappropriate sexual imagery on its site.20


Facebook's creators deliberately intended the platform to be accessible and effective, and these traits are just as much of interest to extremists as they are to the high schoolers or undergraduates or their parents who log in to comment and post with friends. Facebook has faced a barrage of lawsuits from the families of people killed by extremist attacks in Gaza and Israel, citing the unobstructed leveraging of Facebook for disseminating propaganda as constituting assistance to terrorist groups.21 These lawsuits have repeatedly been dismissed, but their existence draws attention to questions about how platforms' use by third parties might impact world affairs.

Warning signs abounded prior to the Russian strategic activities across Facebook and other social media in 2016, and the social media industry's evident unpreparedness seems to reflect an inability to extrapolate from past events or to anticipate probable threats. In the words of Singer and Brooking, Facebook founder Zuckerberg reacted to the rancor directed at his company after 2016 with "essentially a corporate version of psychiatrist Elisabeth Kübler-Ross's five stages of grief." Zuckerberg's assertion that social media manipulation of the election was "a pretty crazy idea" marked the denial phase. Lashing out in anger followed, before promises emerged that the company would redouble its efforts to quash manipulation and misinformation on the platform. Nine months elapsed before the company formally identified the Russian government as the source of a misinformation campaign in the run-up to the national elections conducted the previous fall.22

A variant of this pattern of denial and delayed, ostensibly surprised contrition would be repeated when it was discovered that mountains of data had been quietly collected and leveraged by Cambridge Analytica, amid indications that Facebook had failed either to stop these activities or to publicly disclose and condemn them until they were uncovered externally. Schneier notes that Facebook responded with "lots of strong words," but "nothing has changed about Facebook—and probably won't."23 The process of external disclosure, denial, and shocked response and promises to improve is coming to seem cyclical in the eyes of analysts observing Facebook as it responds to unsavory uses of its platform.

Whatever damage this pattern exerts on the Facebook brand, it has still coincided with growth in the user base. Facebook remains the leading social media platform across most of the countries on the globe. And despite the muted misgivings of government officials about a corporation taking on powers traditionally held by sovereign states, efforts to moderate content align to some degree with the kinds of influence over speech and discourse that states once possessed, or at least shared with more tangible media institutions. Facelifts and new features for adding content, comments, and reactions continue to appear, in a bid to retain the sense of cozy intimacy that keeps large numbers of users—particularly younger users—on the platform. According to some sources, this is the battle that Facebook is losing.24 That, however, is a long-term battle, and Facebook may well have the time and the assets to reassert itself in that struggle.


STRATEGIC TWEETING

Whereas Facebook's epic growth derives from a bandwagon effect and the perception of community among users connecting with friends, Twitter relies on brevity and rapidity for the dissemination of compact announcements. A landmark study of the weaponization of social media notes that Twitter "focuses less on bringing people together and more on bringing ideas together."25 Tweets are "just little blurbs of information that you can put out there, instead of having to have these exhaustive discussions on Facebook."26 Twitter's reported active user population stood at 330 million in 2018, and Japan is the only country in which Twitter actually exceeds all other social media rivals in popularity. In terms of size as well as earnings, Twitter lags far behind Facebook. But it draws considerable attention, which magnifies its impact in global discourse. One researcher has explained that "Twitter may not always be the best medium for receiving a clear and impartial picture of events during a conflict, but it's the perfect way to get your version of events out immediately." The result is "effect without cause,"27 which is tantamount to an ability to reshape the contours of various forms of discourse (including debates) continuously. That power is extremely useful in wartime propaganda as well as in other information operations.

Both with respect to physical conflicts and to less overt strategic struggles, analysts note similarities in how information operations utilize social media such as Twitter. Steering online discourse takes various forms, known as trend distribution, trend hijacking, and trend creation. Trend distribution refers to inserting messages into several existing lines of discourse in the hope that they will attract attention; an example would be tweeting a flurry of political statements or images, each tweet carrying a hashtag selected for its popularity rather than its pertinence. Trend distribution could be thought of as the Twitter counterpart of an earlier social media phenomenon: photobombing people posting selfies. The price of trend distribution is low, but the likely payoff is low too. Trend hijacking and trend creation deliver higher payoffs but demand more effort. Hijacking involves selecting a popular trend and posting so many tweets about a particular topic (unrelated to the trend) that the original (topical) traffic is washed out. Hijacking requires much more effort than distribution, but successfully hijacking a globally popular trend can jar users and provoke wider attention. Creation involves launching a new and pertinent hashtag and elevating it to the prominence of a trend through a massive volume of traffic rivaling the size of existing popular trending topics of discussion. Given the volume of traffic required for hijacking and creation, these two approaches are expected to rely heavily on the use of autonomous programs called bots that simulate human accountholders.28


Distribution, hijacking, and creation are not exclusively undertaken for strategic purposes, but they do encompass the prominent approaches taken by information operations planners working to leverage social media. They and other actors have embraced the opportunities for dialog manipulation, dubbed "commanding the trend." Evidence can be seen in the fact that Twitter has estimated that bots accounted for 5% of accounts in 2014 and 15% by 2017.29 This is more problematic than it may sound: the proportion of bots in the Twitter landscape tripled at a time when overall user growth registered a mere 15%–20%.30 Light calculation reveals that bots would account for nearly 90% of the overall growth in Twitter accounts between 2014 and 2017. Total active accounts in 2017 reached 328 million (up about forty million from three years earlier), but the rapid increase in bot population would mean that nearly thirty-five million new bots were launched in that period to supplement the approximately 14.5 million already present.
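That "light calculation" is easy to reproduce. The following is a minimal back-of-envelope sketch in Python, using only the estimates cited above; the 2014 total is inferred from the 2017 figure and the stated growth, and the variable names are illustrative rather than drawn from any dataset:

    # Back-of-envelope check of the bot-growth estimate described above.
    # All figures are the approximate estimates cited in the text.
    total_2017 = 328_000_000            # reported active accounts, 2017
    growth = 40_000_000                 # "up about forty million" over three years
    total_2014 = total_2017 - growth    # inferred 2014 total (~288 million)

    bots_2014 = 0.05 * total_2014       # Twitter's 5% bot estimate for 2014
    bots_2017 = 0.15 * total_2017       # Twitter's 15% bot estimate for 2017
    new_bots = bots_2017 - bots_2014    # bots launched over the period

    print(f"bots in 2014: {bots_2014 / 1e6:.1f} million")        # ~14.4 million
    print(f"bots in 2017: {bots_2017 / 1e6:.1f} million")        # ~49.2 million
    print(f"bot share of total growth: {new_bots / growth:.0%}")  # ~87%

The result, roughly 87%, matches the "nearly 90%" figure: on these estimates, almost all of Twitter's apparent account growth in those years consisted of bots.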


Among industry leaders in social media, Twitter long proved particularly reluctant to enter into the thorny issues involved in restricting content.31 Notably, these years that saw an explosion of bot accounts (and apparently a virtual flatlining of growth among actual human users) coincide with two infamous waves of Twitter being leveraged for strategic purposes.

TWEETING A CALIPHATE

ISIS aggressively used Twitter to support its military expansion, in keeping with the movement's effort to use social media as a force multiplier in both a figurative and a literal sense. Narratives vary in tracing the origin of the ISIS movement; some evidence points to its having broken away from the al-Qaeda lineage to become the terrorists known as al-Qaeda in Iraq during the violence following the toppling of Iraqi dictator Saddam Hussein, but other interpretations suggest that the terrorists who would establish ISIS were more like a patronized franchise of al-Qaeda than a breakaway offshoot. Predictably, various events and dates potentially mark the birth of the terrorist group. No such chronological ambiguity attends the group's violent invasion of social media and of the wider geopolitical stage.

ISIS unsheathed its online propaganda app, the "Dawn of Glad Tidings," to coincide with its fighters' approach to Mosul, Iraq's northern second city, in early June 2014. By any conventional measure, the ISIS force should not have been victorious. Although vicious and violent, its inchoate force numbered perhaps 1,500 personnel, while the paper strength of the Iraqi army forces around the city stood at 25,000. Iraqi numbers are reported to have been previously inflated, so that as much as 60% of this force may have consisted of nonexistent phantom troops meant to pad unit payrolls; nevertheless, whether Iraqi forces consequently outnumbered ISIS by 17-to-1 or by "only" 7-to-1, the attackers were heavily outnumbered in the face of a well-equipped defending force.32 Long-standing military convention holds that an attacker should possess a 3-to-1 advantage over the defender, yet the outnumbered advancing ISIS fighters comprised barely 3% of the total that such convention would suggest should be arrayed to capture the city. But the defenders appear to have panicked and been routed, because within days ISIS controlled the city.
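The lopsided odds can be reconstructed with the same kind of back-of-envelope arithmetic, again in a minimal Python sketch using only the text's own estimates:

    # Rough reconstruction of the Mosul force ratios discussed above,
    # using the text's own estimates.
    isis_attackers = 1_500               # estimated ISIS force
    iraqi_paper_strength = 25_000        # defenders' paper strength
    iraqi_actual = 25_000 * (1 - 0.60)   # if 60% were phantom troops

    print(f"paper-strength odds: {iraqi_paper_strength / isis_attackers:.0f}-to-1")  # ~17-to-1
    print(f"deflated odds: {iraqi_actual / isis_attackers:.1f}-to-1")                # ~6.7-to-1

    # The conventional 3-to-1 rule implies an attacker roughly three times
    # the defenders' strength; ISIS fielded a small fraction of that total.
    required_attackers = 3 * iraqi_paper_strength
    print(f"ISIS share of 'required' force: {isis_attackers / required_attackers:.0%}")

Against the paper-strength garrison, the last figure works out to about 2 percent, broadly consistent with the text's characterization of "barely 3%."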


The Dawn of Glad Tidings app helped trumpet the capture, and in advertising ISIS successes the propaganda campaign helped make the terrorist group's victories seem inevitable. The app cleverly transformed the devices of sympathizers into bot machines, because the permissions granted through its loading gave ISIS the opportunity to add those devices to the throng of machines that deluged social media with reports of ISIS military success and with signals of the savage brutality meted out to ISIS's unlucky enemies who failed to escape its grasp.33 Although the app was closed by Twitter within days of the city's capture, the Dawn of Glad Tidings provided a brief but crucial chance for ISIS to flood the global commons with its message in the form of thousands of tweets daily.34 Panic wrought on the enemy served ISIS to tragic effect, as Mosul would not be wrested back from its hands until the summer of 2017, after hard fighting.

Termination of the Dawn of Glad Tidings app did not bring an end to ISIS's online information operations. Indeed, researchers believe that at least 46,000 Twitter accounts supported ISIS in the months following the app's closure, although not necessarily all at one time.35 Brookings Institution researchers believed that a small subset of these accounts was controlled by a cadre of "hyperactive users, numbering between 500 and 2,000 . . . which tweet in concentrated bursts of high volume."36 This is in keeping with patterns described in a Strategic Studies Quarterly report that outlined how a "small core group" coordinates a message that is then magnified by a bot network with accounts that may follow each other as well as the core group of human actors, and that below these bot accounts is still another layer of human users. Members of this last group are unlikely to follow each other's accounts but will follow the accounts of the core group, and they will sometimes additionally follow the accounts of the bots above them, perhaps unaware that the bot accounts do not also belong to human users who are true believers in the movement.37

This pyramidal arrangement of social media accounts can greatly facilitate the aggressive use of social media, particularly the hijacking and creation of hashtags for strategic purposes. ISIS trend hijacking was most vividly on display during the 2014 international soccer championship games, when ISIS accounts moved to dominate #WorldCup2014.38 Soccer fans interested in the games would be abruptly confronted by postings crafted by ISIS, such as a soccer game being played by ISIS personnel—using a severed human head in place of a soccer ball. The implicit message of such explicitly graphic imagery was that ISIS was unstoppable and inescapable. Despite being traditionally used as an escape from other controversies and concerns, sports was being invaded by violence and extremism.

Concurrent with hashtag hijackings were other forms of propaganda, such as the intermittently published but visually slick English-language ISIS journal Dabiq. Named for the site of a prophesied battle in Muslim eschatology equivalent to Armageddon, the journal appeared in a dozen issues, each centered on a jihadist theme and released roughly every month or two.39 Presenting overlapping messages in different formats or seemingly from different sources is a time-honored tool in propaganda, due to the reinforcing effect it can exert. A narrative spreads across social media because it matches aspects of existing understanding, it is promoted by a cadre dedicated to its popularization, and that cadre uses human and automated assets to produce a magnifying echo effect.40 Tweets foist graphic images of violence; the journal elaborates on extremist philosophy; other tweets announce the release of the journal; still more add further graphic images or other information, either to normalize the extremist viewpoint or to shock outsiders, or to thread the needle of accomplishing both aims simultaneously.

Recruitment efforts insidiously played on the yearning of vulnerable potential recruits for human connection and for a sense of purpose. ISIS operatives saw benefit in inserting less graphic, and sometimes even seemingly whimsical, images of their personnel. This helps explain the thinking behind the "Cats of ISIS" images of apparently playful housecats amongst ISIS fighters and their military kit. Such juxtaposition would be baffling in the absence of a deliberate purpose, particularly within a group that was notoriously draconian about telecommunications access. Frightening outsiders and inspiring insiders permitted the same organization to post images of human-headed soccer and of cats lounging among Kalashnikovs. Then-defense secretary Ashton Carter concluded that ISIS was "the first social media-fueled terrorist group."41

DEFEATING ISIS

The grisly successes that ISIS inflicted, in the Middle East, in affiliated attacks by individuals and small teams of like-minded extremists, and across cyberspace, could trigger conclusions that an aggressive leveraging of social media represents an irresistible asymmetric weapon for use by underdogs of any potential stripe. The power of social media usage requires attention and study.


However, this does not mean that kinetic fighting ceases to play a key and central role in warfare. Whatever changes activity in the cyber domain is exerting on conflicts, the presumption that cyberwar or online information operations have extracted combat from warfare would be extremely (even dangerously) mistaken. As with the invention of weapons such as the tank or the airplane, social media should be understood as becoming a part of the fighting environment rather than a replacement for it. Generals and soldiers in the first decades of the 20th century learned that armor and aircraft were useful parts of combined arms fighting, and that even if they proved indispensably necessary they were unlikely to be solely sufficient for victory. The fate of ISIS as a quasi-state reinforces a similar set of conclusions.

The first actions meant to curtail ISIS's Twitter activity exerted an apparently desultory effect, and clues to the ineffectiveness had appeared even before ISIS began its assault on the cyber landscape. Nine months before Mosul fell, al-Shabaab terrorists killed sixty-seven people in an attack on Nairobi's Westgate mall in September 2013. The terrorists used Twitter as a platform from which to proclaim their attack and to project misinformation that was apparently accepted and disseminated by media outlets unaware that their reporting had been coopted by terrorists. Twitter eventually responded by suspending a terrorist Twitter account, although al-Shabaab simply set up other accounts, and the effect was negligible.42

Twitter's early response to ISIS bore many of the same hallmarks. Accounts such as @IslamicState and @jihadISIS were closed, but the same users simply created new accounts, frequently providing only minimal modification to the name, such as adding a digit to the end. ISIS marked the hundredth iteration of the @IslamicState account by posting a picture of a birthday cake. Some analysts have optimistically pointed to account suspensions and suggested that the inconvenience of these suspensions "changed the once-free terrain for ISIS," even citing one ISIS account in mid-2015 complaining that "Twitter has become a battlefield!"43 Since Twitter became a battlefield through ISIS's own introduction of the platform to the conflict, the comment in isolation looks as much like a statement as a lament.

The ISIS Twitter footprint receded significantly over the next two years. Twitter, perhaps seeking to emphasize the contrast with its earlier reluctance to become involved in content moderation even where it related to violent extremism, announced in August 2016 that it had suspended some 360,000 accounts between mid-2015 and mid-2016 because of their posts touting terrorism.44 By the end of 2017, a grand total of 600,000 account suspensions had been tallied.45 As much as this formidable number of account suspensions reflects Twitter's efforts to curb violent speech, it also indicates the ease with which ISIS sympathizers were able to relaunch new accounts. No serious estimate of ISIS-affiliated Twitter users comes close to approaching the number of suspended accounts, meaning that the users whose accounts were suspended simply launched replacement accounts, which then had to be discovered and suspended and replaced in their turn.


When propaganda works, it tends to do so because it plays on something factual or at least plausible and then extrapolates from this to project and normalize further ideas. In the absence of other factors, ISIS accountholders and Twitter employees could have played a game of account-suspension merry-go-round endlessly. Ultimately, what would change the online context would be changes on the ground.

This is evident in adjustments observed in ISIS social media activity. As with other propaganda campaigns, ISIS worked to "seem disproportionately stronger than it was." This was something it could plausibly accomplish while on the attack, because it could project and magnify its successes to make them self-fulfilling in the minds of its opponents. But battlefield reversals can be covered by lies for only so long. As its holdings on the ground in Iraq and Syria shrank, its previous pattern of calling on sympathizers to travel to the quasi-state became increasingly impractical; this coincided with an increase in ISIS's use of social media to encourage distant sympathizers to stage lone-wolf attacks in their own homelands rather than attempt a probably unsuccessful journey to a shrinking extremist pocket.46 If propaganda builds on success, it does also need enough success from which to build. Lacking that critical mass, ISIS fizzled as a state, and its interest in generating follow-on attacks forms a predictable but very different kind of approach than ISIS pursued when it possessed a plausibly viable heartland for jihad.

TWEETS FROM RUSSIA

Russia's storied pedigree in information operations from the Cold War indicates an interest in and aptitude for disinformation long antecedent to the development of the cyber domain. This legacy, and subsequent activities that forensics has also traced to Russia, illustrate the relevance of social media as an avenue for information operations that have strategic goals but may be only indirectly related to violence.

Building is clearly more challenging and complex than tearing down. Even building an information operations image requires work in its construction and in its preservation. This may help explain why ISIS was reportedly compelled to counter rumors that it practiced female genital mutilation:47 although hardly squeamish about inflicting pain through graphic violence, the terrorist group wanted to use its violence to propel a particular brand as part of a specific messaging campaign, and this could require counterintuitive cases of protecting its brand against disturbing rumors.

A campaign that is geared to advance by tearing down offers a particular opportunity for success.


Elizabeth Dubois of the University of Ottawa has suggested that "chaos changes the balance of power internationally, and so if you don't like the amount of power you currently have in the global system, creating a chaotic system is going to help you, potentially, maneuver to a position where you have more power."48 Many analysts have noticed that entities such as Russia's Internet Research Agency have identified and worked to capitalize on the ways that social media can "just as easily incite disagreements, fray social bonds," and promote division as bring distant people into dialog.49

Researchers have tracked cases in which false rumors were disseminated via social media to attract attention and even impact behavior, and forensics experts have in some cases concluded that the trail led to the Russian Internet Research Agency. In one example, erroneous reports of an explosion at a chemical plant in St. Mary Parish, Louisiana, snowballed as concerns about a disaster spreading toxic waste were reinforced by SMS messages advising residents to take shelter. The rumor, later traced to the Russian Internet Research Agency, was disseminated on September 11, 2014. The campaign coincided with both the anniversary of al-Qaeda's attacks on U.S. cities and the high tide of ISIS military fortunes across Iraq and Syria, encouraging an understandable but mistaken belief that the rumor had been the work of ISIS.50

Russian accounts' seeding of political discourse, particularly during election seasons in democratic countries, became a charged topic in the United States following the 2016 elections. This may in part be a result of inherently indemonstrable allegations that these untoward activities impacted the election enough to be the key factor in determining its outcome. The unfortunate upshot of this debate is twofold: it sustains challenges to the veracity of a decided election, and it hinders attempts to undertake a nonpartisan examination of how and why information operations might have unfolded.

Investigative journalists traced accounts such as "Angee Dixson" to a horde of 60,000 bot accounts operated from Russia. Telltale signs that "Angee Dixson" was not who "she" appeared to be included a typical bot tendency in its method of posting web links through "shorteners" and a profile photo of a German model.51 Other research traced the same Russian-steered account for a year as it periodically reinvented itself: from a young African American angry about (fictitious) KKK activities on the University of Missouri campus, to a white-supremacist German eager to oust Syrian refugees, to a middle-aged American woman supporting Donald Trump's candidacy and named "Deplorable Lucy," in reference to comments Hillary Clinton had vocalized about "half" of Trump supporters comprising a "basket of deplorables."52 An interest in sowing chaos forms the common thread among these repeated facelifts of the same Twitter account.


Some have suggested that the bots and the human-controlled accounts that they echoed sought to establish "the appearance of a popular consensus to which others [begin] to adjust," so that real people's outlooks are remolded to the taste of an information operation's planners.53 Perhaps. But an alternative may be that the campaigns aim less ambitiously, and that their objectives tend less toward particular outcomes or specific, alien-manufactured consensus viewpoints than toward the nurturing of chaos and the fostering of discord. After all, tearing down is easier than building up—easier, whether the building alternative is something good or something bad.

Observers generally conclude that the information operations directed at the 2017 elections in France, which were tailored to erode the left-wing candidate's position, failed, since that candidate won the country's runoff election. While the evidence aligns with views that those operations failed to impress chaos onto the election season itself, subsequent instability such as the "yellow vest" riots in Paris detracts from notions that the United States was uniquely vulnerable among democratic societies.

Nor has involvement been "limited" to the relatively plausible deniability of so-called sockpuppets and bot accounts focusing on elections in India, the United States, and France, and on the political discourse in Germany regarding Syrian refugees. The Twitter account for Russia's embassy in London asked rhetorically, "No trust in Britain's best friend and ally?" during British Prime Minister Theresa May's first state visit to the United States following Donald Trump's inauguration.54 If imposing foreign chaos seems an unlikely policy goal rather than an intermediate step toward establishing some more specific political scenario, perhaps the missing piece involves the sovereignty issue. Journalists experienced in Russian affairs have suggested that a long-standing goal in Moscow has been ensuring continual backdoor access into various social media and online communications services, ranging from Facebook and Twitter to Gmail and YouTube.55 One is left to wonder whether seeding discord might be meant as a way to impose chaos, and whether chronic chaos might be intended as a tool not only for facilitating short-term gains such as the re-annexation of Crimea but also, in the longer term, for confounding democratic countries and capitalist institutions into accepting and embracing ideas that ease a shift from a multi-stakeholder model of internet governance to one more in line with Kremlin conceptions of cyber sovereignty.

VARIANT FORMS AND FUNCTIONS ENTER THE FRAY

Although Facebook and Twitter have arguably been involved in the most infamous cases of social media's strategic leveraging to date, conflict-related use of social media spills far beyond those platforms. It includes the development of regional parallel platforms; it also includes the utilization of other mainstream social media platforms for similar purposes; finally, still other social media apps are being adopted because of the opportunities they lend for more discreet communications.


VKontakte consciously competes as the Facebook of the former Soviet bloc. Designed in 2007 as Facebook's popularity was skyrocketing, VKontakte aimed to uncannily resemble Facebook's feel, fonts, and blue-and-white color scheme. It is the predominant social media platform in Russia, Belarus, and Kazakhstan, ranking among the most popular social media options in Estonia, Latvia, Kyrgyzstan, Moldova, and Ukraine as well. Analysts have observed that although identity registration is a precondition for VKontakte users, the site has long proven popular with intellectual property violators. The site's creator was unceremoniously ousted in 2014, and the site is suspected of working in alignment with regime preferences.

As frictions grew between Ukraine and the pro-Russian separatists in the eastern part of the country, fake news reports of atrocities began to appear both on Facebook and on VKontakte, ensuring the reports' exposure to social media users in the region. Accusations of Ukrainian army atrocities against the ethnically Russian Donbass population spread to Twitter as well.56 This points to how information campaigns, far from being confined to one or another form of social media, can leverage different formats and platforms to maximize exposure and to encourage an echo effect.

The quest for social harmony in China, reputedly interpreted as an ongoing campaign to maintain social control and monitoring, includes not only programs like the social credit system mentioned in the previous chapter but also an array of national platforms that facilitate the monitoring that a social credit system requires in order to function. Google discovered in 2010 that an unstated price of doing business in China was not only an explicit sharing of company information but also a quiet and persistent effort to hack into and explore Google's systems and services and the data of its service users. The fallout from this discovery precipitated Google's withdrawal from China, but Chinese social media platforms have stepped in to fill the breach created by the absence of foreign services. Google services such as YouTube are blocked, and their role is taken by Youku. Baidu fills a number of roles, including the search engine functionality Google offers to most of the planet. WeChat reportedly trumps its external predecessor/counterpart WhatsApp in simple functionality; microblogs such as Sina Weibo fill a Chinese niche that Twitter occupies across much of the rest of the world.57 Substituting national stand-in services for their more global brethren (and, typically, models) ensures that at least a degree of social media functionality is restored while the regime's opportunities to monitor and even finesse discourse are retained.

Big opportunities for communication and commerce entail opportunities for others as well, and this raises the kinds of moderation and governance challenges that repeatedly seem to have taken Silicon Valley entrepreneurs by surprise.


Google, fueled by the ultra-popularity of its landmark search engine, has expanded prodigiously. Recognizing the budding popularity of video sharing, Google acquired YouTube in 2006, just twenty-one months after the latter's founding. In the run-up to the purchase, YouTube declared that downloads had reached 100 million per day;58 a decade later, an estimated 400 hours of new video content were being uploaded each minute. By 2018, YouTube ranked second among visited websites—behind Google itself—indicating the commercial foresight of the purchase.59

Accessibility facilitates popularity, and popularity partly feeds itself, but this also means that videos with political protest messages, graphic violence, or intellectual property violations arise. Governance attempts have included work to take down videos displaying violence, although confusion arises when blocked videos are in fact the work of people working to uncover and decry such violence. Faulty or misleading tags can bring further confusion, akin to hashtag distribution: representatives of Anonymous reportedly sought to disseminate leaked materials during "Operation Leakspin" by posting videos conveying those materials under entirely dissimilar and unrelated tags that ostensibly ranged from economic to entertainment topics. Researchers find that these outlets are not uniformly cooperative or diligent in curtailing improper activities (where consensus in definition might be possible) or in capturing evidence that might be useful for forensic study. Instead, by wiping away evidence, many companies digitally use "a vacuum cleaner [on] the scene of a crime," obstructing later analysis that could understand and prevent recurring issues.60

Nor are applications used only for information campaigns bent on directly swaying opinions or discourse. Many apps provide an element of privacy that had not been possible before the digital age brought reasonably effective encryption. Security entities have long been uncomfortable with sophisticated space-based imagery and digitized mapping falling into sinister hands (in the way that the jihadist attackers against Mumbai in 2008 used Google Earth, in addition to highly portable Go-Pro cameras, in the planning phases leading up to their attack); meanwhile, "terrorist organizations have all relied on cheap and ubiquitous cell phone text-messaging to exercise command and control," and the introduction of encrypted messaging apps has advanced the viability of this approach to combining private communications with coordination.61 The privacy permitted by social networking apps was useful in organizing the anti-autocratic protesters of the Arab Spring, but the same and other apps, ranging from WhatsApp to Telegram to Surespot, would also be used by extremists such as ISIS for their communications and for the latter phases of their grooming of potential recruits.62

Students of technology history have long understood that technologies do not have sympathies and do not play sides or favorites.


Tools were fashioned to fulfill a purpose, and they can be used for that purpose—or alternatively adapted for other purposes—by anyone with the opportunity and the wherewithal to do so. There is no feature that can be established to guarantee that a private communications app will only be used to protest against a dictator and not to plot a terrorist attack, or to distribute photos of a cute pet rather than to spread false accusations. One writer has suggested that "ultimately, social media is simply a hyper-expedient means of communication that is on the same evolutionary path as the signal flare and the telegram."63

Like the Vichy French police functionary in the classic film noir Casablanca, Silicon Valley appears to be shocked—shocked—to find that gambling is going on when it comes to the repurposing of social media and social networking apps. Even scholars working to explain manipulation of social media seem surprised that a platform that can be leveraged for political purposes might be leveraged in different ways by more aggressive actors.64 Journalist David Sanger describes how the creators of the internet giants "convinced themselves that once they connected the world, a truer, global democracy would emerge. They rejoiced when Twitter and WhatsApp made the Arab Spring possible and were convinced they had built the weapon that would tear down autocrats and beget new, more transparent democracies."65 Such an outlook ignored that weapons are tools, and tools seldom voice a preference about who uses them. Once transformed into weapons, social media platforms are technically eligible for use by any and all comers. This is an idea that clearly frightens author Alexander Klimburg: "as bad as the vision of kinetic cyber conflict is . . . the idea of a full-blown information warfare defeat is even more disturbing. For at worst it would mean not simply a loss of national prestige or a shattering of alliances but even a fundamental weakening of democracy itself."66 These issues raise questions about the effect of social media on the enunciation and mobilization of ideas—and of actions.

CHAPTER 6

Unpacking the Mythologies of Social Media

SOCIAL MEDIA, ACCESS, AND THE DEMOCRATIZATION OF SPEECH

The impact of electronic connectivity is profound. Its sheer pace has been equally staggering. The unveiling of the internet in the closing years of the 20th century unlocked an unprecedented expansion in access to information—provided that a person belonged to the less than 15% of Americans whose homes had a computer and an ability to connect online, or that a person could otherwise access such a machine. Looking back from a perspective two decades into the 21st century, it is important to recognize that these personal devices were typically full-blown desktop computer stations, that they almost always carried a cost in the thousands of dollars without cheaper alternative options, and that internet connection speeds were measured in kilobits per second, roughly two orders of magnitude slower than connection speeds a quarter century later. Memory, hard drives, and processors have all experienced equivalent revolutions in capacity, alongside other improvements in areas such as visual and sound experiences.1

Connection in the past quarter century has become nearly ubiquitous not just across most of the United States but globally. The rates of connection in underserved parts of the world have been little short of stupefying: between 2000 and 2007, internet usage more than doubled across the world and increased by 870%–920% in Africa and the Middle East. Since many of the places that saw the fastest proportional growth in the first years of the 21st century were afflicted with poverty or violence, places "starting out from a baseline of practically no connectivity . . . are migrating online" most abruptly.2

Singer and Brooking rightly characterize how "the internet has left adolescence." Asia, home to 60% of the world's population, also holds over half of the world's internet population; many might be surprised to discover that Africa is home not only to 15% of the world's population but also to 15% of the people who are online.3


As these trends were just beginning to appear, observant scholars noted that "the ability to receive and share information globally has created societies wherein actor and audience become one and the same." Little wonder that contemporary optimists predicted an era of "disintermediation" in which the planet's populations would be democratized in their access and activity, and power differentials between people and institutions would be flattened.4

The emergence and rapid prominence of social media (not to mention of pocket-size devices that radically outperform the desktop forebearers of the internet's childhood) nurture expectations that connection is inherently democratizing and universally freeing. The mobile phone was quickly supplanted by mobile smartphones such as the iPhone and its Android counterparts, and as a result the internet came to be the crux of a phone's identity rather than "a gimmick or afterthought," as had been the case with earlier mobile phones granted a clumsy ability to interact with the web. Even when scholars voiced concerns about unsettling potential trends in the future of the internet, the prevailing conclusion was that "today's Internet structure empowers individuals" through its decentralization.5 Chronically scant awareness has been extended to the concept of disenfranchisement in an internet-centric world for those who are not connected—and in any case, the overwhelming presumption is that if such a situation is a problem, then the solution is further connectivity.6 This points to a logic so satisfied with its own infallibility that no problem, not even a problem related to electronic connection, is deemed unsolvable through still more electronic connection.

More participants and more activity result in greater cacophony. Very quickly, the question arises whether the objective is a democratization of speech or a democratization of being heard. The two are not synonymous, as students of free speech debates will recognize. Singer and Brooking succinctly note that "virality is inseparable from reality,"7 and this is because sensitive algorithms do not allow voices to carry unless the algorithms identify indicators that others want to hear the message. The resulting "flocking together" effect of homophily is a virtually foregone conclusion, since not only do people typically respond positively to a sense of shared ideas and community, but algorithms are written to respond and cater to that clear preference. As noted earlier, researchers in 2013 and 2014 working in China and for Facebook separately confirmed that eliciting emotional reaction maintains human attachment and interest.

These tendencies are connected to a variety of reactions and results. One is a common conflation of responding (reflecting deliberation and thought) with reacting (reflecting reflexive effects triggered by stimuli). Another, which Silvestri identified in her study of social media usage by deployed U.S. soldiers and Marines, is a "tethering" effect: personnel who possess an unprecedented and instantaneous connection to the home front, one that had been impossible in earlier wars, carry a burdening sense of obligation to reside continually (in an emotional sense) both at the front with their comrades and at home with their loved ones thousands of miles away.8


These effects carry an obvious relation to the role of the cyber domain in war and conflict. Less inherently obvious dynamics also impact world affairs in ways that are coming to touch regularly on various forms of conflict. "Click farms" in the slums of what was commonly referred to during the Cold War as the "Third World" can factor importantly in shaping perceptions on distant continents, because a manufactured trend can shift conversation and opinion.9 As with other useful instruments, this is bound to be explored for exploitation in connection with conflict, including warfare.

SOCIAL MEDIA, CACOPHONY, AND THE DEMOCRATIZATION OF SPEECH?

Inevitably, the vast volume of traffic online, including on social media, invites people to use the digital landscape as a resource, including as a resource for information on current events. The exact degree to which people rely on social media for their news is difficult to confirm; however, different findings have included that social media has overtaken both television and print media, that 39% of people get at least some of their news through social media, and that 39% of people engage with social media in part to derive news. Still other studies set the numbers higher, with mobile devices (well adapted for social media) serving as the principal source of digitized news for almost three-quarters of Americans.10 Because social media is a venue through which people both interpret and participate in the news, and because both their consumption and their overt participation impact the status and viability of topics for further discussion, using social media as a news source helps transform it into an arena worth shaping for particular purposes.

The internet giants' overall impression of and reaction to the Arab Spring was shaped by their pride and ambition, natural suspicions of old-style authoritarianism, and apparently hazy impressions about historical patterns. In a certain vein, their views seemed to touch on those of Rid, whose mantra that "cyber war will not take place" includes an assertion that internet participation will not only not increase global violence but actually reduce it. Rid has argued that opportunities for subversion, made possible by the internet, would drive change without bloodshed. "In contrast to what some security scholars seem to think, [subversion] is not principally illegal and is not even principally illegitimate—only the most extreme forms of subversion are."11


This logic misses, however, that it presumes the kind of tolerantly accepting definitions of "subversion" that are most likely to exist in places where representative opportunities arguably leave the fewest people without voice or recourse for addressing their problems and complaints. Among societies that tout social uniformity or unity or harmony, subversion is defined more broadly and treated more harshly than in democracies. Reflecting its distaste for the social inharmony that comes from dissidents' dissonance, China's Great Firewall program guards against internet users' attempts to learn about events like the military crushing of the June 1989 Tiananmen Square demonstrations. True, the same Great Firewall failed in the face of Chinese users' criticisms after a schoolhouse exploded in 2001,12 and periodic failings of systems designed to block or sequester "dangerous" speech will likely recur. This does not guarantee, however, that the results will be free of violence, in the event that the minders grow nervous about their power to control or the minded believe they have achieved a critical mass to force change. Perhaps part of why the Arab Spring "represented a high-water mark" more than "the first steps of a global, internet-enabled democratic movement"13 has to do with these factors.

Just as achieving global consensus about what constitutes "subversion" and the danger it poses is elusive, fact is proving hard to define and protect. The challenge is considerable when apparently everyone is a speaker as well as a consumer of information. One journalist noted, just a year after the unveiling of the first modern web search engines, that "the irony of the information age is that it gives new respectability to uninformed opinion."14 This is a trend that flies in the face of the philosophy of people such as the late U.S. senator Daniel Patrick Moynihan, who held that people are entitled to their own opinions but not to their own facts. That sentiment, used for years on the banner of the political veracity website FactCheck.org, was rightly hallowed; it is a short leap from possessing one's own facts to inflicting policies based on those facts upon the lives of others.

Suppression of dissent has long forced activists beyond the borders of their country. Frequently, such dissenters have desired democracy for their homelands, although figures like Vladimir Lenin and Ayatollah Khomeini demonstrate that some exiled dissidents seek to interpret and transform "the will of the people" into a new iteration of suppression in turn. Where historical overseas dissidents had relied on smuggled pamphlets or audiocassettes carrying their message back to the home country, the internet allows a far more rapid, long-range, and more disparate propagation of ideas.15 Distinguishing the good from the bad is largely a product of discernment, which is influenced by values. These are inherently personal factors. The validity and authenticity of overseas dissidents can be challenged, as can happen with those whose ancestry or empathy inspires engagement in the discourse.


Ordinary inhabitants of the infosphere, exhausted after tangling with irate accounts that turn out to be trolls or sometimes even bots that cannot be persuaded or mollified, are brought to a confusion in which actual "dissident voices appear to be another mirage," thus stifling the impact if not the enunciation of political opinion.

Social media platforms that claim to focus on facilitating human connection nonetheless aim to maintain profitability; their designers paid scant prior attention to the prospect that these spaces would be used for political (let alone weaponized-information) campaigning. It is both expedient and advantageous to persist in treating political messaging like commercial advertising. The internet grew to popularity because of the tremendous access it provided and the anonymity it implied to users who believed that "on the internet, no one knows you're a dog." Microtargeting, founded on conclusions derived from data collection and opaque algorithms, paves the path toward a house-of-mirrors effect in which users encounter occasional but unsettling clues that anonymity is largely mythical, yet the information landscape remains tailored (or narrowed) to meet their preferences—as those preferences are interpreted through data analysis.16

Singer and Brooking argue that "companies must proactively consider the political, social, and moral ramifications of their services."17 Such a case can certainly be made, but it immediately raises other questions about what kinds of ramifications should be encouraged and enabled or discouraged. Beyond even that lie questions about how bad effects or ideas (once they are defined) should be curtailed. Such questions touch heavily on topics that once belonged to official institutions of governance. But the governing of speech means that issues like net neutrality quickly carry high stakes for a range of issues beyond the technical considerations that had earlier framed them, and arguably even "neutrality . . . itself [is] a value judgment."18 Google's executive overseeing Europe and the Middle East noted, "I believe that the individual should have the right to choose their own moral values and that the state today should not replace citizens with this incorrect and amoral system."19 To borrow from Jean-Paul Sartre's most famous concept, to what degree is it possible for an internet giant to become a global information power and yet seek to choose not to choose?

Intelligence veteran Ron Marks observed that the perpetrator of the disinformation campaigns during the 2016 U.S. election season "wanted . . . to mess up that [democratic] system" and succeeded. However, Marks continued, the captains of the social media giants have ignored how the popularity of their platforms transformed them into public communication utilities, and "to sit there and pretend that they are neutral in this process is laughable."20 Perhaps this partly explains Facebook's stages-of-guilt approach to belatedly acknowledging Russian fingerprints identified by cyber forensics experts months earlier, and why social media companies repeatedly failed to "remedy the ills that played out on their networks" before "they became larger problems, even though executives could see these abuses unfolding in real time."21
More proactive actions would directly impact the bottom line in the short term. Furthermore, taking earlier and more forceful action would also have demanded realizing among themselves—and convincing their multitudes of accountholders—that trouble existed at a time when the signs were more nebulous and less severe. It would also have required the kinds of steps that necessarily could be interpreted as combatting the ultra-democratic environment that lies at the heart of what social media moguls say and believe they provide. Finally, whenever even a shadow of circumstantial evidence permitted such an interpretation, the sockpuppets and their bots and even some unpersuaded accountholders would pour out vitriolic accusations that the social media giants had betrayed the flow of free information that they had promised to deliver. This phalanx of disincentives provides ample rationales for why social media companies instead allowed undemocratic leveraging of their platforms, for a range of purposes in connection to an array of struggles.

TRADITIONAL JOURNALISM, SOCIAL MEDIA, AND FAKE NEWS

The evident opportunity for anyone and everyone to participate in the dialog online, through social media, lends a compelling sense of authenticity to the new media form. That aura of authenticity is the holy grail of any persuasion effort because by conveying a message as informational or conversational, an audience is much less disposed to sustain the kinds of disbelief and criticality that could otherwise sink a sales pitch. Add to that the studies that find internet advertising to be considered more engaging than television alternatives,22 and the value of persuasion through online vectors becomes clear. These factors can become tarnished as technology ages and becomes normalized, but that also helps explain the continuing adjustments to appearance and functionality that characterize social media; the quest for novelty is in part an ongoing battle to retain the freshness that invites engagement.

People tend to see and interpret new technologies through the lens they had used in understanding older counterparts. This may reinforce the perception of novelty with the identification of the differences between new and old. The internet has sometimes been compared with Johann Gutenberg’s 15th-century introduction of standardized and interchangeable type for its revolutionary impact on the availability of information and on the opportunity to produce information that can be consumed. As the internet has expanded and matured, it has also morphed ideas about what it means to be an author or, for that matter, a journalist.23 Expertise, responsibility, and distinction from other pursuits are the hallmarks of professionalism, as defined by the landmark work on the topic.24 These might be thought of as the three legs of a stool.
While the exact merits of the argument can be debated, journalism in the United States and elsewhere has as a profession encountered criticisms about lapses in expertise and shortcomings of responsibility. Perhaps the perception of journalists seeking to be part of the story as well as reporters of the story facilitated the erosion. But it left the corporate concept of the profession’s own distinction as a lone unbattered leg in journalism’s stool. Thus, when the idea took hold that “‘news’ originates not just with journalists, but with anyone at the scene with a smartphone,”25 the result was to fatally undercut an already vulnerable institution. Furthermore, the notion that journalists often seek to be part of the story poisoned any serious prospect of concisely explaining how someone equipped with a smartphone and themselves involved in an incident could not also stand in the role of a journalist while also being part of that story. With different timing, social media’s impact on journalism might have been very different, because the context might not have entailed journalism’s overreliance on a single leg of its professional stool.

On social media, which is fundamentally less heavily curated than other more centralized forms of media, the role of moderation did not evaporate but instead adapted into prioritization, as the habits of contributor-consumers were watched and measured, and users effectively decided what was news and what was not, based on what gained more attention relative to other topics and accounts.26 Many analysts have already observed a widespread reaction by the traditional media outlets: the embrace of social media tools and the reporting of social media accounts as almost a gold standard of authentic in-the-moment facts. The Indiana School of Journalism found that the leading reason that professional journalists used social media was “to check for breaking news” on platforms such as Twitter. As Jarred Prier observes, “overreliance on social media for breaking news can become problematic in the midst of an ongoing information operation” because “if an adversary takes control of a trend on Twitter, the trend is likely to be noticed by mainstream media journalists who may provide legitimacy to a false story—essentially turning fake news into real news.”27 Because journalistic practices have (perhaps understandably, but regrettably) adapted to the prevalence of social media by enlisting its use, alert and opportunistic operatives who direct information operations campaigns will become even more likely to seek the use of social media as an avenue for seeding discourse with fabricated accounts and tailored interpretations. @Jenn_Abrams reportedly collected 70,000 Twitter followers, and the combination of “her” Tweets and the large numbers of her followers apparently convinced a range of media outlets, including BBC, BET, Breitbart, Business Insider, Buzzfeed, CNN, Fox News, France24, Huffington Post, Infowars, the New York Times, Sky News, USA Today, and the Washington Post, each to project “Jenn’s” rumors as news.28
This staggering assemblage of media outlets points to the problem not being limited to one side of the aisle in terms of political perspective (not just Fox and Breitbart, but the Huffington Post, CNN, and the New York Times), and it also shows that the problem is not limited to one country or one side of the ocean (USA Today and the Washington Post but also BBC, Sky News, and France24). This should not be shrugged off as a small issue, or an issue plaguing one political party, or an issue concerning one country. Furthermore, traditional media’s attempt to leverage social media in the short term virtually guarantees that influence campaigns will work to leverage social media and garner the traditional media and public opinion dividends that seem likely to accrue as a result.

AMBIGUITY WITHOUT FALSENESS

Even when accounts are not fabricated, yet another pitfall exists from projecting social media posts as representing authentic truth. Philip Larrey, a senior logician at Pontifical Lateran University, has explained that “there’s no such thing as a non-perspective perspective. You’re always choosing data and this is what makes a great journalist.”29 Many (even most) accounts may be genuine. But the burden of choosing the data means that by selecting which accounts to highlight through reporting, traditional media decides which of these genuine accounts is projected to the larger world as being representative and significant. An individual’s genuine experiences are inarguably significant to that person, but with nearly eight billion humans on the planet, persuading the rest of the population that one or another genuine experience is disproportionately significant and real can have distinct strategic consequences. In the Middle East, for example, correlations have been identified between the tone of world media coverage and the tempo of Israeli military actions countering terrorist attacks from bases on the borderlands.

As a journalist-author, Patrikarakos includes a study about the projection of genuine experiences as part of his book War in 140 Characters. For Patrikarakos, social media is altering the very nature of warfare. He studies Farah Baker, a sixteen-year-old Palestinian from a well-to-do family living in the Gaza Strip. In June 2014, just as ISIS was securing its hold on Mosul, Hamas members murdered three Israeli teens, and an angry public response precipitated the lynching of an Arab teen, catalyzing Hamas rocket attacks and an Israeli military response that extended into late August. Baker was one of a number of Palestinians and Israelis who tweeted their experiences and feelings during the weeks of violence. Her reports were genuine accounts of her perspective during Israeli air strikes in Gaza, but they were not the only accounts coming from Gaza. Yet hers was selected for reporting via The Guardian, the Wall Street Journal, and elsewhere; her social media following leaped from 800 to 200,000 in a matter of weeks, thanks to the spotlight provided by traditional media for her social media account.
Foreign Policy named her one of 2014’s hundred Leading Global Thinkers, not as an “agitator” or “advocate” but as a “chronicler,” along the lines of a journalist. Patrikarakos speculated on the reasons that Baker was selected among the “thousands of Gazans [who] tweeted daily about the horrors of the war”: her English skills certainly gave her a leg up relative to many of her tweeting compatriots, and as a telegenic adolescent woman she appealed to audiences in a way that media recognized.30 Baker’s Twitter following has retained the gains it made in the summer of 2014, with a comparable following five years later. Likewise, half a decade after her accounts of Operation Protective Edge, she continues to describe what she witnesses: “The bomb was close to my area :( #Gaza #GazaunderAttack” or “OH MY GOD HUGE BOMBS JUST MADE ME JUMP FROM DEEP SLEEPING AND ROCKED THE WHOLE AREA OH MY GOD #Gaza 4:52 A.M.” Notably too, she inserts an editorial comment: two photos of debris are accompanied by the caption: “results of the last Zionist colonization escalation on #Gaza Strip.”31

In identifying social media event-chronicling in ways that align with journalism, traditional media bestow a media status. This can change the light in which more editorially oriented social media posts are interpreted. Traditional media may be playing a losing game by trying to remain relevant through (and despite) upholding social media examples as sources of raw news information, but the concept that the internet would lead to an inexorable decentralization of authority and power seems to be only partially realizing itself. Alert state actors seeking to influence opinion either at home or abroad can be expected to study these trends. The television and online semiofficial news outlet Russia Today (RT) has attracted considerable attention for activities that bear this hallmark. Launched in 2005 with a $30 million annual budget, RT has expanded prodigiously until it now boasts more YouTube subscribers than any other single news broadcaster. Analysts have pointed to an alignment between RT reports, trend hijacking on Twitter, and statements by state officials such as Russia’s foreign minister; these combine to establish an echo chamber effect in which assertions from one source can be repeated and appear validated by another source. RT and its Russian news peer Sputnik were also among the traditional media outlets that parroted “Jenn” prior to her being discovered as a product of the Russian Internet Research Agency.32 This should not be easily dismissed as a temporary anomaly. As Patrikarakos notes, “the Internet will inevitably come to benefit the oppressor as well as the oppressed; . . . in the end the state will always use the same tools to fight back.”33 Questions emerge about the impact of social media, relative both to institutional advantages in the realm of speech and the state’s monopoly on violence.
SOCIAL MEDIA’S IMPACT ON THE INSTITUTIONAL ADVANTAGE IN SPEECH

The seeming paradox about the internet’s long-term effect on information and speech becomes much easier to understand when one recalls that technologies do not choose their users and that tools are regularly repurposed in order to gain or retain relevance. Thus, although internet technologies can facilitate communication between citizens and can decentralize power, the same technologies can also be employed to shape perceptions through information campaigns and to filter and quash disfavored ideas.34 This is not to say that the internet does not regularly permit decentralizing trends to advance. Even if Joseph Nye did not intend this by his comment that “civilian uses” of technologies, including the internet, “will complicate effective national security strategies,”35 the point remains valid. Social media networking has facilitated countless protest movements in their organization phase, and defending regimes have been left with unenviable alternatives such as allowing them to proceed, attempting to seed organization with their own messaging, tracking organizers or filtering communications, or simply pulling regions offline in hopes that protest efforts are not sufficiently prepared to occur in any case. Sometimes, very popular protest brands (such as #IranianRevolution) have even come to attract commercial interests to participate or facilitate activities as part of their business.36

The ability to counter the internet’s democratizing tendencies seems paradoxical, in part, because of presumptions that accountholders are genuine users. When manufactured entities partake in open discourse, the result is not actually a conversation, and this manufacturing of participants is the starting point from which one should consider an institution’s attempts to shape opinion. Chapter 4 noted the free added muscle that patriotic hackers can lend to society or nation-state. Their activities will hardly be impartial and may well be raucous, but patriotic hackers are genuine participants in cyberspace in some ways that are inconvenient or even problematic for their handlers. Their patriotic hacking engages interests and passions, but by definition, their primary motivation is a state-centered form of allegiance in their activism. Patriotic hackers require support that cannot be provided as compensation for their hacking without affecting the definition of what they are online. Nor can they be permanently considered “free chicken” for the governments they support. After all, their rowdier tendencies mean that they will eventually overstep the interests of their states: they will at times bite the hand that does not feed them, out of frustration that their zealotry is not paralleled by their statesmen and diplomats. Something like this has long been evident outside the cyber domain, and examples have arisen for the past two decades within it.
The solution, from the standpoint of states seeking to manipulate opinion in controlled ways, is the use of sockpuppets and other trolls. Those employed by the Russian Internet Research Agency are paid approximately $1,500 per month to churn out propaganda. While hardly a king’s ransom, one can certainly survive by taking the king’s shilling as a covert foot soldier in online struggles. Of course, they are expected to produce in exchange for their pay. The nature of virality online means that quantity easily drowns out quality, and the troll employee’s benchmarks are established accordingly. Commenting or blogging in widespread languages like English could be valuable, but when proficient English speakers are not available, leaning on technology crutches like Google Translate works in a pinch—sufficient for transmitting small passages of semi-coherent propaganda and doing so in volume. Reportedly, these jobs involve twelve-hour days in which a troll will post fifty times daily on various news articles, maintain half a dozen social media accounts on a mainstream platform such as Facebook, and post several times each day as each “character,” with an objective of collecting at least 500 friends or subscribers on each account. Troll activity on Twitter is believed to entail the maintenance of ten separate accounts, each posting an average of every fifteen minutes during the workday and compiling 2,000 followers.37

Certain factors assist undemocratic uses of social media. One is the fact that “social networks tend to follow a power law distribution, where a few powerful voices can have a disproportionate amount of influence on the information flow.” The dynamic is illustrated in the sketch below. For example, the Russian Internet Research Agency used a combination of purchased ads and simulated account traffic in an attempt to contour the outlook and behavior of different segments of the U.S. electorate. This included an estimated $100,000 on social media advertising through Facebook aimed at groups interpreted by Russia to belong to both political fringes in the United States, the plurality of which was directed at African American voters, with notable amounts also aimed at conservative voters.38 Researchers believe that “it is likely that the organic posts on Facebook, not the ads, had the most reach.” Indeed, analysis found that a mere twenty pages “represent[ed] 99.6% of all engagement” by Russian Internet Research Agency accounts reaching targeted audiences. Much larger numbers of accounts across various social media platforms were operated, in concert with ads bought on social media, and these facilitated the establishment of a simulation of legitimately viral content, but most of the messaging power was concentrated among a small number of accounts that “spread junk news” to “manipulate public opinion and subvert democratic processes.”39
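The concentration that a power law produces can be made concrete with a brief simulation. The sketch below is purely illustrative: the account count and the Pareto shape parameter are assumptions chosen for demonstration, not figures drawn from the research cited in this chapter.

    import random

    random.seed(1)

    # Simulate follower counts for 100,000 accounts using a heavy-tailed
    # (Pareto) distribution; the shape parameter 1.2 is an assumption.
    followers = sorted(
        (int(random.paretovariate(1.2)) for _ in range(100_000)),
        reverse=True,
    )

    total_reach = sum(followers)
    top_slice = sum(followers[: len(followers) // 100])  # top 1% of accounts

    print(f"Share of reach held by the top 1%: {top_slice / total_reach:.0%}")
    # A heavy-tailed distribution concentrates a disproportionately large
    # share of reach in a small fraction of accounts, which is why capturing
    # or simulating a few influential voices can move an entire conversation.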
Apparently, employees of the troll farms are not uniformly gung-ho zealots. Many of the employees are desperate for work and have stories like Vitaly Bespalov, who had attempted to find work as a journalist and accepted a position at a troll farm only after finding no relevant openings elsewhere and after being assured that the work “will be neutral” rather than propagandistic. Early phases of the work might involve massaging terminology in existing articles so that they would align with the state’s agenda and also remain high-ranking matches when Google and other search engine users sought stories on topics such as the fighting in eastern Ukraine. Redubbing Ukrainian troops into battalions of “volunteers” or “guards” and retitling separatist rebels into “militia” were minor adjustments that threw a shadow on the legitimacy of a defending state while legitimizing rebel groups. Terminological tweaks turned Ukraine (a country) into the Ukraine (a geographic region lacking sovereignty). For many, the work seemed like easy money. While some “true believers” might work among the personnel, others felt morally undisturbed by a conveniently acquired paycheck.40

But the monotonous propagation of lies and invective rumors—all while doing the electronic version of throwing one’s voice—can become exhausting. Being a troll is emotionally dispiriting work, especially for someone willing to contemplate its impact, but even generally it means placing oneself inside the house of mirrors half the day and bending reality to the creation of political surrealism. One former troll, who had been a Russian Internet Research Agency employee, was quoted describing the difficulty of pretending “to be a redneck from Kentucky, then . . . some white guy from Minnesota who worked all his life, paid taxes and now lives in poverty,” and then fifteen minutes later having to “write something in the slang of [African] Americans from New York.”41 Those who have left employment as trolls have encountered violent threats and recrimination for disengaging or speaking out. After all, when the suspected objective is both to solidify resolve and opinion at home while seeding foreign societies with doubt and disharmony,42 stories that detail the workings of the troll farms or similar entities undercut both goals.

Directly shaping opinion at home and abroad through seeding conversation is, of course, only one way that institutions can seek to reassert control in an internet age. Another approach is to curtail unwanted discourse. Blocking websites is one way to achieve this in the short run. Another method witnessed in the first decades of the 21st century involves monitoring nationals’ use of the internet and making sure that nationals understand the implications of this monitoring. The Great Firewall is an obvious example of the former, but it is far from the only example. More than two dozen countries do or have censored parts or all of YouTube access within their borders. Closing down social networking sites can lead to a leapfrog game between dissidents finding a usable site and authorities determining what the site is and closing it down in turn, leading to still more efforts by dissidents to locate a new alternative forum for networking. Some regimes will even seek out dissidents through bogus accounts for friending possible dissenters and monitoring their behavior, or by offering small cash bounties to online informers.
Some researchers have pointed out that, far from leaving states powerless to control or shape or monitor discourse, some aspects of monitoring have actually grown much cheaper in the internet age than they had been during the years coinciding with most of the Cold War. Many monitoring tools can even be acquired off the shelf. Oftentimes, censorship is partnered with semiofficial advice that people would enjoy happier lives if they opted to focus on appropriate and acceptable topics of discussion for their internet discourse.43

FROM SPEECH TO RECRUITMENT

The most infamous groups to leverage the internet and especially social media to support warfighting are the self-proclaimed jihadi entities aiming to impose their interpretations of Islam upon other Muslims and ultimately on people living outside the lands that had been populated by Muslims during the medieval era. Marginalized from traditional modes of communication and propaganda, such groups eagerly adopted internet means for distributing their ideas and connecting with sympathizers. Between 2001 and 2006, a period coinciding with the early phases of the Global War on Terrorism and bracketed by the year of the September 11 attacks and the uptick in insurgent violence that precipitated the U.S. surge in Iraq, the number of jihadist websites increased by an estimated 20,000%.44 The emergence of social media opportunities again showed jihadist propagandists as early adopters.45

Recruiting extremists poses complex challenges since it involves locating like-minded individuals who might face prosecution for their intentions and also because it means converting people who would not necessarily even favor major aspects of the philosophy of the movement recruiters want them to join. “Groups with extreme ideologies, from the Bolsheviks to the Nazis, are never mass movements, but rather are relatively small groups of true believers who take power and then impose their worldview on the masses.”46 The gulfs between the likely ideas of recruits and the tenets of extremist movements illustrate the reasoning behind assertions that extremists should be denied small but important rhetorical victories. Fanatical zealot groups like al-Qaeda and ISIS in the Middle East are commonly known as “Islamists” or “jihadists,” and some argue that using these terms (the latter of which refers to holy war) normalizes and lends a degree of credibility to their hate-filled and blood-soaked ideology, which actually opposes the ideas and practices of large numbers of Muslims.47

Patrikarakos’s research into social media’s use in war includes the study of extremists preying on lonely and emotionally vulnerable people seeking to fill an emotional void. The extremists portray their group as an answer to the recruit’s problems, rather than detailing the group’s controversial worldview.
Following the story of one ISIS recruit, “had she been born forty years earlier, her substitute might well have been Communism or even existentialism” rather than religious extremism.48 Research during the early years of the War on Terrorism observed that terrorist recruitment typically “presented [violence] in three rhetorical masks,” depicting it as the only recourse for resistance, as a counteraction taken in the face of brutality by authorities, and with violence cloaked in terminology used by peace groups in order to imply that violent groups have been forced into choosing violence. “What most sites [sic] do not feature is a detailed description of their violent activities,” notes a study from 2004, although it identifies Hezbollah and Hamas as “two exceptions to this rule.”49 Indeed, recruitment has remained to date the principal way in which terrorist groups have sought to use the internet.50 Website defacements and Twitter account hijackings do occur, but these appear to constitute harassment rather than destruction on the order of targeting a power grid or the other kinds of examples frequently cited by authors describing technical cyberwarfare. Defacing websites is a far smaller technical hurdle than hacking into critical infrastructure and remaining a persistent enough adversary to find and exploit glitches for a kinetic effect, and defacements can also provide notoriety that can be used in recruitment efforts.

A recognized value of the internet for militant groups is that an ethnic diaspora, which in earlier times might have diluted a group’s potential strength, can be used to enable a wider base of funding and support. The mixed outcomes of Chechen militants’ use of the internet in the years preceding the rise of social media speak to the finesse required in an information campaign. The internet proved useful for raising funds from the Chechen diaspora, provided a platform for displaying Russian atrocities, offered a means by which Chechen military successes could be broadcast, and allowed common cause to be established with other Muslims. However, haphazard understanding of the strictures of a messaging campaign can cost support; Chechen broadcast of the killing of Russian prisoners dealt notable damage to the movement’s image, in keeping with the trends identified in the cloaking of violence as being reluctant and responsive rather than graphic and aggressive.51

While recruitment and propagandizing have endured as the main purposes of terrorist online activity to date, scholars like Singer and Friedman have recognized that “as the cyber world has evolved, so too has terrorist groups’ use of it.” For example, individuals who have already been recruited can be further encouraged (and informed) about the tactics, techniques, and procedures of terrorist groups through online multimedia that provide tutorials on topics like bomb making.52 Additionally, the strategies for recruitment have morphed alongside the development of social media and the maturation of the internet and also alongside the rise of a group particularly keen on projecting its message through cyberspace.
ISIS VIOLENCE IN THE SOCIAL MEDIA LENS

During its brief but bloody life as a quasi-state, ISIS prioritized the establishment of “its own brand” online and especially via social media.53 Its social media activities defied the tendencies identified among most other terrorist organizations in their online messaging campaigns, however. ISIS used social media as if to establish its own claim on violence, to define itself as a state, and thus to alter the definition of statehood on the international stage.

In broad historical terms, the emergence of nation-states coincided with political concepts involving the state’s role in curtailing the violence that might otherwise result from chaos in a state of nature among humans. Strands of contemporary philosophy gave rise to ideas about the responsibility of the state to answer the interests of the governed even to the extent that when governance mutated into oppression, the oppressor could be overthrown so that a new governor could be established who would respond to the grievances and interests of the citizenry. While the political philosophies of figures like Robert Filmer, Thomas Hobbes, and John Locke in no sense substitute for one another and although they disagree starkly on important points, they represent some of the more iconic political philosophers on the cusp of the Enlightenment era. Such ideas provide a bedrock for precepts including that the state possesses a monopoly on the legitimate use of force and violence and that when other actors conduct violence their actions are inappropriate and merit censure or punishment. These ideas, coupled with ideas about the natural alignment of political state boundaries with cultural national populations, helped define the international frontiers and many of the political issues of the modern era. Although ostensible legitimacy in the use of force is far from the only characteristic marking a state, it is a significant factor with respect to ISIS and with respect to its use of social media in conflict.

ISIS rejected a range of predominant concepts and models about human and interstate relations. In its claim to constitute a Caliphate, ISIS aspired to a global reach beyond the narrow ethnic definitions that had until recent decades defined at least the conceptualization of nation-states. Although almost three-quarters of its propaganda appeared in Arabic, notable proportions in English (18%) and French (6%)54 pointed to an interest in luring a following beyond the Middle East region by employing two of the leading global languages. Crucially, ISIS’s online activities reflect its intent to alter the geopolitical landscape. Condemnations of historic colonialism or red-herring accusations about the Sykes-Picot agreement’s implications for the Middle East might be interpreted in this vein. More significantly, the movement’s online presence reflected this attempt to change the meaning of statehood and to conjure a state from extremist holdings across Syria and Iraq.
Interestingly, its direction can be recognized by comparing its online strategies and activities to al-Qaeda’s and seeing al-Qaeda’s apparent reaction to ISIS’s messaging campaign. Al-Qaeda efforts to create its own news outlet fizzled, and the group continued to rely on existing media in order to disseminate periodic announcements. Reportedly, al-Qaeda’s reclusive head Osama bin Laden scorned aspects of the messaging used by proto-ISIS figures; notebooks seized in bin Laden’s Abbottabad hideout compound included criticism that he had scrawled in frustration about fanciful killing machines touted in the extremist propaganda materials prepared by the future leaders of ISIS.55 Despite the two groups’ ideological alignment on radical interpretations of Islam, secrecy cloaked al-Qaeda in a way that led its strategists to doubt the effectiveness of blatant and graphic violence depicted by then-nascent ISIS groups.

This may reflect the fact that al-Qaeda understood itself to be a nonstate actor. Its attacks against diplomatic facilities, military installations, and, of course, its attacks on September 11 show that al-Qaeda was fully bent on conducting violence and using it as a tool to prompt fear through which to manipulate world events. That use of violence stands at the center of what defines terrorism. ISIS certainly utilized violence for similar terroristic aims. ISIS also wanted to create a physical caliphate, and this required the making of a quasi-state, which would hold territory and at least somewhat resemble a traditional geopolitical entity. Al-Qaeda critiques of ISIS need to be understood as quibbles regarding the practicability of ISIS’s geopolitical goals rather than constituting measurable daylight between the two with regard to ideological aims. Although disagreements among radicals can rapidly spiral into extreme manifestations, there is little difference between their objectives outside of the younger group adhering to a certain warped optimism about how quickly it could translate extremist violence into tangible gains.

This matters because it can help illuminate the reasoning behind ISIS’s bloodcurdling celebration of violence, in ways so graphic that al-Qaeda operatives reportedly critiqued them for their potential to backfire as propaganda. If one imagines ISIS’s overt celebration of its brutality as a political expression of itself as a state—that it implicitly aimed to claim statehood by demonstrating its monopoly over violence in the territories it controlled—then its rejection of al-Qaeda criticisms becomes more understandable in an abstract sense. Gamification of violence, such as purported ISIS interest in jihad games styled on popular console games like Grand Theft Auto,56 not only encourages early engagement by potential recruits but normalizes violence and sets it within a context of legitimacy that a war-oriented state would idealize.

Scholars have suggested that ISIS’s leverage of social media transformed the internet-equipped “Terrorism 2.0” into a “Terrorism 3.0.”57 Indeed, many anecdotes reinforce this conclusion. After killing forty-nine people in a gay nightclub in Orlando, Omar Mateen called emergency responders in order to pronounce his allegiance to ISIS and then used his smartphone to track whether his attack had gone viral.
Mateen’s actions reflect an acute interest in social media as an avenue for disseminating terrorist messages. Another particularly vivid example was ISIS’s use of social media to invite sympathizers online to “suggest a way to kill the Jordanian pilot pig,” or the uploading of countless videos of fiendish murder methods to websites, displaying the deaths of various people caught in ISIS territory and anathema to its beliefs.58

But a further, and also disturbing, conclusion can be drawn as well. The shocking images of wanton murders, carefully edited by ISIS so that they would be gruesome enough to frighten but kept just short of a graphic threshold that would lead to media outlets’ censure and the consequent inhibiting of dissemination, advance yet another sinister purpose.59 Unabashed portrayals of murderous executions might have reflected the group’s attempts to establish itself as a state and bestow upon itself the right to possess a monopoly on violence. ISIS appears to have attempted to demonstrate its statehood through the seizure of territory, confirm it through the establishment of a theocratic governing elite, and broadcast it through the advertising of official massacres. The fact that the eradication of ISIS’s territorial holdings forced changes to its social media propaganda suggests that its propaganda was indeed bent not only on celebrating ideological murder but on the establishment of a jihadist ideological state. Its endurance could have impacted—warped—the image of statehood, along the lines that other historical massacres and oppressions have tarnished the image of traditional geopolitical forms in the eyes of many during the modern age.

SOCIAL MEDIA’S IMPACT ON THE STATE MONOPOLY ON VIOLENCE

Social media can be leveraged in very different ways in connection with violence. Some violent nonstate actors directly use social media to sharpen the effect of their attacks. The jihadist attacks in Mumbai in November 2008 were launched with a considerable degree of coordination that made use both of electronic tactical communications and the leveraging of social media. The gunmen were linked by cell phone to a control room in Karachi, Pakistan (550 miles away). Indian bystanders near the attacks would witness their emergency responders’ actions and tweet what was happening, and since fellow Indians who worried about loved ones caught in the attack were not the only people watching these posts, personnel in the control room gleaned an extraordinary amount of in-the-moment information about the Indian authorities’ response to the attacks. The control room could then reach the gunmen by cell phone, updating them on police activity and ensuring that the attacks could continue even longer.
Frustrated Indian Twitter users, realizing that the attackers were making use of the social media updates, hurried to warn fellow users against posting further information and even speciously claimed that the government had ordered an end to such posts.60 Tools can be used in disparate ways, even in the context of violence, as the November 2015 attacks on Paris help demonstrate. Despite the precedent of the Mumbai attacks, credible reports that the attackers leveraged panicked victims’ tweets as a tool to identify the locations of survivors have not surfaced. However, because the attacks left large numbers of people disoriented and vulnerable, many Parisians responded with the hashtag #PorteOuverte (open door) to identify locations where survivors could safely shelter. One million tweets using this hashtag appeared within a space of ten hours.61 The response to the Mumbai attack of November 2008 and to the Paris attack of November 2015 each saw important information posted onto Twitter, and the difference in results correlates principally with the difference between who was watching and reacting in the two cases.

ISIS was not alone in using social media in connection with fighting in the Middle East. In the battle against ISIS, many of the militias that ostensibly fought on behalf of Iraq were reported to have received extensive support from neighboring Iran. Hezbollah, Iran’s proxy in battles with Israel, had already fought against ISIS within Syria. When an Iraqi militia captured a suspected ISIS fighter in 2016 and launched an Instagram vote about whether to kill or release the prisoner, 75,000 votes were posted globally. A follow-up post uploaded two hours later consisted of a selfie photo of a militiaman with the prisoner lying dead in a pool of blood. The caption, “Thanks for vote,” accompanied the image.62

Other actors, outside ISIS and often beyond Islamist terrorism, have also employed social media in ways that potentially affect the state’s traditional monopoly on violence. Significant cases across Europe and Asia point to how contested uses of social media are complicating ideas about the relationship between states and the violence associated with national security policy. Some analysts point to the use of official statements on social media as an inflection point for tacitly accepted violence. Filipino president Rodrigo Duterte’s rhetoric in opposing drug cartels’ power in his country has been identified as an example. An estimated 12,000 people have died in violence meted out to individuals including drug dealers but also extending to ordinary addicts and reportedly other enemies and targets of the regime’s police.63 Confirming the existence and extent of such dynamics can be extremely challenging, although history suggests that the elimination of identified threats to a society can lead to a snowballing effect in which an array of other grievances and grudges become enmeshed in the selection of victims.
Harnessing popular sentiments can become a dangerous game. Other scholars have identified the restlessness of patriotic hackers as a potential source of unrest. While some states have directed patriotic hackers toward external targets as a means of deflecting their malice away from potentially vulnerable institutions at home, this appears to be only a relatively short-term expedient. Eventually, such hackers grow frustrated that their homeland, which they are permitted or encouraged to believe they are supporting through their online activity, is not more strident in its official policy. One study noted that the Chinese military’s “escalation at the end of 2012 followed popular nationalist protests against both Japan and the C[hinese] C[ommunist] P[arty] party-state, suggesting—but not proving—that an autonomous public was one proximal cause of the Chinese government’s escalation of the dispute.” Analysis indicated that a combination of real-world street demonstrations and social media opinion “played a critical role in escalating” a state’s handling of a territorial dispute.64 Given the state’s strong preference for social harmony, the prospect of such an impact would doubtlessly unsettle officials and policy makers, incentivizing policy shifts meant to restrain the opportunity for activists of any stripe to autonomously shape policy.

The events of the attempted military coup in Turkey announced in July 2016 remain controversial and murky years later. One account described the coup as initially following the contours of a conventional military coup attempt but being foiled because social media was used to derail the unintentional cooperation of many of the plot leaders’ rank-and-file personnel. Soldiers, mobilized to support the coup but speciously told that they were merely undergoing routine training exercises, were reportedly tipped off by bystanders loyal to the regime; these civilian bystanders were able to show ordinary soldiers social media information related to the coup attempt, and upon learning that their “training” might be used to support a revolt, the soldiers ceased cooperating with their commanders and the coup collapsed. Notably, in the wake of the coup, Turkish president Recep Tayyip Erdogan initiated a crackdown on opinion makers. This meant the closure of over 1,000 schools, sixteen television stations, fifty-eight newspapers and magazines, and twenty-nine book publishers as well as the dismissal of 135,000 civil servants. Social media also faced restrictions, despite its apparent use by opponents of the coup.65 In Turkey, social media has been used by the state’s supporters in order to preempt the regime’s overthrow, and the regime in its recognition of the value of social media has moved to tighten its control over a tool already apparently within its grasp.

Social media has been notably involved in the low-intensity conflict smoldering in eastern Ukraine, where pro-Russian separatists and the Ukrainian government have clashed since 2014. As noted in the previous chapter, the Ukrainian side has seen Facebook leveraged so that pages supporting that country can facilitate collections of supplies for the front.66
Both Facebook and its Russian counterpart VKontakte have seen commentary and messaging related to the conflict. The latter platform saw a message posted by a Russian soldier, “All night we were shooting at Ukraine,” go viral.67 Such posts can erode the plausible deniability of third-party state participants hoping to influence the course or outcome of a conflict without being directly recognized as a participant. In the future, even something like the ability to catch a state red-handed could be warped. This easy attribution, for example, could even be used for false-flagging in order to sow doubt about states that are totally uninvolved in a conflict. The use of social media may lend a novelty, virality, and plausibility to such future activities, although false-flag events and manufactured provocations are hardly new conceptually: Nazi Germany began its campaign in Poland with a false-flag “Polish” attack against a German border post in 1939, and Japanese forces manufactured a border incident to justify the invasion of China in 1937.

Nonstate actors’ violence, uploaded or even livestreamed onto social media, repeatedly raises controversy. Drug cartels’ posting of music-video-style images of murdered members of rival gangs has already been mentioned in this book. The April 2017 livestream onto Facebook of an apparently random murder of a seventy-four-year-old on a street in Cleveland, Ohio, caused widespread but temporary shock. Other instances, such as the August 2018 livestream of a gaming tournament in Jacksonville, Florida, happened to record the reaction of gamers to a gunman’s spree that ended in his own death and those of two other people; ubiquitous livestreaming means that violence and even future acts of war might appear for in-the-moment consumption without deliberate intent.68

Some attacks of this variety illustrate the ongoing character of the competition between bad actors and social media moderation. By 2016, Facebook, Microsoft, Twitter, and Google each had begun to use automated techniques for curbing terrorist propaganda appearing through their services. Before the end of 2017, Google announced that 80% of the videos uploaded to YouTube by violent extremists had been automatically identified and removed before even being flagged by a single human user.69 But the race between extremists and monitoring projects is a dynamic one. It is complicated by the fact that the tools meant to identify and remove extremist content must know what to look for, and this process is arguably more like risk assessment than early warning.70
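One reason automated moderation leans toward risk assessment is that much of it matches uploads against fingerprints of content that reviewers have already flagged. The toy sketch below uses an exact SHA-256 comparison purely for illustration; production systems rely on proprietary perceptual hashes that tolerate cropping and re-encoding, which this simplified version does not attempt.

    import hashlib

    # Digests of files previously flagged by reviewers. Here the set holds
    # just the SHA-256 of the bytes b"known-bad-file", for demonstration.
    known_bad = {hashlib.sha256(b"known-bad-file").hexdigest()}

    def screen_upload(data: bytes) -> bool:
        """Return True if the upload matches previously flagged content."""
        return hashlib.sha256(data).hexdigest() in known_bad

    print(screen_upload(b"known-bad-file"))  # True: a known file is caught
    print(screen_upload(b"novel-content"))   # False: new material passes

The limitation is the one just described: such a system can only block what it has already been taught to look for, so genuinely novel content passes through on first upload.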
The gunman who murdered forty-nine people in an anti-Muslim shooting spree in March 2019 in Christchurch, New Zealand, aimed to livestream his attacks. In the immediate wake of the attacks, social media’s reported effectiveness in blocking and removing the content roughly matched the success rates that Google had announced a year and a half earlier with respect to blocking extremist content. Facebook claimed to have removed, within twenty-four hours of the seventeen-minute livestream, 1.5 million videos supportive of the murders. Eighty percent of these videos were actually blocked at the point of upload, but some journalists noted that this means 300,000 videos escaped to be uploaded and discovered before being removed; reports further alluded to additional videos that Facebook had still not identified or removed within days of the attack. Angrier media reports indicated that the original livestream had attracted a quarter of a million views before removal, calling the situation “the greatest indicator of an abject failure on the part of social media companies—Facebook, Google’s YouTube, and Twitter primarily—to properly police the content that they make billions of advertising dollars from annually.”71 Still, other journalists reporting on the Christchurch massacre commented on the task involved in automated identification of questionable images and content. One CNN report cited a scholar specializing in digital forensics as saying that although “20% of the work is to get you to 90% accuracy,” four times as much is involved in closing the gap between 90% accuracy and 99.9%.72

Cyber and national security analyst Paul Rosenzweig noted in 2013 that “our hierarchical decision-making structures remain dominant and operate far too slowly to catch up with the pace of cyber activity,” and that policy-making is unequal to the pace of innovation in the digital realm.73 Seemingly novel uses of social media often seem to reinforce the apparent validity of this conclusion. It is worth noticing, however, that policy lies at the top of the levels of conflict, above strategy, operations, and tactics in descending order. Strategies do not turn smoothly on a dime, and policies should not be expected to do so, either. Policies and strategies need to be robust enough to bear the myriad moment-to-moment changes that inevitably occur in a conflict. Tactics and often operations respond to these dynamic changes. One must consider whether the challenges involved in closing the aforementioned automated accuracy gap from 90% certainty to 99.9% certainty constitute more of a policy issue or a tactical one. A tactical issue is important in the moment, and it may be important in recurring cases. A policy issue carries infinitely more implications and casts a far longer shadow. If delivering further reliability to automated monitoring is a policy issue, it becomes a priority; that does not guarantee that it can be achieved quickly—and it ensures that other questions will arise. The technologies that increase the anticipatory powers of automated moderation will find their way into the hands of others, and those tools will be employed for other purposes. Policy questions—and the espoused answers—about data and prediction will exert massive impact for decades to come.

CHAPTER 7

Data as a Battlespace

DATA, A DIGITAL EXHAUST

Jason Healey has long advanced the concept that the most powerful players on the digital stage are the corporate innovators rather than governments or their militaries. He has gone on to argue that “it’s AT&T and Verizon and Microsoft and Cisco” and other companies that have a power to, for example, “stop a cyberattack in its tracks,” whereas military entities can “watch” and “shoot back, but they can’t stop it in the first sense.”1 Many analysts point out that such an ability is far from the only power held by entities that connect the internet to its users. An even greater power emanates from the value that data has when it is leveraged.

“Data is the exhaust of the information age,” according to Schneier. The devices that an average person uses track and frequently convey considerable amounts of data about the person who purchased and uses them. For example: the laptop computer tells websites about the software and features that are installed and enabled on it; the smartphone does similarly because it is a computer too, and since it maintains utility by tracking which infrastructure is within reach, it also tracks its own (and the user’s) location; computers in smart thermostats and modern cars track behavioral patterns that the user may not even realize are patterns. Metadata such as the time and destination of calls and messages “is great for mapping relationships,” since this information by definition tells whom a person contacts and when, even if it does not convey what is said.2
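That relationship-mapping power can be made concrete in a few lines of code. The sketch below uses invented records (the names and timestamps are illustrative assumptions, not drawn from any source cited here) to show how call logs alone, with no content at all, expose a tie between two people.

    from collections import Counter

    # Invented call metadata: (caller, callee, timestamp). No content.
    call_records = [
        ("alice", "bob",    "2020-01-03T09:12"),
        ("alice", "bob",    "2020-01-04T21:40"),
        ("bob",   "alice",  "2020-01-05T19:30"),
        ("alice", "clinic", "2020-01-06T08:00"),
    ]

    # Count contacts between each unordered pair of parties.
    ties = Counter(frozenset((a, b)) for a, b, _t in call_records)

    for pair, count in ties.most_common():
        print(sorted(pair), count)
    # Prints ['alice', 'bob'] 3 first: the frequency and timing of contact
    # reveal who matters to whom, though nothing said was ever recorded.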
Other researchers and officials have observed this point as well. One Canadian legislator asserted that “we leave digital crumbs in everything that we do,” comparing this to the evidence trails he had used during his earlier career in police work.3 The chief privacy officer for that country’s postal system, while emphasizing that she was not “going to be the big privacy advocate here and be privacy paranoid,” noted that “we’re all data engines,” and that in technologically advanced societies, people’s daily actions contribute “to an awful lot of data that’s being collected and used.”4 Describing an innocent use of Google Maps to search for in-town pizza restaurants, Conti has indicated how data is inadvertently conveyed to search tools like Google, “disclosing—and, hence, strengthening—the link between the search you performed and what you deemed as important in the results.” Queries add up and “eventually, the user will disclose enough information that his or her activities are uniquely identifiable.”5 From a data collection perspective, or from a surveillance standpoint, the portability of devices has been described as revolutionizing the ability to track individuals. Not only does a sufficient amount of digital exhaust go a long way toward making a person identifiable, but portable devices are more likely to travel with the user. If this was the case when comparing the laptop computer or tablet to the desktop, it is infinitely more relevant when comparing the mobile device to the laptop.6

Those most concerned about the privacy implications of this digital exhaust identify a congruence between important trends. Where privacy supporters like Schneier point out that “surveillance is ubiquitous” on the internet, the internet itself is expected to grow increasingly ubiquitous, and the aggregation of disparate data points “may reveal a portrait of the target [he or] she would not share, as a whole, with strangers.”7 Furthermore, compiling and examining these data points lets them congeal into pictures of who people are, and it also invites analysts to draw conclusions about what people desire, expect, and believe. Libicki observes that while conclusions are apt to be drawn, their actual validity is less relevant to their being deemed usable than is their being presumed by the data consumer as correct.8

Even if the data that is tracked is never recorded in error, other factors can contribute to the analysis of data leading to inaccurate conclusions. Momentarily lending a device to another individual, for example, can mean introducing extraneous data points that might be woven among relevant ones. Shopping online for a gift can easily be folded into analytics’ conceptions of a shopper’s personal interests. Googling an unfamiliar or dubious factoid potentially takes on the appearance of a deeper interest in a topic. As search engines anticipate likely queries with suggested search terms, opportunities emerge for users to browse the results of searches diverging from their originally intended term, either out of curiosity or even a misplaced finger on a smartphone screen. Even if some of the latter examples in this list were deemed uncommon, effective analysis would need somehow to compensate for a variety of red herrings. Libicki’s point indicates that such red herrings may instead simply be folded into the mass of data, contributing noise that is to a greater or lesser extent accepted as valid. Analyst Dan Geer’s argument is that “with total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that did happen with your data can you say what did not happen with your data.”9 Attempts to meet this bar would likely trigger long-term changes in what concepts like privacy mean to people.10
USER POLLUTANT OR USER PAY DIRT?

Countering the concerns about inaccuracy are yet more arguments that arguably imply threats to privacy. With enough access to enough data, this outlook asserts, a relatively clear picture emerges despite whatever incidental noise occurs. Singer and Brooking, while not necessarily celebrating the conclusion, note that “the endless churn of content produced on the internet each day provides a limitless pipeline of data with which to train . . . increasingly intelligent machines,” which will “predict, quite accurately, what any one person might want to buy or even whom they might vote for.”11 Joel Brenner, a former senior counsel at the National Security Agency, has noted that “most of what is worth knowing in the world is not classified, and technological convergence will make that axiom truer than ever.”12 In an era of data breaches of increasingly monumental scale, Paul Rosenzweig agrees that leaks will reveal less than “the patient piecing together of open-source materials” in a range of contexts.13 To some extent, this dynamic is not new; as a relatively open society during the Cold War, the United States was regularly more vulnerable to open-source research by its adversaries than was the case for the Soviet Union. This, in fact, was one of the reasons that U.S. policy makers opted to improve surveillance technologies via reconnaissance satellites, even going so far as to impact the country’s posture on the making of space policy.14

In another sense, however, the advent and popularity of social media magnify the impact of a tendency that had been in evidence well before the creation of cyberspace. Social media explicitly invites accountholders to answer prompts such as, “What’s on your mind?” Mountains of data are solicited from users by platforms that are presented as being free to use, but the monetarily free services are made possible by the monetization of the resulting data. Some pundits have gone on to assert that users’ provision of this data should be considered a form of digital labor. Researchers in 2015 reported that, armed with four data points regarding three months of credit card metadata, they could reidentify 90% of individuals. Social media facilitates the haphazard sharing of data and it also enables the swift and widespread dissemination of what has been shared, and this can pose security risks as well as economic vulnerabilities.15
Some analysis indicates that the problems are likely to mount as time progresses. Singer and Brooking have claimed that foreign intelligence operatives saw President Donald Trump’s Twitter activity as a reservoir of data from which to compile intelligence studies; however, since Trump’s Twitter activity began in his sixties, “nearly every future politician or general or soldier or voter will have a much bigger dataset, from much earlier in life” available for analysis.16 Continuous engagement with social media makes much data visible to friends, to followers, and potentially to any and every accountholder, and this is the data to which Singer and Brooking refer. Social media platforms enjoy a vantage point from which to survey, store, and potentially monetize an even greater array of user data. Additionally, Schneier’s point that “when a computer stores your data, there’s always a risk of exposure” seems inescapable.17

Users may feel squeezed between a need or compulsion to utilize the technologies of the information age, on one hand, and a need or desire to shield themselves from the implications of their data being used to exploit them, on the other. Throughout much of the Middle East, privacy is as much a requirement as a desire, and this fact corresponds to the popularity of more discreet networking services such as WhatsApp and Snapchat relative to more openly visible alternatives like Facebook and Twitter.18

Alternatively, the kinds of “noise” described earlier as reducing the reliability of data can be actively embraced. One example is search term “chaffing.” Parallel to its World War II precedent, in which Allied bombers sought to defeat German air defense radar by dropping numerous strips of aluminum foil, chaffing saturates the sensors rather than seeking to starve them. When successful, chaffing confuses sensors until they are unable to distinguish between the decoy and the genuine. Although a clever notion, it is far from foolproof: unless the fake searches closely resemble the genuine ones, the effort will be ineffective; additionally, even if sites are unable to determine which searches are genuine and which are chaff, they can be expected to consider the deliberate consumption of resources and the saturation of extraneous data to be an imposition.

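Privacy tools such as the TrackMeNot browser extension have worked on this principle for years. The sketch below is a simplified illustration of the idea under stated assumptions, not a reconstruction of any particular tool: the decoy vocabulary and the search endpoint are invented, and a serious implementation would draw its chaff from live sources (news headlines, for instance) so that decoys resemble plausible human queries.

    import random
    import time
    import urllib.parse
    import urllib.request

    # Invented decoy phrases; a static list like this is exactly the
    # weakness noted above, since it fails to resemble real queries.
    DECOYS = [
        "weather radar map", "banana bread recipe", "used bikes near me",
        "flight delays today", "movie showtimes", "fix a leaky faucet",
    ]
    SEARCH_URL = "https://search.example.com/?q="  # placeholder endpoint

    def emit_chaff(rounds: int = 5) -> None:
        """Fire decoy queries at irregular intervals to bury genuine ones."""
        for _ in range(rounds):
            url = SEARCH_URL + urllib.parse.quote(random.choice(DECOYS))
            try:
                urllib.request.urlopen(url, timeout=5)
            except OSError:
                pass  # a lost decoy costs nothing
            time.sleep(random.uniform(1, 30))  # avoid a machine-like rhythm

    if __name__ == "__main__":
        emit_chaff()

Even in this sketch the countermeasure’s limits are visible: a fixed vocabulary and a uniform timing distribution are just the regularities a provider could learn to filter out.
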
Conti, while noting that he considers free-to-use tools and services by Google and others beneficial, asserts that data disclosure is most effectively prevented by avoiding its being generated, because “a bulletproof, anonymous web-browsing experience doesn’t exist.”19

Digital trails are the by-product of activity in the digital age. Schneier asserted that “data is the pollution problem of the information age, and protecting privacy is the environmental challenge.”20 There is another sense in which digital exhaust can be understood—not in competition with Schneier’s interpretation but perhaps alongside it. Activity online leaves digital exhaust as constantly as a person’s breathing exudes carbon dioxide. The materials that people breathe out are a vital resource to other entities within the ecosystem—and this is true biologically but also digitally. It might be too much to imagine the entities (corporate and strategic) that utilize data as being the cyberspace counterparts to plants that convert our carbon dioxide exhaust into fresh breathable oxygen. Even a mutually beneficial outcome in an ecosystem stems from the fact that a symbiosis is mutually advantageous, and not all utilizations of data exhaust have a beneficial or even neutral impact on the individuals producing the data. Nonetheless, the ecosystem has grown so that people’s data exhaust is far more valuable to some entities than simply being so much empty air.

DATA—RAW MATERIALS FOR ECONOMICS

Conti observes that, rather than being either evil or altruistic, companies aim to secure a profit, and their policies are structured to support this. To preserve a dynamic ability to maintain profitability in a changing environment, policies (including privacy policies) are continually subject to change.21 A long-standing feature across many online services has been uncharged use, but there is no such thing as a free lunch, and free-to-use services require alternative methods of delivering a revenue stream; sites like Wikipedia may rely on donations, but search engines, email providers, and others monetize data to sell to third parties. These, in turn, use the data for advertising or other purposes. The built-in logic of this system is that collecting more data about a customer facilitates understanding the customer more deeply, and this enables continuing and even increased services and the consequent flow of economically valuable data. As early as the mid-1990s, some analysts noticed that “this private intelligence-gathering gives some people the creeps,” but the model delivered, and it progressed.22

Schneier emphasizes that “these aren’t good or fair bargains,” but they are readily accepted as online users seek free-to-use functionality, and as a result “the overwhelming bulk of surveillance is corporate, and it occurs because we ostensibly agree to it.” This pattern of collective decisions leaves individuals with unenviably narrowed practical options, as illustrated by the fact that significant publicly available discourse occurs on social media platforms or in the comments sections of reports on media outlet websites. However, this publicly readable information resides in legally private spaces, since the companies that provide the services set the policies that govern usage. This implies a connection between private spaces’ policies about publicly accessible speech and those hosts’ ability to translate the resulting data into profitability. The interests of hosts and clients intersect, but they are distinctly not synonymous.23

Unsurprisingly, leading figures among the companies that transform data from free-to-use services into profits express confidence in the utility of their services and dissemble regarding any less positive fallout from the economic leveraging of data. Schmidt, heading Alphabet, has stated, “I argue that Google uses big data to provide valuable services, and if we were to violate your privacy, you would stop using us.”
Sir Martin Sorrell, founder of the world’s largest advertising company, views Google as a “frenemy” and touts the value of predictive intelligence, which would presumably be fed collected data in order to help it learn to predict: “if the ads . . . are contextually right, they will win through. . . . Our view is that the consumer decides.”24 Scholars on the subject nonetheless infer that these decisions are made without full realization of the implications. The data monetization model “is predicated on users relinquishing individual privacy in exchange for free information and software,” opening the door to the leveraging of data by a range of corporate and noncorporate players.25

In addition to companies monetizing data they collect as a result of providing free-to-use services, the flood of data makes other economic leveraging possible. Analysts a decade ago, considering the cybercrime that was already taking place, noted that only a prioritization of cybersecurity efforts (among government and the public as well as across businesses) could help stanch a trend toward ever-greater amounts of cybercrime.26 Collected data is economically valuable—if companies believed otherwise, they would not bother creating free-to-use services to entice clients as a means of collecting the data. Social networking sites entail a range of potential dangers. These include the open-source harvesting of visible posted data for ulterior purposes; a 2010 survey found that over half of U.S. social media users posted “sensitive information that makes them vulnerable to cybercrime.”27 This statistic is appalling, in part because the exposure is so avoidable at the level of the accountholder. It points to the need for increased awareness of cyber hygiene and, more broadly, an understanding of how carelessly shared information can be leveraged for illicit purposes.

Users themselves are by no means the sole source of vulnerability, and the other avenues for illicit data access lie beyond the accountholder’s realistic opportunity to prevent abuse. These include technical methods to lure users into inadvertently allowing malware onto their machines and the hacking of the companies themselves, in order to access the troves of data that they collect in exchange for providing free-to-use services. This data represents mountains of revenue, making it a tempting target for cybercriminals able to access it illicitly. The apparent and reported vulnerability of such data is a cause of outrage. Conti’s warning that “the most effective way to eliminate the problem of web-based information disclosure is to never generate the data in the first place”28 bears repeating and emphasis. When entities decide to collect such data about individuals, the obligation to work actively to secure it is obvious.

Illicit online efforts to leverage data economically extend even to state actors. Former NSA head Michael Hayden emphasized the difference between the organization he once led and the way “every other country conducts espionage for commercial advantage . . . and the poster child of a country conducting espionage for commercial advantage, hands down, is China.”29
Scholars have noted the historic proclivity of China’s state-run entities to spin off into at least ostensibly independent companies. Huawei, frequently cited as a company suspected of involvement in economic espionage and accused of reverse-engineering electronics technologies developed outside China, produced copies of the dominant Cisco network routers used by British Telecom and, by 2011, reportedly accounted for half of British Telecom’s infrastructure in that sector.30 Ongoing U.S. distrust coincided with Huawei’s having “focused on Europe,” producing a dynamic in which U.S. policy makers struggled (to date with mixed results) to gain cooperation from traditional European partners in dealing with the Chinese company.31 Where the distinction between strategic intelligence and economically lucrative information is fuzzy, various forms of espionage are likely to become intertwined. China is suggested to illustrate this dynamic.32

As described in Chapter 2, Libicki has discounted the impact of such activities, asserting that “no reliable damage estimate is likely to be precise,” since the extent of economic espionage cannot be known and the effect it exerts is shaped by several factors. Klimburg argues that “China seemed to be collecting a huge amount of data that had little or no value and targeting comparatively little data that had great value.”33 Questioning the efficacy of espionage seems a peculiar way to dismiss its relevance; data may be more useful than is assumed, or it may be used in different ways than is assumed. However, Klimburg also argues that Chinese espionage diminished precipitously in the wake of discussions between then U.S. president Barack Obama and Chinese president Xi Jinping. Chinese authorities even reportedly cracked down on some of the hackers operating within the country and identified by the United States, although, surprisingly, Chinese prosecution has cited the massive OPM hack (state espionage impacting at least twenty-two million individuals working for the federal government or as contractors) rather than the illegal and also extensive economically oriented espionage that U.S. officials had associated with the hackers.

The confluence of economic and strategic leveraging of data also appears in the digital tools developed and sold to a range of countries that may desire cyber-monitoring capabilities without being able to develop the appropriate technologies in-country. Such customers are reported to direct these technologies at their own populations, as when “dissidents in Bahrain were shown transcripts of their private e-mail and chat sessions, collected by the government with tools provided by Nokia and Siemens.” Bruce Schneier points to several companies that export monitoring software to countries across Europe, Asia and the Middle East, the Pacific, and Africa.34

DATA—RAW MATERIALS FOR STRATEGIC PURPOSES

In the twilight of the 20th century, scholars already noticed that “the media and commercial information sources are becoming the ‘poor man’s intelligence service’” because some of the information suddenly becoming available to anyone with internet access, such as high-resolution Earth images taken from satellites, had been technologically possible for less than half a century and had then been the exclusive preserve of intelligence personnel and policy makers belonging to the two Cold War superpowers, and only to those with the highest of security clearances.35
Google Earth and Google Maps put the kinds of intelligence that only presidents and top strategic analysts had once possessed into anyone’s hands, and online tools now provide this with stunning facility and user-friendliness. Deep reluctance about widening accessibility to intelligence data had dissuaded Markus Wolf, the head of East Germany’s Stasi intelligence and secret police entity, from allowing its files to be transferred from paper cards to a more storable and sortable electronic alternative.36 Once accessibility becomes possible, controlling access becomes far more challenging. The strategic leveraging of data derived from social media illustrates this phenomenon.

Open-source intelligence long predates the internet, let alone social media. The latter technologies vastly amplify the opportunities for deriving entirely unclassified but potentially important geopolitical information, such as when one country wants to know how the public in a second state appraises a third.37 Information about such questions may or may not be of specific political utility, and there may or may not be interest within the second state in obscuring this information from visibility; to some degree, however, the opportunity to observe such information exists independent of any intent to prevent its disclosure. Singer and Brooking even assert that “social media has rendered secrets of any consequence essentially impossible to keep.”38 They explain:

Consider that in preparation for D-Day in June 1944, the Allies amassed 2 million soldiers and tens of thousands of tanks, cannons, jeeps, trucks, and airplanes in the British Isles. Although German intelligence knew that the Allied forces were there, they never figured out where or when they would strike. That information came only when the first Americans stormed Utah Beach. Today, a single soldier’s or local civilian’s Facebook account would be enough to give away the whole gambit. Indeed, even their digital silence might be enough to give it away, since a gap in the otherwise all-encompassing social media fabric would be conspicuous.39

Careless posts on social media have already enabled important forensic work online that has impacted geopolitical debate beyond the digital realm. Social media posts in connection with the fighting in eastern Ukraine offer a test case. Ostensibly, the separatist forces fighting to break these regions away from the rest of Ukraine did so essentially independently of any foreign support.
Assertions of Russia’s quiet support of these groups have met various responses apparently intended to reject outright, or otherwise blunt, significant conclusions about Russian state support for the pro-Russian paramilitary groups. Researchers have identified social media use by Ukrainian soldiers, paramilitaries, and Russian personnel. Posts have included a number of photos and video clips uploaded by personnel. Some of these posts display information that can help disprove fictitious claims made by various parties. On occasion, both Ukrainian forces and paramilitaries have made implausible claims about simultaneously wresting the same town from the possession of the other. A key issue in the war involves Russian support for the paramilitaries, and international attention to this topic grew in the wake of the shooting down of Malaysia Airlines Flight MH17 in July 2014. When international experts pointed to the use of an advanced antiaircraft missile, a flurry of denials and accusations emanated from Russia touting a number of alternative theories, including that Ukraine had somehow intercepted the passenger jet using one of its own fighter jets or surface-to-air missiles (SAMs). Careful analysis of images and video clips posted onto social media refuted Russian claims. Evidence reportedly included footage of a Russian vehicle carrying SAMs through the streets of a village identified by volunteer analysts as Snizhne, in the part of eastern Ukraine controlled by separatists; another video, disclosed by Ukrainian authorities, showed the same vehicle (in the hours immediately following the shooting down of MH17) missing one missile. Russia’s Ministry of Defense quickly but vainly questioned the geolocation of these images. Patrikarakos notes that “social media broadcasts and amplif[ies] messages—and in so doing it also creates data.”40 Photos inevitably include details such as signs, land features, and even shadows. Even blurry images can be examined for conceivably useful clues about location and time. One of the key ingredients in this forensics coup was the images themselves, many of which were made available online by people who had no idea that they depicted embarrassing clues. Had they been aware that their pictures carried clues that would embarrass their side in the conflict, they would not have guessed that these clues could be aggregated into a meaningful, let alone conclusive, body of evidence, as happened with the downing of MH17.

A second notable factor in the transformation of isolated clues and data points into a convincing picture of how the passenger jet was destroyed involved crowdsourcing. The analysts who cracked the secrets of the shooting down included a number of people who were interested in finding the truth, who were fascinated by the details of obscure topics, and who took satisfaction in solving puzzles. But they were not professionally trained weapons experts or defense analysts. On one hand, this points potentially to an element of democratization in conducting at least some forensics of open-source information.
On the other hand, the fact that the sleuths were a crowdsourced band of patient and insightful amateurs became the foundation of the next wave of Russian denials. Semiofficial outlets like Sputnik and RT insistently pointed to the lack of professional training in military topics as evidence that the crowdsourced analysis was unreliable, implying that belief or dissemination of the crowdsourced findings was tainted or calculated.41 These events validate the perils Singer and Brooking illustrate in their allusion to D-Day operational security. The examples furthermore point to the power of crowdsourcing in intelligence scenarios—provided that the objective is the breaking rather than the maintenance of a secret and provided that the entities defending secrets neglect to chaff a crowdsourced forensics effort with misinformation or distractions that derail the analysis.

Online images can often tell far more than what is visible in their backgrounds. If terrain features and shadows can betray information to attentive viewers about the location and timing of a photo, other factors such as geotagged data can provide information just as definitive and precise. Unwary Ukrainian soldiers arriving at the front reportedly allowed their phones to geolocate them for Russian intelligence. Text messages were fired off to Ukrainian troops telling them that their bodies would be found when the snows melted, while hostile artillery fired kinetic messages at the same personnel. Russian commanders were reportedly unaware that many of the troops under their command were similarly posting comments and images that disclosed important information about their own locations as well. Junaid Hussain, an important figure in ISIS, is believed to have been tracked and killed in a similar manner through geolocation made possible by incautious use of social networking tools.42

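The mechanics behind such geolocation are often mundane. Unless the feature is disabled, most smartphones embed GPS coordinates in a photo’s EXIF metadata, and anyone who obtains the file can read them back out. A minimal sketch using Python’s Pillow imaging library follows; the filename and the sample output are illustrative.

    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    def extract_gps(path: str) -> dict:
        """Return any GPS fields embedded in an image's EXIF metadata."""
        exif = Image.open(path).getexif()
        gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
        return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(extract_gps("frontline_photo.jpg"))
    # e.g. {'GPSLatitudeRef': 'N', 'GPSLatitude': (48.0, 1.0, 26.3), ...}

Stripping such metadata before posting, or disabling geotagging outright, closes this particular channel, although, as the Ukrainian episodes show, the visible content of an image can still give a position away.
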
While new tools have come into the hands of states and individuals alike, others remain in the hands of states and other digitally powerful entities. Geopolitical and other security threats form a backdrop against which powerful analytical tools are deemed necessary. These trends provoke unambiguous concern among many scholars.43 In addition to the reported sale by software companies of monitoring tools to states whose desire for monitoring capabilities exceeds their indigenous ability to create tailored monitoring suites, access to the data of popular companies, including social media platforms, can be purchased by front entities on behalf of actors wishing to remain hidden. Perhaps this concern partly explains the spike of vocal concern about Cambridge Analytica’s activities. The data it compiled, in connection with political research for an electoral campaign, could be converted into conclusions about accountholders in order to facilitate microtargeting of political messages.44 Microtargeting is already used in commercial advertising and politics, although its use, particularly in the latter arena, inspires misgivings from privacy advocates and people concerned about the possible impact on democracy. When political and commercial speech are treated similarly, and when strategic, political, and even economic leveraging of data can resemble each other, this blurring poses security threats in the eyes of many people.

A DISCUSSION ON RANSOMWARE

Data can be leveraged by being denied as well as through its utilization. One of the seemingly inexorable trends throughout most of the 2010s was the rise of ransomware, in which the attacker locks an operator out of the ability to access his or her own data. Typically, ransomware attacks include a deadline by which the target is required to pay a ransom digitally or face the prospect of data being permanently locked or deleted. Notably, ransom payments were often demanded in the cryptocurrency Bitcoin to obstruct law-enforcement efforts to trace the flow of ransom payments after they left the accounts of targets.

As a topic, ransomware is enmeshed in conflicting data that point analysts toward often opposing conclusions. Paying ransom effectively rewards bad behavior, and widespread conventional wisdom holds that in the long run, paying off ransomware attackers rewards and encourages similar attacks. Different sources disagree about the degree to which ransomed data can be expected to be restored. Some reports advise that “those that do pay up often find their files remain encrypted,” while other experts observe that deleting ransomed data after payment is a poor practice on the part of attackers because it reduces the likelihood that future victims will comply with demands.45 Even basic statistics such as the average losses suffered by a ransomware target can vary dramatically depending on the source. Symantec has reported that 2018 saw the first decline in ransomware since 2013, although the decline in overall attacks came as a result of dramatic falloffs in ransomware against end users, while ransomware against enterprises appeared to continue its slow increase.46

Different forms of malicious online activity can be difficult to distinguish. Even for an alert defender watching cyberattacks occur in the moment, the reconnaissance preliminary to an attack that manipulates or deletes data will match the actions of hackers conducting espionage. Similarly, criminal activity and politically motivated criminal activity can be extremely challenging to distinguish; they can use the same tools and can even involve the same cyberattack experts, depending on the program of the state or nonstate entity involved. The emergence of methods such as “Ransomware-as-a-Service” (RaaS) sharpens this point. RaaS entails the creation of ransomware tools that are franchised out by criminals to other criminals, with the former enjoying a portion of the revenue extorted through the attacks of the latter.47

Analysts have connected the NotPetya ransomware attacks of mid-2017 to Russian intelligence. The attacks reportedly disabled between 10% and 30% of the computers in Ukraine, inflicting damage equivalent to 0.5% of the country’s GDP. By February 2018, forensic experts found that the attackers had spent months gathering intelligence on their targets and that the attacks had been made possible by the fact that a “clunky” piece of accounting software remained in widespread use across many Ukrainian businesses despite not having been supported by its maker for the previous four years.48 March 2018 saw ransomware attacks targeting government institutions in Atlanta, Georgia; these attacks, one example of a panoply of ransomware events targeting municipal agencies around the United States and beyond, cost more than $2.5 million in recovery and were traced in November 2018 to Iran.49

Third-party repurposing of malware forms yet another source of concern regarding ransomware. After the discovery of the Stuxnet malware in 2010 and research leading the forensics community to conclude that it was developed to target Iranian nuclear enrichment technologies, some authors on cybersecurity topics vocalized concern that the extremely sophisticated Stuxnet malware would be reverse-engineered and repurposed for use by criminals, particularly for launching ransomware. One author wisely noted that “subsequent attacks might not be as carefully crafted or controlled” as had been the case when the original designers had worked to ensure that only the intended target was impacted by the presence of the code.50 Conclusively derivative ransomware harking back to Stuxnet has yet to be reported in the past decade, however. In contrast, the WannaCry ransomware attacks that struck in May 2017 involved “random” targeting that impacted a reported quarter of a million computers, including many used by the National Health Service’s facilities in England and Scotland (although apparently not in Wales or Northern Ireland). Nearly four dozen British hospitals were impacted, as were other entities worldwide, including at least two medical facilities in Indonesia. The WannaCry attacks were linked to North Korea, which already had a history of using online criminal activity to shore up economic support for the expensive and ambitious military development projects undertaken by one of the poorest states on the planet. Numerous reports asserted that WannaCry was created using illicitly disclosed information as a technical foundation.51

One theorized factor explaining the apparent eclipse of ransomware against end users was the lucrative opportunity involved in cryptomining, also called cryptojacking. Cryptomining is the illicit use of other persons’ devices for computational purposes meant to yield the “mining” of cryptocurrencies. In a real sense, cryptomining combines the time theft that occurred in some of the first illicit computer crimes, dating back before the advent of the modern internet, with the establishment of a botnet.
Victims whose devices are utilized for cryptomining are unlikely to recognize that their machines are being used illicitly, because obvious signals of misuse would endanger the attacker’s ability to continue the operation. Some attacks reportedly included tools for scanning a victim’s computer in order to determine whether a relatively quick “hit” could be made through ransomware; if ransomware seemed an unpropitious avenue, the malware was to set the victim device up as part of a cryptomining operation. The year 2018 saw considerable fluctuation in the traded value of cryptocurrencies, and instability and drops in the price of assets like Bitcoin raise questions about the future of cryptomining and its impact on other illicit activities such as ransomware.

Schneier warns that ransomware is far from a passing fad and that it can be expected to increase dramatically in scale and in potential lethality. WannaCry’s impact on hospitals forced delays in providing medical treatment to many noncritical cases so that resources could be concentrated to preserve patients’ lives and hospital effectiveness. The apparently random array of targets has not led to conclusions that WannaCry was launched with the intent of threatening lives. Schneier observes that the introduction of computerization (especially internet-connected computerization) to an increasing array of tools and infrastructures on which people’s lives depend will mean much greater risk of cyberattacks demanding ransom payments under the threat of lethal direct or indirect effects. He frankly expects ransomware to occur against automobiles, including with human beings trapped in vehicles: “it will happen. . . . we just hope it won’t happen at speed.”52

CONCEPTUALIZING AN INFORMATION OPERATIONS BATTLESPACE

Mark Twain is credited with the idea that there are no new ideas but simply “new combinations indefinitely” of existing pieces, like the glass shards in a kaleidoscope.53 The novelty of social media can be seen as reassembling a fresh opportunity for the old pattern of influencing others’ ideas through the strategic propagation of suggestive messages. The social media giants appear to have attained a permanent state of dominance over internet usage and discourse around the world. That dominance and impact are likely less permanent than they appear in the first quarter of the 21st century, which implies both that the current array of challenges will adjust and recede in the face of others in the future and that traits should be identified in the weaponization of social media so that they can be recognized later as part of a pattern when they recur.

The emergence and importance of the digital realm do not mean that physical domains stop mattering. Nor do examples of struggle within the digital realm mean that conflict in the tangible domains will stop or cease to matter. The investment of time and human energy in the digital domain does mean, however, that the implications of conflict here cannot be ignored, because they are already occurring. Many people are engaged online to the extent that although they necessarily remain in the tangible domains, they have also partly migrated into digital arenas. Disengagement from the interconnected environment does not solve the problems manifested online, partly because divesting of the internet would not insulate societies or states from the turmoil it can host. Yannakogeorgos aptly noticed in 2014 that “the potential for psyber warfare” had been seriously overlooked and that “in the near-term, this is the more likely use of cyber to create physical effects.”54

Adversaries’ mobilization of cyberspace for psychological warfare or information operations has sparked periodic concern. Authors pointed out doctrinal gaps overlooking the dangers of information operations via cyberspace at least as early as 2007, and analyses appearing a decade later highlight the problems springing from public discourse being seeded with information operations messaging through social media.55 The use of novel technologies gives a new lease on life to well-worn methods for sowing discord and uncertainty in a targeted state’s homeland or across an international community. Approaches that might once have been quickly and widely identified as manipulative can proceed unquestioned for a long time when they carry a new guise.56

Some of the figures who have noticed that “the battle space for the center is social media”57 also argue that the United States needs a rebooted manifestation of the U.S. Information Agency (USIA), an entity established in August 1953 under President Dwight Eisenhower at a time when the Cold War was relatively new and the superpowers were racing simultaneously for arms advantages through thermonuclear weapons and for messaging superiority, especially in the uncommitted “Third World” then emerging from the status of imperial possession by European states that had been damaged by World War II. The USIA was closed in October 1999, as U.S. policy makers celebrated the first decade of the post–Cold War world and immediately after Russia’s president tapped a former intelligence officer to become the future leader of that country.

The history of representative states’ establishing information offices to contest the information environment overseas includes some controversy, including in connection with concerns that some of the materials distributed by such an office might find incidental domestic audiences and inadvertently constitute propagandizing in a technical sense. Such controversy, coupled with the perception by the late 1990s that such an office was unnecessary, proved crucial in the closure of the original USIA. Advocates of a rebooted organization capable of contesting online information spaces would necessarily have to address issues of domestic exposure to the satisfaction of policy makers.
After all, if “online battles are no longer just the stuff of science fiction or wonky think tank reports, but an integral part of global conflict,” as Singer and Brooking (who do not advocate a rejuvenated USIA) observe, the undeniable fact that U.S. information interactors constitute a disproportionate segment of the world’s online population means that information activities would likely be encountered by Americans.58

Although the definition of trolling is inherently subjective, Singer and Brooking assert that individuals who have trolled at least once are twice as likely to troll again as people who have never done so.59 Trolling might be thought of as a sort of gateway, in the sense that those initiated, either as trolls or as people who have engaged with them, adopt similar and repeated practices. Intuitively, this tendency will feed the growth of social media and networking platforms as an information operations battlespace.

TENETS IN INFORMATION OPERATIONS BATTLESPACE

Two observations seem to characterize the main thrust of the notion that social media makes a crucial information operations battlespace: “In the coming age, conflict will be waged more tha[n] ever for the minds of people,” and “pure ‘Cyberwar’ is dwarfed in its possible results when compared to clever concerted SOFTWAR and Perception Management.”60 Intriguingly, however, these statements were made shortly prior to the popularization of social media. They appeared at the same time that the term “Web 2.0” was first coined, in a prediction that social media platforms would transform the way people engaged with the internet. Although the full context of the latter of these two quotations emphasizes the impact of television as a mode of perception management relative to the impact of technical malware activities, both statements reflect the focus and concern evident today regarding a political entity’s ability to shape perceptions in a target state. Social media has made both exposure and participation in such activities far more probable, since, “unlike major weapons systems, anyone can use a computer”61 or connected device.

Despite the possible (but arguably fading) temptation to dismiss the potential of social-media-vectored information operations, and despite the ongoing relevance of battlefield realities in the physical domains (sometimes overlooked by the most zealous students of social media’s strategic effects), the impact of social media must be recognized and set in context. To the extent that social media is available and used by a targeted society, it is liable to constitute an ideal battleground. Three interlocking factors sharpen the effect of such a campaign: simplicity, to cope with shortening attention spans; resonance with frameworks that are familiar and evocative in the target’s mind; and sufficient novelty that the new message builds and bends the existing framework into resembling the outlook intended by the side launching the operation.62 Fortunately, these themes constitute a pattern that makes a double-edged sword.
They can potentially help people living in a targeted state identify clues that their society is being targeted. This pattern is understood by adversaries, as when “a single image of an ISIS fighter posing with a jar of Nutella” precipitated a flurry of derivative news articles.63

Propaganda is most impactful when its message insinuates itself into the minds of more than the people who are already the most sympathetic to the propagandists’ agenda. Normalizing new ideas in the outlook of people who would reject the overt form of the propagandists’ agenda marks a considerable victory for a propagandist.64 Propaganda defeats of this kind create the equivalent of wood rot in a defending state’s ability to retain enough national consensus to maintain coherent policies. This is why the apparent goal of disinformation campaigns is “to contaminate the information space with many . . . versions [of reality], some of them conflicting, to confuse the audience and erode its ability to think critically” until “no news source or narrative can be trusted.”65 This explains why, for example, propagandists striving to obscure the facts of the shooting down of MH17 used Russian media outlets to release an array of fabricated and sometimes conflicting “theories,” which were then injected more deeply into global discourse through the social media activity of trolls.66

Social media was not created with the purpose of becoming an information operations battlespace, but its form lends itself to that function. Failing to engage in the social media space means ceding it to the enemy, just as declining to give battle when encountering an opponent has meant ceding terrain for centuries of military history. But social media is built around novelty and currency: presence cannot be static, because static presence ceases to retain currency. When old posts drop off the newsfeed, they essentially fall off a cliff and (generally) disappear. When algorithms favor new statements and cannot distinguish fact from fiction, an old truth becomes obsolete and invisible in the face of a fresh lie. Freshening the lie requires particularly little effort if its dissonance relative to other lies poses no problem for a propagandist. The result is a scenario in which the social media space is a beast that must be constantly fed new posts, new images, and new links, if only to hold ground in terms of apparent relevance and currency. This, and the preference for materials that feel authentic, puts a premium on getting “raw data (even if it’s dishonest data) out of the battlefield and into the information sphere.”67

Just as the strategic use of information did not originate with the rise of social media, it threatens more than just the world’s more democratic or representative states. States that strongly argue for the concept of sovereignty extending into cyberspace frequently do so out of concern that foreign enemies are seeding the domestic public with dangerous ideas. Russia and China have spent two decades promoting the case for sovereignty to embody a larger role in issues relating to internet governance.
Restrictive views about domestic political participation seem to correlate with the most vocal pro-sovereignty arguments, and these, in turn, coincide with the most expansive concepts of which foreign ideas represent dangers. The logical but complicated upshot is that some of the states that do the most through social media avenues to foster confusion among their global neighbors are also the most suspicious of those neighbors’ populations interacting with the home populations of sovereignty-advocate states.

Together, these tenets describe how adversaries use social media as an information battlespace. Narratives must be tailored to achieve simplicity, resonance, and novelty in order to capture attention and sustain belief and propagation. The global scope of the battlespace ensures constant surges of other data, including vast amounts totally unrelated to information operations. Battling for minds necessarily occurs in the midst of myriad other dialogs. This continual churn of data pushes all data (including information operations) toward the bottom of newsfeeds, requiring a cyclical refreshing of campaigns. And very different perspectives among policy makers in different countries create divergent ideas about what dialog represents a danger and what does not.

STRUGGLE IN AN INFORMATION OPERATIONS BATTLESPACE

Whether surreptitiously seeding discourse in other countries or brashly projecting posts and images that celebrate the violence of a renegade geopolitical power, weaponized use of social media impacts security affairs in the 21st century. Some researchers, taking note of this conflict and of the participation by paid employees on either side of such operations, have advanced the idea that the paid troll “sockpuppets” and the subcontracted posting moderators working on behalf of social media giants are “mirrors of each other.”68 This may be true in a limited sense, given the grueling strictures and the fact that both sides are ensconced in graphic and surreal materials. A vital difference, of course, is that trolls project these deleterious materials, while subcontracted moderators are employed for the specific purpose of arbitrating among flagged materials and determining which are inappropriate and must be removed. It is true that a target country may see as a troll what a host country considers a patriotic actor in cyberspace, and one country may deem a platform moderator to be what another considers a censor. Nonetheless, the troll is defined by the act of adding content and the moderator by the act of arbitrating and removing content; even different perspectives about what constitutes propaganda (or hate speech or smut) do not alter the fact that trolls infuse dialog with material and moderators remove material. The meaning that this distinction conveys will likely adjust over time and across different contexts.

Participation in social media’s information battlespace is multidimensional. Anonymous, which has variously “declared war” on a range of institutions including religious cliques, corporations, and nation-states, meted out pulses of harassment against ISIS as well. One wave of activity followed the attack by jihadist gunmen against the ribald French satirical magazine Charlie Hebdo in January 2015, and another followed the much deadlier ISIS-affiliated shootings and bombings that killed 130 people in Paris that November. Participants, such as those who posted in support of “ISIS Trolling Day” the following month or those who viewed or shared such posts ridiculing ISIS, were unlikely to take other actions such as volunteering for the armed forces in hopes of eventually being sent into combat against ISIS.69 Even before ISIS Trolling Day, the jihadist #amessagefromISIStoUS in 2014 was swiftly hijacked away from ISIS accountholders by people antithetical to the murderous quasi-state. After control of it was wrested away from ISIS through concerted anti-jihadist hashtag hijacking, a parody #amessagefromUStoISIS emerged, alongside the satirical #AskIslamicState launched by a comedian in the United Kingdom. One analyst concluded from these events that “not even barbaric murderers are immune to trolling.”70

Indeed, no one, from murderous nonstate entities to individual notables to corporations to the most powerful nation-states, is immune to the weaponization of social media. Yet the tea leaves had pointed in this direction at least as early as the 2006 suicide of a teenager marginalized by cyberbullying. The only ingredients apparently necessary for trolling to be weaponized successfully against a target are that the target can be identified, that the target has a reputation or standing that can be assailed, and that the assaults can garner sufficient virality and traction to contest or dominate the conversation. Arguably, the more considerable the reputation of the target, the lower the bar for launching an attack, given the contrast between the reputation to be protected and the accusations flung in its direction. If the offense is ever more powerful than the defense in a digitized conflict, this may be one of the most noteworthy—but not yet fully understood—aspects in which it is the case.

Singer and Brooking argue that the tilting of conflict toward the internet “renders everyone a potential online combatant.” After all, not only does posting or sharing material propagate it to others; even the act of viewing material contributes to the algorithm-based standing that posted material possesses. Even pure information consumers can be interpreted as impacting the spread of weaponized information.
Furthermore, since “social media has erased the distinction between citizen, journalist, activist, and resistance fighter” and created a dynamic in which a person “can play them all at once,”71 the explicit message that Singer and Brooking present is that shooting viral invectives into cyberspace is only slightly different from firing a rifle on the battlefield. However, even anecdotal conversations with personnel serving in uniform do not convey much agreement with such notions, and for good reason. Patrikarakos’s study of the leveraging of social media for war shows that combatants, even those directly supported by citizens using social media to organize supplies and deliver them to troops near the front lines, see such actors as supporting the war, not participating in it. Nor should self-styled keyboard warriors who dominate Twitter, YouTube, Facebook, VKontakte, or other platforms necessarily rush headlong to claim combatant status. Being a combatant confers rights but also obligations on the battlefield. Furthermore, the industrialization and mobilization regimens that increased the involvement of populations from the late 19th century helped inspire and justify the concepts of “total war” that took hold in the 20th century and laid the intellectual foundation for the deliberate aerial bombing of urban centers, blockades meant to induce widespread starvation, and even the commission of genocide against population subgroups deemed innately hostile to a regime. Figures who tout the idea that anyone carrying a smartphone is a “cyber combatant” urgently need to consider circumspectly what ramifications might emerge when others, including adversaries, latch onto the concept and take it to heart.

This does not mean that launching information operations through social media cannot have deleterious effects. Analysts have concluded that even “relatively small numbers of Twitter users” can leverage techniques and bot accounts to convey much larger strength online and imply physical domain power through online messaging.72 Crowdsourced forensics can overcome misinformation related to combat events. Psychological operations aimed at eroding the political confidence and reliability of military personnel are reported to be a particular sore spot of concern in some militaries.73 Disinformation campaigns’ psychological effects can obstruct consensus and foster uncertainty in ways that help shape the physical battlefield.

Imitation is said to be the sincerest form of flattery, and leaders in several countries are reported to be voting with their actions by modeling “bot networks and cyber warriors similar to Russian trolls.” This list is said to include Iran, Turkey, and Venezuela.74 In light of the first regime’s discomfort with social-media-savvy pro-democracy advocates in 2009, the second regime’s concern with the apparent 2016 coup attempt, and the economically induced unrest in the third country, this trio forms an interesting set for further investigation elsewhere. Meanwhile, Russia remains the most globally prominent actor in the realm, and a senior-level advisor to Russian president Vladimir Putin was quoted at the 2016 Infoforum conference telling an audience:

You think we are living in 2016. No, we are living in 1948. And do you know why? Because in 1949, the Soviet Union had its first atomic bomb test. And if until that moment, the Soviet Union was trying to reach [an] agreement with [President Harry] Truman . . . in 1949 everything changed and they started talking to us on an equal footing. . . . I’m warning you: We are at the verge of having “something” in the information area, which will allow us to talk to the Americans as equals.75

The allusion to nuclear weapons appears to be a red herring. Even dire impact from disinformation campaigns cannot independently conjure up the destructiveness and chaos inflicted through nuclear arms. A second aspect merits further note: the tone presumes that in weaponizing information, the steering wheel belongs in the hands of policy makers, even if the keyboards are set before trolls, patriotic hackers, hacktivists, or other citizens.

CHAPTER 8

A Postmodern or a Premodern Future?

POSTMODERN DICTATORSHIP

Russia’s Vladimir Putin represents the prototype “postmodern dictator” in the eyes of journalist David Patrikarakos because of Putin’s ardent use of the internet to promote the apparent interests of his country’s regime. Pertinent literature points to the conclusion that this “postmodern” embrace of electronically enabled connectedness is a reaction to regime fears about the decentralizing tendencies otherwise inherent in the internet. It seems ironic that Vladimir Putin was quoted as complaining in 2016 “that the Internet makes it ‘easy to hide, be rude, insult others, [and] take extreme positions.’” In a real sense, however, the features described as postmodern dictatorship respond to the dangers that autocrats perceive in the way connectivity reshapes the landscapes they want to continue to dominate. Thus, the “distance, decentralization, and interaction”1 in which Rid placed such high hopes for nonviolent subversion overturning despotism instead provide a more complicated and contested environment.

Russian authors Andrei Soldatov and Irina Borogan have documented centralized responses to these trends predating Putin’s time in office. The telecommunications monitoring project known as SORM, initiated in 1994, quickly grew to include the internet. A government-directed “school of bloggers” was opened in mid-2009, and the Russian Internet Research Agency began posting Russian-language tweets before the end of the year. The country’s remaining independent journalistic outlets and its election monitor entity alike would be subjected to DDoS attacks coinciding with the 2011 election.
Sometimes flaws in regime-crafted tools for blocking blacklisted sites could be exploited, so that blacklists could reportedly be reverse-engineered to obstruct access to regime-approved outlets, and sometimes monitoring that was meant to intimidate instead inspired outrage.2 Missteps and mishaps do not alter the conclusion, drawn by analysts on both sides of the frontier, that “throughout the 2000s and 2010s Russia sought to establish itself as a ‘great cyber power.’”3 Researchers in the European Union assert that Russia’s externally oriented efforts stem from a tendency to interpret both domestic and international events as “threats to internal political stability” and therefore as existentially imminent problems. Continuity, borne of “a deeper dread of political subversion” resulting from a freer flow of information and opinion, is evident between the way that Russia handles social media and the attitudes of Soviet policy makers with regard to mass media and information operations. These researchers suggest that this promotes a “tactical rather than strategic” approach that ensures unpredictability but at the price of diminished strategic coherence.4 If this view is accurate, it means that uncertainty and fear trigger actions meant to export similar instabilities in order to reduce the homeland’s presumed relative vulnerability. Data released by Twitter indicated that more than one-third of the Twitter activity undertaken by the Russian Internet Research Agency appeared in English, reflecting a substantial foreign-influence component alongside its domestic influencing purposes.5

More open societies have demonstrated mixed appraisals of the viability of social-media-induced destabilization as a strategic tool. Speaking to other methods such as technical cyberwar, electronic warfare, and proxy attacks, Libicki suggested in 2012 that “non-obvious warfare is . . . a far better fit for authoritarian” regimes than for democracies. Mandel, reflecting a focus on how avenues such as social media relate to technical cyberwar, agreed that “authoritarian states seem more proficient than democratic states are at using cyber deception to promote cyberattack outcome confusion.”6 Others have suggested that “cyber is a cheap way of engaging” targets while undertaking a Sisyphean task like reconstructing a broken empire, and that the democratic structures under psychological assault are not “newly fragile,” but that threats to democracy are “an old problem with new techniques, and so we need new responses.”7 Although these outlooks do not prescribe solutions in and of themselves, problems grow less insoluble when a target ends the panic that imposes a sense of hapless victimhood.

Russian responses to forensics-based accusations of election meddling have included denials sometimes accompanied by wild counteraccusations, such as arguing that evidence of meddling is instead a sign of “a new era of information warfare against Russia.” Reported statements by a Putin confidant dubbed Russia’s “chief troll,” blithely referring to having won some foreign election efforts while losing others, belie assertions of outraged innocence.8
Analysts have warned that “Russia resorts to trolling primarily in response to its stigmatisation [sic] by the West,” indicating a cycle of recrimination and subversion that resists easy solution.9 Manufacturing and hijacking trends are enticing because actually anticipating them in advance can be extremely difficult. Reports that two dozen countries have initiated censorship patterned on the Russian model may come to be confirmed, disproven, or partially validated as other distinguishable models of censorship emerge from them. Likewise, the significance of Bahrain’s imposing an “internet curfew” and of the deliberate localized break in connectivity in the Rohtak district of democratic India, both responses to unrest during 2016, resists clear understanding even as it raises discomforting prospects.10 News that Putin had called for Chinese censorship experts to teach Russian engineers seems to promise bad news for anyone posting or viewing materials inimical to regime preferences. Its further implications are more shrouded from view, however.

Analysts have pointed out that Chinese and Russian cyber activities vary in important ways. For years, China had earned ill repute for diverse espionage efforts that massively outstripped those of Russia in scale, although this mismatch has potentially shrunk according to recent studies regarding APTs. Chinese censorship, overtly oriented toward the maintenance of social “harmony,” has not been associated with campaigns intended to influence elections overseas, in stark contrast to what researchers conclude about Russian activity. Chinese authorities reportedly interpret “spontaneous online movements,” sometimes even ones that are sympathetic to and supportive of government and party positions, as a source of danger because they represent independent nodes of popular mobilization.11 It is unclear whether Chinese tools could simply be adopted elsewhere to the same effect, lacking the doctrinal and cultural contexts that had led to their development. Military history includes many examples of weapons technologies being adopted by different nations but being used in different ways and to different effect, based on the varying circumstances, strategic contexts, and doctrinal precepts of different countries and their militaries. Why would this pattern not also apply to monitoring and informational control tools that relate to the weaponization of online information?

THE RENAISSANCE OF INFORMATION

Legal scholar Yochai Benkler observed in 2006 that “it seems passé today to speak of ‘the Internet revolution.’ In some academic circles, it is positively naïve. But it should not be. The change brought about by the networked information environment is deep. It is structural.”12


Benkler deliberately focused his study on relationships between markets and freedom, and since markets and political dynamics are directly impacted by security affairs, it should not be surprising that electronic interconnectedness exerts a deep and even structural impact on security matters. In broad terms, the prodigious expansion in connectivity first increased the volume of information that was accessible, then grew the number of people able to access it, and is now expanding what can be affected through the medium. The first two trends produced change so celebrated as to seem cliché, as Benkler noted, and the sensation of an information renaissance ushered in popular and institutional embrace of ever-greater connectedness. When the World Wide Web was unveiled in 1990, three million people were online around the world; a decade later, the number was 360 million, and within two years it had increased by another two-thirds, to 587 million people in 2002. This meant that, in the dozen intervening years, the portion of the global population on the internet had mushroomed nearly two-hundredfold, from barely 0.05% to nearly 10%. The next fifteen years saw the global proportion grow fivefold again, to approximately 50%. The proliferation of devices (especially mobile devices) also meant that as early as 2010, the 12.5 billion connected devices not only outnumbered human beings by nearly 2-to-1 but outnumbered internet users by more than 6-to-1.13

The apparent absence of danger and the presence of opportunity characterize many of the attitudes that facilitated this expansion. While the term “cybersecurity” was first used during the 1990s, and the late 1990s arguably represented a watershed in U.S. decision makers’ views about the potential risks of maleficent online activity, other viewpoints deliberately sought to avoid overestimating potential dangers. Essays pondering the character of the electronically connected realm sometimes identified the hacker threat as an overdiagnosed specter, in view of a hacker ethos celebrating decentralization, informality, and free availability of information. One statistic from 1997, likely to seem quaint in the eyes of readers in the 2020s, was that “one computer on the Internet is broken into every 20 seconds.”14 The interest of academics in sharing information, and initial assumptions that trustworthy partners in finite numbers would populate the environment, famously led to a cooperative milieu in which information security was rarely considered.15 Arguably, this mindset permeates U.S. policy concepts historically supportive of multi-stakeholder dynamics, individual freedom, and the view of the digital realm as a naturally open cyber commons. Widely, and perhaps especially where individual freedoms are taken for granted, consumers have shown a definitive preference for functionality that eclipsed simple awareness of security issues. Some figures have urged the private sector to assume greater responsibility for product security than occurred in the first decades of the internet, but as one cybersecurity expert has noted, “when people say, ‘oh, cyber security is the biggest issue in the world,’ everyone means that, but what they really mean is that they hope someone else deals with it. . . . It is similar to global warming, pollution,” and other tragedies of the commons.
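The growth arithmetic cited above can be checked in a few lines. The Python sketch below reuses the user and device counts given in the text; the world population figures and the 2010 internet-user count are rough outside assumptions added for illustration, not numbers drawn from this book.

# Back-of-the-envelope check on the connectivity figures cited above.
users_1990, users_2002 = 3e6, 587e6   # online users cited in the text
pop_1990, pop_2002 = 5.3e9, 6.3e9     # assumed world population estimates

share_1990 = users_1990 / pop_1990 * 100   # ~0.06% of humanity online
share_2002 = users_2002 / pop_2002 * 100   # ~9.3% online
print(f"1990: {share_1990:.2f}% online; 2002: {share_2002:.1f}% online")
print(f"growth multiple: {share_2002 / share_1990:.0f}x")

devices, people, net_users = 12.5e9, 6.9e9, 2.0e9  # 2010; people and users assumed
print(f"devices per person: {devices / people:.1f}")            # nearly 2-to-1
print(f"devices per internet user: {devices / net_users:.1f}")  # more than 6-to-1

The exact multiple depends on the population estimates assumed, which is why the “nearly two-hundredfold” figure is best read as an order-of-magnitude statement.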


Apple being the exception, interoperability among most other systems has been an engine in the expansion of connectivity and its utility.16 Ubiquity sponsors normalization, and this is evident in peace and in war. Research indicated that social media began to be thought of (e.g., by deployed personnel) as “a mediated form of talking” rather than as the equivalent of letter writing, despite the fact that writing home was the principal purpose for which U.S. personnel in Iraq and Afghanistan used it during the military operations of the 2000s.17 In December 1999, Air Force officials updating Congress on the F-22 Raptor expected the fifth-generation, ultramodern aircraft’s sophisticated avionics to comprise between 2.2 and 4.5 million lines of software code. Before 2020, the “average consumer vehicle” was reported to include over two dozen computers and more than 100 million lines of code, and vehicles as deceptively straightforward as the 2016 Ford F-150 pickup truck were said to reach 150 million lines of code.18 Sophistication can come hand in hand with complication, and complication is sometimes welcomed because its implications are not all understood.

RELIANCE ON THE RENAISSANCE

Embracing connected technology also involves becoming enmeshed within it. Brantly’s assertion that “we don’t have to plug ourselves into the matrix; we already live in it”19 holds merit. Decades of prioritizing functionality over security have propelled the trend so far that capabilities are frequently used to justify themselves. Because, for example, the design of a limited and dedicated circuit board for an appliance is more expensive in the 21st century than the use of a more versatile and generalized alternative, extraneous capability is inserted into a range of products and then presented to consumers as an improvement. The result is a dynamic in which cybersecurity figure Tim Singletary sees “social contests such as ‘my refrigerator talks to Facebook and yours doesn’t. . . .’ In the rush to have the latest and greatest, we have forgotten the most basic principles of privacy.”20 Concerns that cyberspace dependence “has passed the point of no return” are intermittently complemented by troubling demonstrations of the hackability of voting machines or other systems on which convenience, democratic practices, or even life itself depends. Observations in defense journals identify an ongoing trend toward increased sophistication in equipment, which other analysts in the same publications point out offers increased capability at the expense of consequent liabilities.21 Communication is upheld as a driver of adaptation, and cyberspace is recognized as a thoroughfare of communication, leading some to demand provision for cybersecurity “in a manner corresponding to its current importance.”22


“Attacks work because they violate assumptions,”23 and the best paths toward avoiding dangerous assumptions depend on understanding and analyzing how things work. A common chord among many cybersecurity thinkers is the consequent need for cyber education. As noted in Chapter 4, cyberwarriors should not necessarily all come from the STEM fields of science, technology, engineering, and mathematics, and a STEM background does not by itself denote a natural cyber defender or attacker. Technical understanding is nonetheless crucial in any field. Indicators that academia prefers grooming the next generation of Facebook creators over training security-conscious programmers raise concern for authorities such as Yannakogeorgos, who also notes that “most cyberspace users . . . have only fleeting knowledge of the underlying technologies” they rely on.24 Suggested solutions include partnerships across sectors so that cybersecurity experts can not only update their skills but also gain fresh vantage points.25 Such concepts depend on the foundations of trust and interoperability that underpin the establishment of the cybered world, but they tend toward honing the skills of existing specialists, whereas increasing the pool remains an elusive goal. In a distressing parallel to Winston Churchill’s famed comment after winning the Battle of Britain, Ronald Deibert noted (before many of the tribulations recently witnessed with social media manipulation and ransomware), “never before have we been surrounded by so much technology upon which we depend, and never before have we also known so little about how that technology actually works.”26

POLLUTION OF THE INFOSPHERE

Triangulating the dangers of an infosphere-turned-battlespace poses challenges. Singer and Brooking argue that “the internet has become a battlefield,” and evidence supports that contention. The exact meaning remains a subject debated by various analysts and experts of cybersecurity and military affairs, however. Many notable figures dismiss the more extreme visions of cybergeddon as hype. While Rid has worked to reduce fears about the potential of cyberwar, Healey has contrasted the underappreciation of advanced cyber intrusions with apparently overblown predictions of cyberwar. Other cybersecurity experts have noted that although a growing attack surface and lagging security practices have led to increasingly massive data breaches, breach statistics alone do not prove whether attackers are becoming more talented.27 Cyberterrorism has not to date involved technical intrusions leading to kinetic effects, but terrorist groups have wielded social media with a skill and malice that surprised the international community.28 While cyberwar is not synonymous with internet-enabled information operations,29 the latter sometimes (but not always) leverages exploitations accomplished through actions that are prerequisites of the most specific definitions of “cyberwar.”

Forensics linking Russian actors to attempts at election-season opinion manipulation demonstrates that although social media provides the vehicle for many online information operations, the ammunition for them is derived through the technical hacking skills that make it possible to gain illicit access to targets, conduct reconnaissance, and exfiltrate copies of materials. Video and audio fabrication techniques producing “proof” of various specious shenanigans have been described as a possible tool for internet-vector information operations of the future. Over a decade ago, Libicki voiced concern that “another role for operational cyberwar would be the creation rather than the destruction of information,” meaning the injection of either false or useless data to obstruct a target’s systems.30

Healey has suggested that “if we’re not able to get [cyber] defense better than what the attackers can throw,” then cyberspace might resemble a digital “Mogadishu” of endemic violence that makes dialog and commerce impossible. “Our grandkids might look back at these days,” when people shopped online and posted information on social media, “as the days of the Woodstock of the Internet, the free love generation of cyberspace.” Libicki has noted that third-party intervention in cyber conflict could transform online struggles “into a free-for-all,” while other scholars and executives have described scenarios in which comparatively hard targets are less bedeviled with attacks than the “softer targets and so political processes, institutions, and stakeholders” that would be left in the lurch.31

Fuzzy lines between commercially available systems and those used by militaries provoke clear concern among cybersecurity researchers. The advantages of the interoperability that has pervaded the internet’s growth, combined with the recognized fiscal benefits of using commercial off-the-shelf (COTS) technology, insinuate systems into each other. Scholars have predicted that “many cyberoperations,” despite being nonlethal, “place civilians in the crosshairs.”32 Others have added that historic data, stored for monetization, might be poached by others so that it can be leveraged for strategic uses, and consequently in the long run “organizations must learn to live in a world where less and less information can be kept secret, and where secret information will remain secret for less and less time.”33

Users can be advised to develop a “Spidey-sense” for identifying attacks, but this necessarily entails an intuition-grounded analog to the anti-virus programs that protect against previously recognized incursions. War is inherently dynamic, and experts recognize that cybersecurity similarly “is not a static solution.” Fred Kaplan has conceded that “today’s best practices might be tomorrow’s futile gesture,” and this fact arguably obstructs interest in adopting secure practices, since they entail cost and complication and may prove abruptly obsolete.34


The mood reeks of the forlorn understanding that the barbarians are not only at the gate but have gotten through it, and that they have been through the breach for years. This realization seems to explain the interest in conceptualizing the role of resilience in cyber defense. Resilience—the ability to continue operating despite intrusions—has been suggested as a way to raise the bar for attackers,35 and it is sometimes even implied to serve a deterrent function. As this book’s author has previously illustrated through historical parallels, defenses are built with an eye toward delaying rather than perpetually defeating an attacker; the time that is purchased must be translated into positive actions with which to reverse the situation confronting the defender.36 This positive action has been hypothesized as operating successfully despite a “degraded or denied environment.” Different researchers and service leaders have suggested that this might entail applying the electronic warfare concept of the war reserve mode, providing latent capabilities to be unleashed in the event of conflict, or military units determining which components are most vital to prioritize for protection so that they can persist and allow units “fighting through the attack” to continue operating.37

SOFTWARE FLAWS AND DIGITAL RUBBLE

Vulnerabilities abound. Conti and Raymond point out that, far from being “just resident software,” vulnerabilities also appear throughout “firmware, network protocols, security procedures, human users, physical security,” and elsewhere. In and beyond software coding, complexity breeds mistakes, as does a rush to market or to field a technology, as does a jaundiced attitude regarding inconvenient processes meant to limit errors.38 These factors apply across the spectrum of cybersecurity issues and are applicable points regarding security even more broadly.

On the technical side of the cybersecurity equation lie software weaknesses that pose the prerequisite gaps in the armor through which an exploit can be used. The fact that software imperfections can be leveraged through technical attack explains the philosophy behind notions of blaming the victims for technical cyberattacks, as described in Chapter 3. But “everything electronic typically has a vulnerability in it. There is no such thing as a perfectly secure system.” High-end commercial software is estimated to contain an average of one error in every 20,000 lines of code.39 Internet history has shown massive growth in the size of the software that is widely used. The Windows 95 operating system fit within ten million lines of code; Windows XP appeared six years later and was forty million lines long, and its successors from Vista onward have extended to about fifty million lines. Average error rates would suggest some two and a half thousand software errors in an operating system installed on billions of devices worldwide. This contributes to why global standardization on Microsoft products has drawn fire from some in the security community.40
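The defect arithmetic above is simple enough to make explicit. A minimal Python sketch applies the one-error-per-20,000-lines average cited in the text to the operating system sizes it mentions; the rate is an average, so this is an illustration rather than an empirical model.

# Expected residual errors at the cited average rate of
# one error per 20,000 lines of high-end commercial code.
ERROR_RATE = 1 / 20_000  # errors per line, from the text

def expected_errors(lines_of_code: int) -> float:
    """Rough expected count of latent errors for a given code size."""
    return lines_of_code * ERROR_RATE

for name, loc in [("Windows 95", 10_000_000),
                  ("Windows XP", 40_000_000),
                  ("Vista and successors", 50_000_000)]:
    print(f"{name}: ~{expected_errors(loc):,.0f} expected errors")
# Vista-era systems work out to ~2,500 expected errors, the "two and a
# half thousand" figure above, replicated across billions of machines.

Not every such error yields an exploitable vulnerability, but the raw count suggests why an attacker with sufficient resources can expect to find one somewhere.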


As a result, “if you put enough resources against it, you will find a vulnerability somewhere.” Furthermore, software problems have been compared to roaches: “when you find one you can bet that there are a few dozen that you didn’t see.” While marketplace preferences for “fast and cheap over good” likely accelerate the rates at which mistakes are made, the task of producing millions of lines of flawless code boggles the imagination.41 It is useful, if only partly reassuring, to recognize that not every coding error results in the kind of vulnerability that can be used to form an exploit, and that malware may need more than a single exploit to be successfully utilized.

Other factors further sharpen the security challenges posed by ever-lengthening software. Some involve trends such as “BYOD” (bring your own device) and the push to connect systems that had not been designed for connectivity with the broader world. Both trends aim to reduce institutional costs of operation, respectively by shifting device acquisition and maintenance onto users and through more efficient monitoring of industrial or infrastructure systems.42 However, these moves also mean that more will be expected (or assumed) of users who are inexpert in security and that systems upon which people rely for critical services will be more accessible than they had historically been to third parties.

Growth in the attack surface poses another security consideration. Schneier reminds audiences that “as the system gets more complex, the attack surface gets larger” and more complicated to defend, and “we are designing systems that are getting more complex faster than our ability to secure them” is improving. This dovetails with the tendency to build extraneous functionality into a range of products that a generation ago could not possibly have been thought of as relating to computers or the internet.43 Like the appliance’s chip mentioned earlier, in which a generic and general-use type is more cost-efficient to install than a purpose-designed alternative that offers less functionality, products that had not been computers are becoming computers. Computers in the 2000s had been screens with keyboards, sometimes in the portable form of a laptop. Through the 2010s, computers expanded to include, and even be dominated by, still more mobile tablets and smartphones. Further expansion is already underway. Schneier puts it even more bluntly: “traditional computers are screens that we stare at: laptops and phones.” But future activities will involve

no longer things with computers in them, it’s computers with things attached to them. Your refrigerator is a computer that keeps things cold. . . . An ATM is just a computer with money inside. And when you think about it that way, you start realizing that computer security becomes everything security. . . . Which means it can be hacked, you can have ransomware on it, you can have malware. All of those computer things can now affect your car. I’m not sure we’re ready for that.44

The Internet of Things (IoT) entails not only a landscape promising customer-catering conveniences. It also threatens a world fraught with invisible security concerns.

THE IMPACT OF THE INTERNET OF THINGS

Science and technology journalist and author James Burke, in his landmark work Connections, introduced the topic of technological dependency by describing systems that are misunderstood and eventually malfunction as being “technology traps.” The eagerness with which online connectivity has been thrust into an ever-expanding range of products makes it unclear how long “noncomputerized alternatives” could even remain available to interested buyers.45 The shift from internet protocol version 4 to version 6 (called IPv6) reflects planning that responds to the deluge of internet-connected items already flowing onto the world’s markets and networks. Google executives suggest that “the internet is becoming more and more pervasive . . . a little like water or electricity: you cannot live without it, or you will be worse off without it.”46

The comparison of internet access and services to basic utilities is interesting. In modern history, the world’s more prosperous countries have defined basic needs in increasingly expansive ways, and the identification of needs has soon been associated with ostensible guarantees of access, and frequently with changes in scrutiny and regulation. A consensus about the “utility-zation” of the internet presupposes answers to questions about how access and regulation might change as a result of the definition.

The comparison to utilities matters in another way that touches more directly on conflict. The first manipulation of SCADA occurred in the spring of 2000, with the Maroochy Shire hack that resulted in raw sewage being released to endanger the local public and damage the local environment. The inexorable internet connection of infrastructure and other systems raises the prospect of similar events becoming increasingly commonplace. Libicki, notable for his work seeking to deflate the cyberwar concerns that he considers to be hyped, worries that hijacking IoT devices expands opportunities to wreak kinetic effects through cyberattacks. Libicki and others have noticed that most IoT technologies “will be owned by people unable or unwilling to pay requisite attention to security” and who may not recognize how weakly secured IoT devices are by default, or how their tasks (such as in refrigerators or other such appliances) involve their being operational and connected 24/7.47
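The scale behind the IPv4-to-IPv6 shift noted above is a matter of address arithmetic. A minimal Python sketch, reusing the 12.5 billion device figure cited earlier in the chapter:

# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32    # ~4.3 billion possible addresses
ipv6_space = 2 ** 128   # ~3.4e38 possible addresses

devices_2010 = 12.5e9   # device count cited earlier in the chapter
print(f"IPv4 address space: {ipv4_space:,}")
print(f"devices already exceeded IPv4 space by 2010: {devices_2010 > ipv4_space}")
print(f"IPv6 address space: {ipv6_space:.2e} ({ipv6_space / ipv4_space:.1e}x larger)")

Connected devices outnumbered the possible IPv4 addresses years before the transition was complete, which is precisely the deluge to which IPv6 planning responds.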


On one hand, devices that are set up and left to operate unattended make virtually ideal recruits for DDoS botnets; lacking protection, they can easily be hacked, and once under a bot herder’s control, they will operate for extended periods. After all, no one unplugs their refrigerator at night. On the other hand, manipulating IoT that impacts health and safety (Libicki offers cars and drones as examples) can cause “serious harm to their owners and worse, third parties, if their controls are usurped.”48 These are dangers in a world where, as Conti and Raymond observe, “an enemy combatant can be Grandma’s toaster”; Schneier concludes that giving computers “hands and feet” means extending “the ability to affect the world in a direct physical manner.” IoT devices also provide an obvious and already leveraged opportunity for surveillance of a target.49

Experts have recognized that weapons must be sufficiently controllable to be actually useful. Lacking control over a weapon means firing a loose cannon.50 This is a valuable fact to consider with respect to the ways that IoT devices could conceivably be transformed into cyberweapons for harassment, surveillance, ransoming, or kinetic effects. Another important caveat, however, is that not all states or nonstate actors set the same threshold with respect to what a “sufficiently controllable” weapon looks like. War in the physical domains is rife with examples of weapons that some combatants spurned while others deemed them adequately reliable, or accurate, or precise. This pattern, having been demonstrated across the land, sea, and air domains and in the exploitation of space, can be expected in the cyber domain as well.

Even sober advocates of IoT, and the related concept of the Cloud, concede that IoT “offers great promise” while also meaning that “where everything is connected, then anything can be disrupted.” If thinking about the need for resilience had been justified by realizations that hackers have long permeated networks and slipped around or over the walls that had characterized perimeter-style security, the implications of IoT and the Cloud underscore the need for robust security efforts. One software executive admitted, “if you’re using that Cloud, you’ve just extended your attack surface.” Another, seeking to refute suggestions that “the Cloud’s not secure,” pointed to the need to “assume breach,” abandon the perimeter-style approach to cybersecurity, simplify guidance to customers, and “embrace . . . machine learning” for defense.51 Other analysis similarly carries an undercurrent that the answer to technologically precipitated challenges involves more technology. Analysts at the Center for a New American Security have argued that “cloud computing may also allow smaller businesses with limited resources to better protect their information against sophisticated attacks.”52 At face value, this argument seems to accept the purported advantages of distributing computing and storage without weighing other considerations about how such placement might impact an adversary’s ability to access the same data. While using more technology to address a technology-enabled problem may seem counterintuitive, it is not necessarily as convoluted as it might seem, provided that the additional technology is used for specific reasons and in meaningful ways. Conti and Raymond predict that the “not too distant future” will involve “counter robot tactics” in cyberwar as autonomy progresses. However, “becoming a Luddite recluse isn’t a solution” because citizenship in the 21st century means “learn[ing] to cope with these new realities.”53 Coping with those realities requires comprehending the dynamics at play in the present and anticipating what lies ahead.

DIGITAL FEUDALISM?

While some industry executives have suggested that the path to better cybersecurity is “a long journey” and that “taking a user perspective is the first step,”54 some researchers raise alarms suggesting that the user perspective is being altered in ways that redefine dynamics between human users and institutions. Both of these possibilities can be expected to impact conflict in the digital realm. Arguably, trends that are identified from one vantage point as responding to the user’s perspective, when examined from another standpoint, exert a shaping influence upon the options available to users.

The interconnections that characterize the modern internet rest on interoperability. If computer history is divided into epochs, and a pre-internet era of computing is distinguished from the development and then the expansion of the internet, the techniques and technologies allowing communication across different systems by definition play a central role. Protocols written to permit connection between different kinds of systems revolutionized the utility of individual machines by constructing a network through which they could be used to link to other, different, computers. A cornerstone of the popularity of personal computers and Android devices has been the commonality and functionality established among them. Interconnection and interoperability have allowed impressive efficiencies, although some security-conscious researchers have noted that these same dynamics also provide opportunities for more sinister actors to reach and manipulate more devices and users.

Apple, famously or infamously, spurns the type of interoperability that characterizes so much of the world of computers. This more narrowly defined, walled-in alternative has won a considerable consumer following, and for several reasons. Apple’s lucrative role as a gatekeeper has permitted a vast array of apps to become available within its walled gardens, with the company enjoying a large portion of any app’s revenues and with users feeling secure that the functionality they invite onto their devices has already been vetted. This responds to user demand but also helps shape the contours of what users perceive and desire.


The functionalities that users expect from devices and applications nonetheless require designers to create specific platforms that can operate in diverse ways. For example, a smartphone is a telephone for voice calls, but it also must allow “videoconferencing, downloading video clips, listening to music, using a GPS to navigate to a location, and connecting to any number of networks including G[lobal] S[ystem] M[obile] cellular or Wi-Fi.” Adding functionality to a device inevitably involves the need to embed various standards that make different functionalities possible. Since some of these entail royalty payments for intellectual property represented in the standards, this can impact device and service costs.55 Increased functionality brings with it the insinuation of larger numbers of potentially different standards. This contributes to the growing complexity of development and dovetails with the emergence of the long and complex coding that can inadvertently shelter the software flaws described earlier.

Businesses have displayed a noticeable aversion to shouldering the responsibilities and burdens associated with cybersecurity. Researchers assert that “Silicon Valley must accept more of the political and social responsibility that the success of its technology has thrust upon it.”56 A potentially compelling case to this effect can be made; an important consideration, however, involves questions about what that responsibility obligates or empowers companies to do. These responsibilities fall into areas that resemble governance. Companies are predictably loath to enter the simultaneously thankless, expensive, and controversial realm of governance. This contributes to the Wild West atmosphere that has confounded policy makers hoping to protect intellectual property, guard national security, and preserve freedom of speech without facilitating the propagation of criminal activity or violence. However, even if Silicon Valley companies were to heed the advice given by Singer and Brooking and “accept more . . . political and social responsibility,” it is not clear how widely their embrace of governance roles would or should be greeted with exuberant enthusiasm. Those steps would empower private sector entities to determine what speech or commerce is appropriate or acceptable. Most situations will fall into greyer regions than the obvious black-or-white edges of the spectrum, and Silicon Valley (and its global counterparts) would hold positions as nondemocratic governors. Entities that have been widely decried for doing too little in the face of various information manipulations would be entrusted with decisions that impact an unempowered public.

Users of services and platforms are not citizens, and they are not voters. While customers may seek to vote with their dollars and take business elsewhere, this can prove challenging. When users purchase the products they need as services, they are less able to vote through their economic decisions. Framing software, platforms, and infrastructure as services tethers users to the companies that provide them. Rosenzweig notes that “the cloud is really a name for a variety of services.” Among them all, “the consumer does not manage or control the underlying cloud infrastructure” ranging from servers to storage, but instead “has access to his data or applications on an as-needed basis from the cloud service provider.”57 This dynamic has been easily visible in the popularity of various walled-in electronic entertainment platforms (offering products such as music or sophisticated games). Users can discard a service only by writing off their investment of time or money in the “as-service” product as a sunk cost.

Considering the “as-service” model in combination with the apparent trends toward an IoT invites conclusions that parallel some of Schneier’s predictions: “Companies are analogous to feudal lords, and we are their vassals, peasants, and—on a bad day—serfs.”58 Other internet models have inspired allusions to serfdom as well. The early and then eclipsed internet giant America Online (AOL) aimed to moderate internet materials by offering to pay users a discount in exchange for dozens of hours of weekly effort in content moderation. “AOL’s serfs” eventually launched a series of lawsuits “arguing that they have been led into a ‘cyber-sweatshop.’”59 Although the moderating-for-discounts model fell out of vogue and was replaced by subcontracting paid moderators, it had arguably been a more voluntary arrangement than the type produced through as-service models. Noting that social media giants resemble “feudal lord[s]” who dominate communication alternatives, Schneier suggests that “it’s becoming increasingly difficult to not pledge allegiance to at least one of them.”60

Schneier’s allusion to feudalism may ring even truer than he has realized. One of the notable features of feudalism stemmed specifically from the military and political implications it fostered. Kinship influenced decisions about with whom a vassal should align and affiliate. The vast majority of the population were serfs, lacking rights and serving as base vassals to their local lord. But feudalism resembled a pyramid in its structure, and among lords, an individual was almost guaranteed to be a vassal to a higher lord while also being the lord to local vassals lower in the hierarchy. Political jockeying meant that the bonds of ostensible fidelity could abruptly be disturbed or even reversed.

The feudal imagery can be extended beyond simply the idea of consumers standing in serfdom to corporate masters. Consumers, as clients of various as-service products, can be understood as existing in simultaneous and overlapping vassalage to various institutions. These institutions, focused on global connection and the revenue it enables, are necessarily transnational in character. If these institutions take on the greater responsibilities sometimes demanded of them, their role will expand increasingly into one resembling governance. Deibert described the internet as “less a pure public commons and more a mixed-pooled resource.”61


Where products are framed as services, clients can resemble vassals, and the prospect of overlapping fealties—particularly where institutions embody governing and commercial traits—can become acute. With the social credit system now being introduced in China, political reliability is determined through extensive monitoring; the conclusions derived from that data are thought to extend even into the service available to customers at stores and the matches enabled on the leading national matchmaking site.62 The services of today could be transformed into a cat’s cradle of overlapping yet conflicting obligations to institutions, including transnational companies that have partnered with various states. The full implications in the context of a cybered conflict are not yet clear, but reason indicates that the topic is worth further and more detailed analysis in the future.

TEA LEAVES IN DARKNESS

Historians are not in the business of predicting the future but of examining and explaining the past. A critical factor to remember when thinking about future events is contingency. Seemingly inevitable trends are in fact shaped by decisions. Patterns and currents can be identified and can help craft the ways in which people interpret the choices available to them. Their interpretations matter because these impact the decisions they make, and those decisions shape events. Speaking about the legion security vulnerabilities present in an IoT environment, Schneier has argued that “connecting everything is our future. I don’t think we can back our way out of it, [we] have to forward our way out of it.”63 However compelling this assertion may seem, it implicitly (and probably inadvertently) reduces the apparent alternatives to either a forlorn retreat or a careful advance. The method for “forwarding” out of IoT security dangers is a concept liable to be interpreted in several alternative ways.

History offers value, even in the relatively new cyber domain. Among other things, history can offer grounds for sober reassurance. The idea that “we’re reaching a point where tech is moving faster than policy or conception or mental models, and we’ve never lived in sight of that before”64 sounds less unprecedented when historical examples are introduced for consideration, e.g., the technological advances in the last decades of the 19th century and the first decades of the 20th. The rapid introduction of telephones and radio for communication, of iron and steel for ship construction, of oil to replace coal as a fuel, of technologies allowing heavier-than-air flight, and of chemistry advances enabling rapid-fire weapons represents only some of the most noticeable technological change occurring in the thirty-eight-year span between 1876 and 1914. A similar span would reach from the publication of this book back to the early expansion of ARPANET. The expansion and maturation of the internet is staggering, but it is not as unprecedented as some might imagine.


Because the conflict made use of all these advances in technology, World War I is often considered a culmination of those technological trends, frequently sharing the stage with factors like globalization or imperialism. Singer and Friedman, noting a parallel with the dynamics on the eve of World War I, have observed that “new technologies didn’t actually give the offense the advantage” a century ago, and they assert that today “the best defense is a good defense,” in contrast to notions that a stronger offense could be decisive and could deter an offensive by another power. Others who have cast a wary eye on the guns of August have also hastened to clarify that these parallels do not spell an inevitable fate.65 Contingency matters, and people can and do choose to avert crises that they are able to perceive and appreciate. Furthermore, that contingency was as true in the past as it is today. Past events were not cemented by fate but shaped by decisions people took, including those taken to skirt the dangers that those decision makers were able to identify and conceptualize. This helps explain why scholars such as David Sulek and Ned Moran have asserted that “no single analogy will suffice in considering the complex challenges of cyberspace.”66 No set of circumstances ever exactly duplicates another, and awareness and prioritization of earlier examples help inform mindsets and therefore the context within which people understand their own circumstances.

To this can be added the examples from cyber domain struggles that have themselves become a part of recent history. Healey, a long-standing advocate of institutional learning from and about past cyber conflict, notes that “when you have people that feel that they are the first generation” encountering a problem, “and it turns out they’re the fifth” to do so, “they’re going to make terrible mistakes that [they] don’t have to.”67 Even conflicts such as the NATO operations against Serbia in 1999, occurring in the youth of cyberspace, when the internet was only beginning to appear as part of the information operations landscape, can teach lessons about the danger of presuming that information superiority translates to complete knowledge.68

Some general conclusions can be extended to the conflict in cyberspace. Analysis dating to the time of the Kosovo operations noted that “technology will complicate not lessen the task facing those who must meld it into operational art.”69 Modern military history graphically displays the validity of this point. The technologies of 1914, applied to war, presented challenges that were workably addressed only with the advent of combined arms warfare, refined through bloody battlefield processes across much of the 20th century. Even when technology has appeared to simplify operational art, as for example with developments in fortifications from the Renaissance through the Enlightenment, it has in fact added complexity to aspects like logistics that underlie military activities, including at the operational level.


“Technology itself is not necessarily the primary variable leading to change” because organizational, doctrinal, and other factors in dialog with technology “provide the medium by which technological change translates into wider social change.”70 Predictions that “the LikeWars of tomorrow will be fought by highly intelligent, inscrutable algorithms” spewing convincing propaganda and that autonomous systems will “become primary cyber fighters on the future battlefield”71 rely on contingency for their realization. Rather than falling beguiled by apparently inexorable tendencies, it is vital to keep in mind that decisions—made in the context of contemporary circumstances and the ways that they are understood—will exert a powerful impact on whether and how the predictions about tomorrow’s wars become realized.

This book has examined several of the key clichés and mythologies about cyberwarfare, including notions about cyberweapons carrying instantaneous speed, having a reach that reduces distances to irrelevance, being inherently cheap and widely available, being limited to a single use, exerting only reversible effects, or representing an inherent offensive superiority. Like generalizations, mythologies often stem from elements of fact. The often contradictory aspects of various mythologies, certainly including those about cyber conflict, illustrate how complex issues can suffer from over-reduction. This book has also sought to consider how social media has been refashioned for weaponized uses, as well as why certain forms have been effectively commandeered for other functions. The human need for connection, met by tools that promise to help fill that need, creates an environment oozing with potentially valuable data that can be compiled, aggregated, and leveraged for monetization. The leveraging of data stands at the core of how such networking services could be provided without overt fees. The flow of data defines the internet, and the model of many internet giants embodies the leveraging of data. It should come as little surprise that potentially lucrative resources hold potential strategic uses as well.

Martin Libicki’s advice that “it pays to be serious but not desperate”72 holds true regarding struggles in the cyber domain. Although the domain is largely defined by its technologies, challenges and dynamics in cyber-domain security are not confined to technical aspects. This is why “security will not be ‘solved’ in the foreseeable future”73—or indeed likely beyond that. The assertion that “how we prepare for our enemies might just help to invent them” carries a degree of truth, in the sense that actions shape the subsequent context and impact others’ perceptions and how they prioritize their alternative options. Actions within conflict similarly affect ideas defining the appropriate and the acceptable.74

War remains a competition among intelligent antagonists bent on imposing their will and their policy priorities on the shape of the future world.


The conceptualization of terms like violence may alter, but violence will remain central to conflict because the imposition of policy goals in the face of a determined and motivated adversary will likely fail absent latent or enacted violence. Technologies continually change the character of wars, but technologies do not change the nature of warfare. The introduction of new fighting domains will be especially apt to change the character of conflicts, and certainly the introduction of new domains presents the need for conceptualization. Into this gulf, many partly or conditionally accurate concepts have gained conventional acceptance—and this needs to be counterbalanced by analysis and reflection. Even the establishment of additional domains of warfare will not alter its enduring nature.

Notes

CHAPTER 1

1. Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton, NJ: Princeton University Press, 1984), 77–88, 606.
2. David Patrikarakos, War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century (New York: Hachette, 2017), 5, 261.
3. Jeremy Rabkin and John Yoo, Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War (New York: Encounter, 2017), 35, 233.
4. P. W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (Boston, MA: Eamon Dolan, 2018), 10.
5. David E. Sanger, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age (New York: Crown, 2018), 308.
6. Michael V. Hayden, Playing to the Edge: American Intelligence in the Age of Terror (New York: Penguin, 2016), 132.
7. Milton L. Mueller, Networks and States: The Global Politics of Internet Governance (Cambridge, MA: MIT Press, 2010), 24; Dean Cheng, Cyber Dragon: Inside China’s Information Warfare and Cyber Operations (Santa Barbara, CA: Praeger Security International, 2017), 15, 85.
8. Martin Van Creveld, Technology and War: From 2000 BC to the Present (New York: Free Press, 1989), 290, 312.
9. Martin C. Libicki, Cyberspace in Peace and War (Annapolis, MD: Naval Institute Press, 2016), 88.
10. Aaron F. Brantly, “The Violence of Hacking: State Violence and Cyberspace,” Cyber Defense Review 2, no. 1 (Spring 2017): 77; David Whetham, “Cyber Chevauchées,” in Binary Bullets: The Ethics of Cyberwarfare, ed. Fritz Allhoff, Adam Henschke, and Bradley Jay Strawser (Oxford, UK: Oxford University, 2016), 84; Cheng, Cyber Dragon, 40–41.

11. Gregory Conti and David Raymond, On Cyber: Towards an Operational Art for Cyber Conflict (New York: Kopidion, 2017), 274.
12. For example, Lucas Kello has argued that “the Clausewitzian philosophical framework misses the essence of the cyber danger and conceals its true significance: the virtual weapon is expanding the range of possible harm and outcomes between the concepts of war and peace.” Lucas Kello, “The Meaning of Cyber Revolution: Perils to Theory and Statecraft,” International Security 38, no. 2 (2014): 22.
13. Nicholas Michael Sambaluk, “Introduction,” in Paths of Innovation: From the Twelfth Century to the Present, ed. Nicholas Michael Sambaluk (Lanham, MD: Lexington Books, 2018), vii.
14. Robert Ghanea-Hercock, “Why Cyber Security Is Hard,” Georgetown Journal of International Affairs, International Engagement on Cyber 2012 (2012): 81.
15. William Jackson, “How Can We Be at Cyberwar If We Don’t Know What It Is?” GCN, March 22, 2010, https://gcn.com/articles/2010/03/22/cybereye-cyberwar-debate.aspx; Joe Burton, “NATO’s Cyber Defence: Strategic Challenges and Institutional Adaptation,” Defence Studies 15, no. 4 (2015): 306; Nazli Choucri et al., “Institutions for Cyber Security: International Responses and Global Imperatives,” Information Technology for Development 20, no. 2 (2014): 110.
16. Bruce Schneier, “Keynote by Mr. Bruce Schneier—CyCon 2018,” NATOCCDCOE, June 20, 2018, https://www.youtube.com/watch?v=oQ1TJsEppOg.
17. Aaron F. Brantly, The Decision to Attack: Military and Intelligence Cyber Decision-Making (Athens: University of Georgia, 2016), 166.
18. Dennis F. Poindexter, The Chinese Information War: Espionage, Cyberwar, Communications Control and Related Threats to United States Interests (Jefferson, NC: McFarland, 2018), 28; National Institute of Standards and Technology, “Glossary of Key Information Security Terms,” NISTIR 7298 Revision 2 (May 2013), 11.
19. Gordon Corera, Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage (New York: Pegasus Books, 2015), 292–93.
20. Sanger, The Perfect Weapon, 158, 285–89; Brantly, The Decision to Attack, 165.
21. Rabkin and Yoo, Striking Power, 8, 63, 184–85.
22. Ryan Jenkins, “Cyberwarfare as Ideal War,” Binary Bullets, 89.
23. William D. Bryant, “Resiliency in Future Cyber Combat,” Strategic Studies Quarterly 9, no. 4 (Winter 2015): 101.
24. Rock Stevens and Jeffrey Trent, “Offensive Digital Countermeasures: Exploring the Implications for Governments,” Cyber Defense Review 3, no. 3 (Fall 2018): 96.
25. Stevens and Trent, “Offensive Digital Countermeasures,” 106–9; David A. Wallace and Mark Visger, “The Use of Weaponized ‘Honeypots’ under the Customary International Law of State Responsibility,” Cyber Defense Review 3, no. 2 (Summer 2018): 37–39.
26. Libicki, Cyberspace in Peace and War, 264.
27. Seumas Miller, “Cyberattacks and ‘Dirty Hands’: Cyberwar, Cybercrime, or Covert Political Action?” Binary Bullets, 242; David Turns, “Cyber War and the Concept of ‘Attack’ in International Humanitarian Law,” in International Humanitarian Law and the Changing Technology of War, ed. Dan Saxon (Leiden: Martinus Nijhoff, 2013), 227.
28. Thomas Rid, Cyber War Will Not Take Place (Oxford, UK: Oxford University Press, 2013), 18.

29. Joseph S. Nye, “Nuclear Lessons for Cyber Security?” Strategic Studies Quarterly 5, no. 4 (Winter 2011): 22, 36.
30. Scott Jasper, Strategic Cyber Deterrence: The Active Cyber Defense Option (Lanham, MD: Rowman & Littlefield, 2017), 91–93, 104. See also Michael N. Schmitt, ed., Tallinn Manual on the International Law Applicable to Cyber Warfare (Cambridge, UK: Cambridge University Press, 2013).
31. Stevens and Trent, “Offensive Digital Countermeasures,” 103; Kello, “The Meaning of Cyber Revolution,” 20.
32. Michael V. Hayden, qtd. in “Citizen Soldier: The Next Battlefield: On Cyber War,” Pritzker Military Museum & Library, March 10, 2015, https://www.youtube.com/watch?v=Kmu2DQkqSpA.
33. Fred Kaplan, Dark Territory: The Secret History of Cyber War (New York: Simon & Schuster, 2016), 214, 273; Sanger, The Perfect Weapon, 18.
34. P. W. Singer and Allan Friedman, Cybersecurity and Cyberwar: What Everyone Needs to Know (Oxford, UK: Oxford University Press, 2014), 126.
35. Josef Schrofl, Bahram M. Rajaee, and Dieter Muhr, “Introduction,” in Hybrid and Cyber War as Consequences of the Asymmetry: A Comprehensive Approach Answering Hybrid Actors and Activities in Cyberspace, ed. Josef Schrofl, Bahram M. Rajaee, and Dieter Muhr (Frankfurt: Peter Lang, 2011), 21.
36. Ehsan Ahrari, “US Military Strategic Perspectives on the PRC: New Frontiers of Information-Based War,” Asian Survey 27, no. 12 (December 1997): 1171; Billy E. Pope, A Better State of War: Surmounting the Ethical Cliff in Cyber Warfare (Montgomery, AL: Maxwell AFB, Air University Press, 2014), 39.
37. Fred Kaplan, “The Secret History of Cyber War—SANS Digital Forensics and Incident Response Summer 2017,” SANS Digital Forensics and Incident Response, https://www.youtube.com/watch?v=XKjBlrLct4.
38. Poindexter, The Chinese Information War, 57.
39. Sanger, The Perfect Weapon, 303; Sven Herpig and Thomas Reinhold, “Spotting the Bear: Credible Attribution and Russian Operations in Cyberspace,” in Hacks, Leaks and Disruptions: Russian Cyber Strategies, ed. Nicu Popescu and Stanislav Secrieru (Paris, France: European Union Institute for Security Studies, 2018), 38; Clement Guitton, Inside the Enemy’s Computer: Identifying Cyber-Attackers (London, UK: Hurst & Company, 2017), 154, 186.
40. Libicki, Cyberspace in Peace and War, 52.
41. Conti and Raymond, On Cyber, 231.
42. Kristin M. Lord and Travis Sharp, eds., America’s Cyber Future: Security and Prosperity in the Information Age (Washington, DC: Center for a New American Security, 2011), 27.
43. Richard A. Clarke, Cyber War: The Next Threat to National Security and What to Do about It (New York: Harper Collins, 2010), 197.
44. Conti and Raymond, On Cyber, 231.
45. Leon Panetta, “Remarks by Secretary Panetta on Cybersecurity to the Business Executives for National Security, New York City,” October 11, 2012, http://archive.defense.gov/transcripts/transcript.aspx?transcriptid=5136.
46. Winn Schwartau testimony, quoted in Hearing before the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology, June 27, 1991 (Washington, DC: Government Printing Office, 1991), 10.

47. Schwartau testimony, quoted in Hearing before the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology, June 27, 1991, 94.
48. Richard Stiennon, There Will Be Cyberwar: How the Move to Network-Centric War Fighting Has Set the Stage for Cyberwar (Birmingham, MI: IT-Harvest Press, 2015), 19, 103.
49. Casey Fleming, Eric L. Qualkenbush, and Anthony M. Chapa, “The Secret War against the United States,” Cyber Defense Review 2, no. 3 (Fall 2017): 25.
50. Ronald J. Deibert, Black Code: Surveillance, Privacy, and the Dark Side of the Internet (Toronto, Canada: McClelland & Stewart, 2013), 44–47.
51. The term “Pearl Harbor” appears once in the book, in a subsequent chapter, and overtly in connection with Panetta’s statement. James A. Green, “Introduction,” in Cyber Warfare: A Multidisciplinary Analysis, ed. James A. Green (New York: Routledge, 2015), 2.
52. Poindexter, The Chinese Information War, 208.
53. Conti and Raymond, On Cyber, 178, 198–99.
54. Chris C. Demchak, Wars of Disruption and Resilience: Cybered Conflict, Power, and National Security (Athens: University of Georgia Press, 2011), 175; Brantly, The Decision to Attack, 159.
55. Paul Rosenzweig, Cyber Warfare: How Conflicts in Cyberspace Are Challenging America and Changing the World (Santa Barbara, CA: Praeger, 2013), 58; Hayden, Playing to the Edge, 136.
56. Michael N. Schmitt, ed., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Cambridge, UK: Cambridge University Press, 2017), 169; Michael V. Hayden, “Michael Hayden, Full Q&A, Oxford Union,” Oxford Union, August 25, 2017, https://www.youtube.com/watch?v=exw9HpK_ytl.
57. Rosenzweig, Cyber Warfare, 95.
58. Rosenzweig, Cyber Warfare, 38–44; Cheng, Cyber Dragon, 178, 182.
59. Adam Satariano, “Huawei Security ‘Defects’ Are Found by British Authorities,” The New York Times, March 28, 2019, https://www.nytimes.com/2019/03/28/technology/huawei-security-british-report.html; Julian E. Barnes and Adam Satariano, “US Campaign to Ban Huawei Overseas Stumbles as Allies Resist,” The New York Times, March 17, 2019, https://www.nytimes.com/2019/03/17/us/politics/huawei-ban.html?action=click&module=RelatedCoverage&pgtype=Article&region=Footer; Sanger, The Perfect Weapon, 102; Poindexter, The Chinese Information War, 158–61.
60. Poindexter, The Chinese Information War, 154–56; Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (New York: W. W. Norton, 2015), 30, 86.
61. Elad Yoran and Edward Amoroso, “The Role of Commercial End-to-End Secure Mobile Voice in Cyberspace,” Cyber Defense Review 3, no. 1 (Spring 2018): 60–61; Corera, Cyberspies, 178, 344.
62. Conti and Raymond, On Cyber, 140; E. Lincoln Bonner III, “Cyber Power in 21st Century Joint Warfare,” Joint Forces Quarterly 74 (2014): 104; Laura DeNardis, Global War for Internet Governance (New Haven, CT: Yale University Press, 2014), 203–7.
63. Patrikarakos, War in 140 Characters, 206; Haroon K. Ullah, Digital World War: Islamists, Extremists, and the Fight for Cyber Supremacy (New Haven, CT: Yale University Press, 2017), 92.

64. Kim Zetter, “Security Manual Reveals the OPSEC Advice ISIS Gives Recruits,” Wired, November 19, 2015, https://www.wired.com/2015/11/isis-opsec-encryption-manuals-reveal-terrorist-group-security-protocols.
65. Ullah, Digital World War, 90; Abdel Bari Atwan, Islamic State: The Digital Caliphate (Oakland: University of California, 2015), 26.
66. Sanger, The Perfect Weapon, 82, 94.
67. Chris Gaylord, “SkyGrabber: Is Hacking Military Drones Too Easy?” Christian Science Monitor, December 17, 2009, https://www.csmonitor.com/Technology/Horizons/2009/1217/SkyGrabber-Is-hacking-military-drones-too-easy.
68. Corera, Cyberspies, 110.
69. Hayden, “Michael Hayden, Full Q&A, Oxford Union.”
70. Kim Zetter, Zero Days: Stuxnet and the Launch of the World’s First Digital Weapon (New York: Crown, 2016), 20; Libicki, Cyberspace in Peace and War, 14; Brian M. Mazanec, The Evolution of Cyber War: International Norms for Emerging-Technology Weapons (Lincoln, NE: Potomac, 2015), 175.
71. Singer and Friedman, Cybersecurity and Cyberwar, 116; Rabkin and Yoo, Striking Power, 174.
72. Jenkins, “Cyberwarfare as Ideal War,” 97–98; Panayotis A. Yannakogeorgos, Strategies for Resolving the Cyber Attribution Challenge (Montgomery, AL: Maxwell AFB, Air University, 2013), 60.
73. John Markoff, “A Silent Attack, but Not a Subtle One,” The New York Times, September 26, 2010, https://www.nytimes.com/2010/09/27/technology/27virus.html; Zetter, Zero Days, 97; Jan Neutze, qtd. in “Defending Canadian Democracy from Cyber Attacks,” Public Policy Forum, June 6, 2018, https://www.youtube.com/watch?v=JRWJX0Wbf3Y.
74. Libicki, Cyberspace in Peace and War, 149; Rid, Cyber War, 32–33; Robert Mandel, Optimizing Cyberdeterrence: A Comprehensive Strategy for Preventing Foreign Cyberattacks (Washington, DC: Georgetown University Press, 2017), 100.
75. Kello, “The Meaning of Cyber Revolution,” 14; Zetter, Zero Days, 182.
76. Rabkin and Yoo, Striking Power, 168; Corera, Cyberspies, 291.
77. Martin R. Stytz and Sheila B. Banks, “Toward Attaining Cyber Dominance,” Strategic Studies Quarterly 8, no. 1 (Spring 2014): 60.

CHAPTER 2

1. Deborah A. Liptak, “Information Warfare and Cybersecurity,” in Web of Deceit: Misinformation and Manipulation in the Age of Social Media, ed. Anne P. Mintz (Medford, NJ: CyberAge Books, 2012), 86.
2. Robert Citino, Quest for Decisive Battle: From Stalemate to Blitzkrieg in Europe, 1899–1940 (Lawrence: University of Kansas Press, 2002).
3. Clarke, Cyber War, 31.
4. Timothy L. Thomas, Cyber Silhouettes: Shadows over Information Operations (Fort Leavenworth, KS: Foreign Military Studies Office, 2005), 278.
5. Mandel, Optimizing Cyberdeterrence, 108–12.
6. Gabriel Klein, Marko Jahnke, Jens Tolle, and Peter Martini, “Enhancing Graph-Based Automated DoS Attack Response,” in The Virtual Battlefield: Perspectives on Cyber Warfare, ed. Christian Czosseck and Kenneth Geers (Amsterdam, Netherlands: IOS Press, 2009), 249.

7. Schmitt, Tallinn Manual 2.0, 168.
8. Clifford Stoll, The Cuckoo’s Egg: Tracking a Spy through the Maze of Computer Espionage (New York: Doubleday, 1989).
9. Ron Marks, “Unmasking the Spy: Intelligence Gathering,” Dole Institute of Politics, October 30, 2018, https://www.youtube.com/watch?v=hha5dKGx374.
10. Michael V. Hayden, “General Michael Hayden: Beyond Snowden: An NSA Reality Check, Oxford Union,” Oxford Union, February 18, 2014, https://www.youtube.com/watch?v=ETVH2P2iU-o.
11. Rid, Cyber War, 99.
12. Robert M. Lee et al., “German Steel Mill Cyber Attack,” in Industrial Control Systems (Bethesda, MD: SANS, 2014), 6.
13. R. Goychayev et al., Cyber Deterrence and Stability: Assessing Cyber Weapon Analogues through Existing WMD Deterrence and Arms Control Regimes (Alexandria, VA: National Technical Information Service, 2017), 1.21.
14. DeNardis, Global War, 100–101.
15. Franklin D. Kramer, Stuart H. Starr, and Larry K. Wentz, eds., Cyberpower and National Security (Washington, DC: National Defense University Press, 2009), 177.
16. Christopher Fitzgerald Wrenn, “Strategic Cyber Deterrence” (PhD dissertation, Tufts University, Medford, MA, 2012), 176, 190; Brian Mazanec and Bradley A. Thayer, Deterring Cyber Warfare: Bolstering Strategic Stability in Cyberspace (New York: Palgrave Macmillan, 2015), 18, 35.
17. Cyrus Farivar, “A Brief Explanation of Media Coverage of Cyberattacks,” The Virtual Battlefield, 183.
18. Lene Hansen and Helen Nissenbaum, “Digital Disaster, Cyber Security, and the Copenhagen School,” International Studies Quarterly 53, no. 4 (December 2009): 1169.
19. Jose Nazario, “Politically Motivated Denial of Service Attacks,” The Virtual Battlefield, 166.
20. Wrenn, “Strategic Cyber Deterrence,” 268.
21. Nazario, “Politically Motivated Denial of Service Attacks,” 168–69.
22. Kramer et al., Cyberpower and National Security, 419.
23. Patrick Neal, “Active Cyber Defence: Why We Should Hack Back at the Cyberattackers,” SERENE-RISC, November 6, 2018, https://www.youtube.com/watch?v=uHLcRKZq0jk.
24. Constantinos Kolias et al., “DDoS in the IoT: Mirai and Other Botnets,” Computer 50, no. 7 (2017): 81.
25. Libicki, Cyberspace in Peace and War, 162–63.
26. Bill Blunden and Violet Cheung, Behold a Pale Farce: Cyberwar, Threat Inflation & the Malware Industrial Complex (Walterville, OR: Trine Day, 2014), 239.
27. Mazanec, Evolution of Cyber War, 178.
28. Singer and Friedman, Cybersecurity and Cyberwar, 56–58.
29. Singer and Friedman, Cybersecurity and Cyberwar, 59.
30. Emilio Iasiello, “China’s Three Warfares Strategy Mitigates Fallout from Cyber Espionage Activities,” Journal of Strategic Security 9, no. 2 (Summer 2016): 63.
31. Sanger, The Perfect Weapon, 106.
32. Sanger, The Perfect Weapon, 108.
33. Clarke, Cyber War, 233.
34. Corera, Cyberspies, 194.
35. Clarke, Cyber War, 126–27.

36. Mandiant, APT1: Exposing One of China’s Cyber Espionage Units (Alexandria, VA: Mandiant, 2013), 22.
37. Libicki, Cyberspace in Peace and War, 100–105.
38. Libicki, Cyberspace in Peace and War, 336.
39. Poindexter, The Chinese Information War, 172.
40. Corera, Cyberspies, 195.
41. Cheng, Cyber Dragon, 126.
42. Mandiant, APT1.
43. Guitton, Inside the Enemy’s Computer, 125.
44. Patryk Pawlak, “Protecting and Defending Europe’s Cyberspace,” Hacks, Leaks and Disruptions, 106–8.
45. Robert S. Mueller, III, Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I of II (Washington, DC: U.S. Department of Justice, 2019), 36–38, 173.
46. For more on the Zimmermann telegram, see Thomas Boghardt, The Zimmermann Telegram: Intelligence, Diplomacy, and America’s Entry into World War I (Annapolis, MD: Naval Institute, 2012).
47. Stephen Korns, “Botnets Outmaneuvered,” Armed Forces Journal, January 2009, http://armedforcesjournal.com/botnets-outmaneuvered.
48. United States Armed Forces, Joint Operations JP 3-0, January 17, 2017, changed October 22, 2018, III-38.
49. United States Army, Offense and Defense ADP 3-90, August 2018, 1–2.
50. Conti and Raymond, On Cyber, 88–89, 106.
51. Clarke, Cyber War, 31.
52. Rabkin and Yoo, Striking Power, 166.
53. Demchak, Wars of Disruption and Resilience, 195.
54. Demchak, Wars of Disruption and Resilience, 204; Pope, A Better State of War, 25.
55. Kramer et al., Cyberpower and National Security, 417.
56. Corera, Cyberspies, 190–91.
57. Kenneth Geers, “The Cyber Threat to National Critical Infrastructure: Beyond Theory,” Information Security Journal: A Global Perspective 18, no. 1 (2009): 3; Gregory J. Rattray, “International Collaborative Responses to Cyber Incidences, Panel 4,” Georgetown Journal of International Affairs, International Engagement on Cyber 2012 (2012): 259; Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
58. Richard Stiennon, “A Short History of Cyber Warfare,” Cyber Warfare, 21.
59. Quoted in A. K. Dewdney, “Computer Recreations: Of Worms, Viruses and Core War,” Scientific American 260, no. 3 (March 1989): 110.
60. Zetter, Zero Days, 315.
61. Libicki, Cyberspace in Peace and War, 50; Kramer et al., Cyberpower and National Security, 424.
62. Stiennon, “A Short History of Cyber Warfare,” Cyber Warfare, 23.
63. Libicki, Cyberspace in Peace and War, 49–50.
64. Rid, Cyber War, 48–49.
65. Zetter, Zero Days, 279.
66. Maura Conway, “Reality Check: Assessing the (Un)Likelihood of Cyberterrorism,” in Counterterrorism: Understanding, Assessment, and Response, ed. Thomas M. Chen, Lee Jarvis, and Stuart Macdonald (New York: Springer, 2014), 109–110.

67. Kramer et al., Cyberpower and National Security, 449.
68. Jan Tighe, “Cyber Warfare in the Maritime Domain,” CSIS, September 14, 2017, https://www.youtube.com/watch?v=qeEBiRunR0Y.
69. Elizabeth Zornes, qtd. in “CIO Association of Canada: Cybersecurity—Whose Job Is It Anyway?” BC Aware, February 10, 2016, https://www.youtube.com/watch?v=u2dropOG3qc.
70. Scott A. Weed, US Policy Response to Cyber Attack on SCADA Systems Supporting Critical National Infrastructure (Montgomery, AL: Maxwell AFB, Air University Press, 2017), 28.
71. Wrenn, “Strategic Cyber Deterrence,” 195, 284.
72. Martin Libicki, Cyberdeterrence and Cyberwar (Santa Monica, CA: RAND, 2009), 199.
73. Nazario, “Politically Motivated Denial of Service Attacks,” 172–79; Rid, Cyber War, 169.
74. Zetter, Zero Days, 388.
75. Mazanec, Evolution of Cyber War, 183.
76. Zetter, Zero Days, 100–105, 111.
77. Schneier, Data and Goliath, 146–47.
78. Libicki, Cyberdeterrence and Cyberwar, 158.
79. Zetter, Zero Days, 222.
80. Libicki, Cyberdeterrence and Cyberwar, 143; Stiennon, There Will Be Cyberwar, 102; Goychayev, Cyber Deterrence, 1.35.
81. Kramer et al., Cyberpower and National Security, 165.
82. Zetter, Zero Days, 289, 388.
83. Zetter, Zero Days, 380.
84. Singer and Friedman, Cybersecurity and Cyberwar, 62–63.
85. John Weigelt, qtd. in “Cloud and Cyber Security in Canada,” nGage Events, November 27, 2017, https://www.youtube.com/watch?v=2mwWG26tZa8.
86. Rid, Cyber War, 51.
87. Kevin Jones, qtd. in “Discussion: Emerging Technologies and Cyber Security—CyCon 2018,” NATOCCDCOE, August 14, 2018, https://www.youtube.com/watch?v=tm1GbB57m_w.
88. Clarke, Cyber War, 132; Panayotis A. Yannakogeorgos, “Rethinking the Threat of Terrorism,” Counterterrorism, 56.
89. Libicki, Cyberspace in Peace and War, 76–77.
90. Rid, Cyber War, 68–73.
91. Sanger, The Perfect Weapon, 48.
92. Sanger, The Perfect Weapon, 166.
93. Jordan Robertson and Michael Riley, “How Hackers Took Down a Power Grid,” Bloomberg, January 14, 2016, https://www.bloomberg.com/news/articles/2016-01-14/how-hackers-took-down-a-power-grid.
94. Jayson M. Spade, Information as Power: China’s Cyber Power and America’s National Security (Carlisle, PA: Army War College Press, 2012), 26.
95. Piret Pernik, “The Early Days of Cyberattacks: The Cases of Estonia, Georgia, and Ukraine,” Hacks, Leaks and Disruptions, 64.
96. Yannakogeorgos, “Rethinking the Threat of Terrorism,” 56; Weed, US Policy Response, 9.

CHAPTER 3
1. Goychayev et al., Cyber Deterrence, 1.34.
2. Rid, Cyber War, 9, 166.
3. Colin S. Gray, Making Strategic Sense of Cyber Power: Why the Sky Is Not Falling (Carlisle, PA: Strategic Studies Institute, 2013), x.
4. Mandel, Optimizing Cyberdeterrence, 114.
5. Singer and Friedman, Cybersecurity and Cyberwar, 39.
6. Conti and Raymond, On Cyber, 41.
7. Kati Pohjanpalo, “Finland Detects Cyber Attack on Online Election-Results Service,” Bloomberg, April 10, 2019, https://www.bloomberg.com/news/articles/2019-04-10/finland-detects-cyber-attack-on-online-election-results-service.
8. Michael Schwirtz and Sheera Frenkel, “In Ukraine, Russia Tests a New Facebook Tactic in Election Tampering,” New York Times, March 29, 2019, https://www.nytimes.com/2019/03/29/world/europe/ukraine-russia-election-tampering-propaganda.html; Alina Polyakova, “Want to Know What’s Next in Russian Election Interference? Pay Attention to Ukraine’s Elections,” March 28, 2019, https://www.brookings.edu/blog/order-from-chaos/2019/03/28/want-to-know-whats-next-in-russian-election-interference-pay-attention-to-ukraines-elections.
9. Guitton, Inside the Enemy’s Computer, 107.
10. Rabkin and Yoo, Striking Power, 175.
11. Pope, A Better State of War, 5, 28.
12. Pope, A Better State of War, 39–42.
13. Jenkins, “Cyberwarfare as Ideal War,” 97–99.
14. Jenkins, “Cyberwarfare as Ideal War,” 111.
15. Randall R. Dipert, “Distinctive Ethical Issues of Cyberwarfare,” Binary Bullets, 57.
16. Schmitt, Tallinn Manual 2.0, 445.
17. Christopher S. Yoo, The Dynamic Internet: How Technology, Users, and Businesses Are Transforming the Network (Washington, DC: American Enterprise Institute, 2012).
18. Costs imposed on the public sector were estimated at 6.5 million Estonian kroons, and costs for the largest bank were estimated as between 10 million and 1 billion kroons. This equates to $560,000 for the public sector and between $900,000 and $9 million in contemporary U.S. dollars. Pernik, “The Early Days of Cyberattacks,” 56–58.
19. Pernik, “The Early Days of Cyberattacks,” 62.
20. Rosenzweig, Cyber Warfare, 72.
21. Boden was subsequently charged and convicted of the attack and sentenced to two years in prison. Brantly, The Decision to Attack, 73; DeNardis, Global War, 87.
22. Hayden, Playing to the Edge, 152.
23. Zetter, Zero Days, 326–27.
24. Lee et al., “German Steel Mill Attack.”
25. Brian M. Mazanec, The Evolution of Cyber War: International Norms for Emerging-Technology Weapons (Lincoln, NE: Potomac, 2015), 233.
26. Elizabeth Dubois, qtd. in “Defending Canadian Democracy from Cyber Attacks.”

27. Marks, “Unmasking the Spy”; Sanger, The Perfect Weapon, 213.
28. Kello, “The Meaning of Cyber Revolution,” 32.
29. Bill Gertz, iWar: War and Peace in the Information Age (New York City: Threshold, 2017), 189.
30. Thomas, Cyber Silhouettes, 279–80.
31. Cheng, Cyber Dragon, 89.
32. Sanger, The Perfect Weapon, 239.
33. Tighe, “Cyber Warfare in the Maritime Domain.”
34. Libicki, Cyberspace in Peace and War, 132.
35. Heather M. Roff, “Cyber Perfidy, Ruse, and Deception,” Binary Bullets, 221.
36. Libicki, Cyberdeterrence and Cyberwar, 54–55, 155–56.
37. Singer and Friedman, Cybersecurity and Cyberwar, 153.
38. Mazanec and Thayer, Deterring Cyber Warfare, 16.
39. Clarke, Cyber War, 145.
40. Mandel, Optimizing Cyberdeterrence, 158; Pope, A Better State of War, 47.
41. Lord and Sharp, America’s Cyber Future, 12; Schneier, “Keynote by Mr. Bruce Schneier—CyCon 2018.”
42. Goychayev et al., Cyber Deterrence, 1.21.
43. In a subsequent book published seven years later, Libicki reframed this by writing that “contrary to the popular notion that offense dominates in cyberspace, countermeasures are often cheaper to develop than measures,” but “what can make countermeasures expensive is testing and distributing them.” Libicki, Cyberdeterrence and Cyberwar, 32; Libicki, Cyberspace in Peace and War, 156.
44. Weed, US Policy Response, 28.
45. Brantly, The Decision to Attack, 159.
46. Brantly, The Decision to Attack, 74.
47. Hayden, qtd. in “Citizen Soldier.”
48. Corera, Cyberspies, 284.
49. Mandel, Optimizing Cyberdeterrence, 123.
50. Conti and Raymond, On Cyber, 2.
51. Regarding the declaration of capabilities and vulnerabilities, see Goychayev et al., Cyber Deterrence, 1.35. For a description of the psychological impact of a cyberweapon’s effectiveness, see Mandel, Optimizing Cyberdeterrence.
52. Rid, Cyber War, 168.
53. Libicki, Cyberspace in Peace and War, 61.
54. Libicki, Cyberspace in Peace and War, 311, italics in original.
55. Lee et al., “German Steel Mill Attack”; Liptak, “Information Warfare and Cybersecurity,” 100–101.
56. Jason Healey, qtd. in “Citizen Soldier.”
57. Conti and Raymond, On Cyber, 139, 257.
58. Jasper, Strategic Cyber Deterrence, 113, 167.
59. Arguably, official characterizations caused further consternation. Director of National Intelligence James Clapper refused during his testimony to Congress to refer to the OPM hack as an “attack,” specifically pointing to its instead being an intelligence action formally deemed acceptable rather than illicit—and in contrast to OPM’s handling of compromised persons’ identification, which was handled as if it were the result of cybercrime. Sanger, The Perfect Weapon, 116–17.
60. Libicki, Cyberspace in Peace and War, 24, 154, 163.

61. Libicki, Cyberspace in Peace and War, 78.
62. Libicki, Cyberspace in Peace and War, 25, italics in original.
63. Dorothy Denning, “Cyberwarriors: Activists and Terrorists Turn to Cyberspace,” Harvard International Review 23, no. 2 (Summer 2001): 74; Libicki, Cyberspace in Peace and War, 25, 36.
64. Brandon Valeriano and Ryan C. Maness, Cyber War versus Cyber Realities: Cyber Conflict in the International System (Oxford: Oxford University, 2015), 220; Blunden and Cheung, Behold a Pale Farce, 422.
65. Jones, “Discussion.”
66. Mandel, Optimizing Cyberdeterrence, 100.
67. Panayotis A. Yannakogeorgos and Adam B. Lowther, eds., Conflict and Cooperation in Cyberspace: The Challenge to National Security (Boca Raton, FL: Taylor & Francis, 2014), 142.
68. The Russo-Georgian War 2008: The Role of Cyber Attacks in the Conflict (Fairfax, VA: Armed Forces Communications and Electronics Association, 2012), 23; Libicki, Cyberdeterrence and Cyberwar, 137.
69. Conti and Raymond, On Cyber, 47; Mandel, Optimizing Cyberdeterrence, 154–57.
70. Conti and Raymond, On Cyber, 228.
71. Thomas, Cyber Silhouettes, 112; Cheng, Cyber Dragon, 91; Conway, “Reality Check,” 111; Liptak, “Information Warfare and Cybersecurity,” 94.
72. Brantly, The Decision to Attack, 58.
73. Tighe, “Cyber Warfare in the Maritime Domain.”
74. Kello, “The Meaning of Cyber Revolution,” 19, 26; Libicki, Cyberspace in Peace and War, 324–29.
75. Chris Walls, “ILD 2012 Panel Discussion: Cyber Attacks: The Operators Perspective,” U.S. Naval War College, October 10, 2012, https://www.youtube.com/watch?v=pO1a7IfKzAk.
76. Conti and Raymond, On Cyber, 192.
77. Clarke, Cyber War, 158.
78. Rid, Cyber War, 79.
79. David P. Fidler, “The Path to Less Lethal and Destructive War?” International Humanitarian Law, 324.
80. Pope, A Better State of War, 60.
81. Jenkins, “Cyberwarfare as Ideal War,” 99; Matthew Beard, “Beyond Tallinn: The Code of the Cyberwarrior?” Binary Bullets, 155; Daphna Canetti et al., “Immune from Cyberfire? The Psychological and Physiological Effects of Cyberwarfare,” Binary Bullets, 172.
82. Rabkin and Yoo, Striking Power, 56.
83. George A. Lucas, “Emerging Norms for Cyberwarfare,” Binary Bullets, 30.
84. Martin C. Libicki, “Sub Rosa Cyber War,” The Virtual Battlefield, 53.
85. Corera, Cyberspies, 298; Lord and Sharp, America’s Cyber Future, 23.
86. Conti and Raymond, On Cyber, 45.

CHAPTER 4
1. Brantly, The Decision to Attack, 87, 159; Mandel, Optimizing Cyberdeterrence, 5; Gertz, iWar, 161.

2. Emilio Iasiello, “Is Cyber Deterrence an Illusory Course of Action?” ASPJ Africa & Francophonie 8, no. 1 (2018): 39.
3. Corera, Cyberspies, 181.
4. Christopher A. Ford, “Here Come the Cyber-Privateers,” Hudson Institute, July 19, 2010, https://www.hudson.org/research/9112-here-come-the-cyber-privateers.
5. Bruce Schneier, “How to Survive a Cyberattack, Bruce Schneier on Cyberwar in the 21st Century,” Hidden Forces, September 19, 2018, https://www.youtube.com/watch?v=9Oja9nngwRg.
6. Nicu Popescu and Stanislav Secrieru, “Conclusion: Russia—From Digital Outlier to Great Superpower,” Hacks, Leaks and Disruptions, 117.
7. Libicki, Cyberspace in Peace and War, 317.
8. Libicki, Cyberspace in Peace and War, 183.
9. Lee, “German Steel Mill Attack,” 5.
10. Thomas, Cyber Silhouettes, 278.
11. Donald Boian, “ILD 2012 Panel Discussion,” https://www.youtube.com/watch?v=pO1a7IfKzAk.
12. Wrenn, “Strategic Cyber Deterrence,” 208.
13. Libicki, Cyberspace in Peace and War, 214; Herpig and Reinhold, “Spotting the Bear,” 37.
14. Guitton, Inside the Enemy’s Computer, 147–50.
15. Yannakogeorgos and Lowther, Conflict and Cooperation, 55.
16. Neil C. Rowe, “The Attribution of Cyber Warfare,” Cyber Warfare, 61–72; Nazario, “Politically Motivated Denial of Service Attacks,” 166.
17. Zetter, Zero Days, 307.
18. Mandel, Optimizing Cyberdeterrence, 204; Conti and Raymond, On Cyber, 243; Rid, Cyber War, 152; Jasper, Strategic Cyber Deterrence, 42.
19. Guitton, Inside the Enemy’s Computer.
20. Yannakogeorgos and Lowther, Conflict and Cooperation, 58, 71; Yannakogeorgos, Strategies for Resolving the Cyber Attribution Challenge, 15, 27, 65.
21. Yannakogeorgos and Lowther, Conflict and Cooperation, 202.
22. Patrikarakos, War in 140 Characters, 171–90. Koen Gijsbergs, a Dutch research fellow at the University of Oxford, has suggested that even improved attribution may not deliver better enforcement of appropriate international behavior: “There is hard evidence that the rocket [used to shoot down MH-17] came from Russia. Russia denies [it]. And what’s going to happen—the international community is not doing too much; two nations have now made Russia liable for the activity, Australia and the Netherlands. . . . And that was 300 people killed” rather than the attribution of a non-kinetic cyberattack. Koen Gijsbergs, qtd. in “Panel: Defending a Nation against Cyber Attack—CyCon 2018,” June 20, 2018, https://www.youtube.com/watch?v=SM2hQUPcYOE.
23. Atwan, Islamic State, 17, 29, 62.
24. Blunden and Cheung, Behold a Pale Farce, 94.
25. Andrei Soldatov and Irina Borogan, The Red Web: The Kremlin’s War on the Internet (New York: Public Affairs, 2015); Andrei Soldatov and Irina Borogan, “Russia’s Approach to Cyber: The Best Defence Is a Good Offence,” Hacks, Leaks and Disruptions, 23.
26. Libicki, Cyberspace in Peace and War, 248.

27. Michael Gross, “Enter the Cyber Dragon,” Vanity Fair, September 2011, https://www.vanityfair.com/news/2011/09/chinese-hacking-201109; Libicki, Cyberspace in Peace and War, 243.
28. Herpig and Reinhold, “Spotting the Bear,” 40.
29. Quoted in Philip Larrey, Connected World: From Automated Work to Virtual Wars: The Future, by Those Who Are Shaping It (New York: Penguin, 2017), 144.
30. Rid, Cyber War, 128–29.
31. Parmy Olson, We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency (New York: Little, Brown and Company, 2012), 6–21, 25; Singer and Friedman, Cybersecurity and Cyberwar, 83.
32. Olson, We Are Anonymous, 35, 50–52.
33. Singer and Friedman, Cybersecurity and Cyberwar, 82.
34. Olson, We Are Anonymous, 89, 343, 348.
35. Olson, We Are Anonymous, 383.
36. Singer and Brooking, LikeWar, 117.
37. Olson, We Are Anonymous, 113, 411.
38. Olson, We Are Anonymous, 74–79, 114–22.
39. Hayden, qtd. in “Citizen Soldier”; Timothy McKenzie, Is Cyber Deterrence Possible? (Montgomery, AL: Maxwell AFB, Air University Press, 2017), 10.
40. Jason Healey, ed., A Fierce Domain: Conflict in Cyberspace, 1986 to 2012 (Vienna, VA: Cyber Conflict Studies Association, 2012), 146; Demchak, Wars of Disruption and Resilience, 194–95.
41. Demchak, Wars of Disruption and Resilience, 206; Wrenn, “Strategic Cyber Deterrence,” 210–11; Farivar, “A Brief Explanation of Media Coverage of Cyberattacks,” 183.
42. The Russo-Georgian War 2008, 15. A virtually identical characterization later appeared in Healey’s A Fierce Domain, 201.
43. Beard, “Beyond Tallinn,” 151.
44. Schmitt, Tallinn Manual 2.0, 408–9.
45. Pope, A Better State of War, 39.
46. Olson, We Are Anonymous, 138.
47. Mandel, Optimizing Cyberdeterrence, 185–86.
48. Jasper, Strategic Cyber Deterrence, 183.
49. Healey, A Fierce Domain, 202.
50. Healey, A Fierce Domain, 149–51.
51. Healey, A Fierce Domain, 136–42.
52. Singer and Brooking, LikeWar, 191.
53. Conti and Raymond, On Cyber, 268.
54. William E. Parker IV, Cyber Workforce Retention (Montgomery, AL: Maxwell AFB, Air University Press, 2016), 5–6.
55. Conti and Raymond, On Cyber, 268–69.
56. Panayotis A. Yannakogeorgos and John P. Geis II, The Human Side of Cyber Conflict: Organizing, Training, and Equipping the Air Force Cyber Workforce (Montgomery, AL: Maxwell AFB, Air Force Research Institute, 2016), 52–53, 100.
57. Yannakogeorgos and Geis, The Human Side, 73.
58. Conti and Raymond, On Cyber, 271.
59. Nicholas Michael Sambaluk, “Training Tomorrow’s Cyber Warriors,” in Cyber Warfare, ed. Paul J. Springer (Santa Barbara, CA: ABC-CLIO, 2015), 116–19.

60. Conti and Raymond, On Cyber, 192–95.
61. Mueller, Networks and States, 241.
62. Singer and Friedman, Cybersecurity and Cyberwar, 186.
63. Roger Hurwitz, “Depleted Trust in the Cyber Commons,” Strategic Studies Quarterly 6, no. 3 (Fall 2012): 36.
64. Then-president Barack Obama declared in his 2011 State of the Union address that “our infrastructure used to be the best, but our lead has slipped. South Korean homes now have greater Internet access than we do.” This was part of a section arguing for greater infrastructure investment across the board and pointing to different U.S. infrastructure shortcomings relative to several countries. Barack Obama, “Remarks by the President in State of Union Address,” https://obamawhitehouse.archives.gov/the-press-office/2011/01/25/remarks-president-state-union-address.
65. Alexander Klimburg, The Darkening Web: The War for Cyberspace (New York: Penguin, 2017), 392–93.
66. Brantly, The Decision to Attack, 138–39.
67. This is a comparison of Brantly’s findings about cyber power, alongside statistics on the most powerful militaries and the largest economies as of 2018. Even extending the consideration to the countries with a position somewhere in the top two dozen spaces for each of the lists yields only 15 countries. “The 25 most powerful militaries in the world,” quoted in Christopher Woody and Jenny Cheng, “Here’s the Hardware the World’s Top 25 Militaries Have in Their Arsenals,” Business Insider, March 1, 2018, https://www.businessinsider.com/here-are-the-worlds-most-powerful-militaries-2018-2; “Report for Selected Countries and Subjects,” International Monetary Fund, https://www.imf.org/external/pubs/ft/weo/2018/02/weodata/index.aspx.
68. Alan D. Campen and Douglas H. Dearth, eds., Cyberwar 3.0: Human Factors in Information Operations and Future Conflict (Fairfax, VA: AFCEA International Press, 2000), 58.
69. Klimburg, The Darkening Web, 82.
70. Michael Pal, qtd. in “Defending Canadian Democracy from Cyber Attacks”; Pope, A Better State of War, 29; Scott W. Beidleman, “Defining and Deterring Cyber War,” Master’s thesis (Carlisle, PA: Army War College, 2009).
71. Campen and Dearth, Cyberwar 3.0, 34.
72. Healey, A Fierce Domain, 22.
73. Rosenzweig, Cyber Warfare, 204.
74. Chris C. Demchak and Peter Dombrowski, “Rise of a Cybered Westphalian Age,” Strategic Studies Quarterly 5, no. 1 (Spring 2011): 32–61.
75. Chris C. Demchak and Peter Dombrowski, “Cyber Westphalia: Asserting State Prerogatives in Cyberspace,” Georgetown Journal of International Affairs: International Engagement on Cyber III (2013–14): 33.
76. Jasper, Strategic Cyber Deterrence, 144.
77. Rosenzweig, Cyber Warfare, 207; Rabkin and Yoo, Striking Power, 228, 236.
78. Rosenzweig, Cyber Warfare, 206.
79. David Faris, “Architectures of Control and Mobilization in Egypt and Iran,” in Social Media in Iran: Politics and Society after 2009, ed. David M. Faris and Babak Rahimi (Albany: SUNY, 2015), 204.
80. Cheng, Cyber Dragon, 63, 66, 74.

81. Poindexter, The Chinese Information War, 172.
82. Quoted in Michael Kolton, “Interpreting China’s Pursuit of Cyber Sovereignty and Its Views on Cyber Deterrence,” Cyber Defense Review 2, no. 1 (Spring 2017): 129.
83. Simon Sharwood, “Chinese President Xi Seeks Innovation Independence,” The Register, June 1, 2018, https://www.theregister.co.uk/2018/06/01/xi_xinping_science_technology_policy_speech.
84. Jasper, Strategic Cyber Deterrence, 152.
85. Singer and Brooking, LikeWar, 85; Jacques deLisle, Avery Goldstein, and Guobin Yang, “Introduction,” in The Internet, Social Media, and a Changing China, ed. Jacques deLisle, Avery Goldstein, and Guobin Yang (Philadelphia: University of Pennsylvania, 2016), 4; Tang Lan and Zhang Xin, “Can Cyber Deterrence Work?” in Global Cyber Deterrence: Views from China, the US, Russia, India, and Norway, ed. Andrew Nagorski (New York: EastWest Institute, 2010), 2.
86. Iasiello, “China’s Three Warfares,” 59.
87. Klimburg, The Darkening Web, 107.
88. Goychayev et al., Cyber Deterrence, 1.74; Michael N. Schmitt and Lisa Vihul, “The Emergence of International Legal Norms for Cyberconflict,” Binary Bullets, 40; Elena Chernenko, “Russia’s Cyber Diplomacy,” Hacks, Leaks and Disruptions, 47–48.
89. Lord and Sharp, America’s Cyber Future, 26; Siim Alatalu, “NATO’s Responses to Cyberattacks,” Hacks, Leaks and Disruptions, 97; Josef Schrofl, Bahram M. Rajaee, and Dieter Muhr, “Summary and Outlook,” Hybrid and Cyber War as Consequences of the Asymmetry, 295.
90. Keith Tresh and Maxim Kovalsky, “Toward Automated Information Sharing: California Cybersecurity Integration Center’s Approach to Improve on the Traditional Information Sharing Models,” Cyber Defense Review 3, no. 2 (Summer 2018): 30; Weed, US Policy Response, 22.
91. Benjamin Peters, How Not to Network a Nation: The Uneasy History of the Soviet Internet (Cambridge, MA: MIT, 2016), 2.
92. Klimburg, The Darkening Web, 19.
93. For detailed considerations of internet governance issues, see DeNardis, Global War; Mueller, Networks and States; and Yoo, The Dynamic Internet.
94. For example, Canada’s Director General for the national government’s cybersecurity entity noted to an audience that cooperation “with our international allies and partners” will reduce cybersecurity threats and that “we’re also hopefully going to be able to implant Canadian values into the use of the internet and IT products and services worldwide.” Colleen Merchant, “Canada’s Vision for Security and Prosperity in the Digital Age,” SERENE-RISC, November 16, 2018, https://www.youtube.com/watch?v=d2uwmwfKdDQ.
95. Mueller, Networks and States, 131, 136.
96. Nina Hachigian, “China’s Cyber-Strategy,” Foreign Affairs 80, no. 2 (March–April 2001): 118.
97. Schneier, Data and Goliath, 70; Singer and Brooking, LikeWar, 85–91.
98. Sanger, The Perfect Weapon, 109.
99. Singer and Brooking, LikeWar, 91.
100. Liptak, “Information Warfare and Cybersecurity,” 94; Deibert, Black Code, 97.

101. Singer and Brooking, LikeWar, 236.
102. Singer and Brooking, LikeWar, 216–18.
103. Greg Conti, Googling Security: How Much Does Google Know about You? (Upper Saddle River, NJ: Addison-Wesley, 2009), 128.
104. Conti, Googling Security, 130.

CHAPTER 5
1. Singer and Brooking, LikeWar, 44.
2. Betsy Schiffman, “Status Update: Facebook Is Letting Users Drop the ‘Is’,” Wired, https://www.wired.com/2007/11/status-update-f; Lisa Ellen Silvestri, Friended at the Front: Social Media in the American War Zone (Lawrence: University of Kansas Press, 2015), 69; “System and Method for Dynamically Providing a News Feed about a User of a Social Network,” Espacenet Patent Search, https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=7669123&KC=&FT=E&locale=en_EP; Elizabeth Stinson, “Facebook Reactions, the Totally Redesigned Like Button, Is Here,” Wired, February 24, 2016, https://www.wired.com/2016/02/facebook-reactions-totally-redesigned-like-button.
3. Transcribed in full in Sam Gustin, “Read Facebook CEO Mark Zuckerberg’s IPO Letter,” Time, February 1, 2012, http://business.time.com/2012/02/01/read-facebook-ceo-mark-zuckerbergs-ipo-letter.
4. Silvestri, Friended at the Front, 57.
5. Transcribed in full in Sam Gustin, “Read Facebook CEO Mark Zuckerberg’s IPO Letter.”
6. Total revenue figures show the company bringing in nearly $18 billion in 2015, over $27 billion in 2016, over $40 billion in 2017, and nearly $56 billion in 2018. Net income was about $3 billion, $10 billion, $16 billion, and $22 billion, respectively. Thus, while the company’s comparatively low costs are rising, revenue continues to mount more steeply still. “FB Company Financials,” NASDAQ, March 12, 2019, https://www.nasdaq.com/symbol/fb/financials.
7. Deibert, Black Code, 58–60.
8. Katherine A. Kaplan, “Facemash Creator Survives Ad Board,” The Harvard Crimson, November 19, 2003, https://www.thecrimson.com/article/2003/11/19/facemash-creator-survives-ad-board-the.
9. A typo in Bruce Schneier’s Data and Goliath incorrectly dates this event as occurring in 2012. Schneier, Data and Goliath, 115.
10. Alex Hern, “OKCupid: We Experiment on Users. Everyone Does,” The Guardian, July 29, 2014, https://www.theguardian.com/technology/2014/jul/29/okcupid-experiment-human-beings-dating.
11. Atwan, Islamic State, 67.
12. Patrikarakos, War in 140 Characters, 104, 107–9, 122–29.
13. Patrikarakos, War in 140 Characters, 129.
14. Singer and Brooking, LikeWar, 164.
15. Singer and Brooking, LikeWar, 128, 143; Klimburg, The Darkening Web, 368; Nitin Agarwal et al., “Examining the Use of Botnets and Their Evolution for Propaganda Dissemination,” Defence Strategic Communications 2 (March 2017): 94.
16. Larrey, Connected World, 33–34.
17. Singer and Brooking, LikeWar, 201–2.

18. Klimburg, The Darkening Web, 266, 311; Yannakogeorgos and Lowther, Conflict and Cooperation, 171.
19. Michael Neiberg, “America and the Unintended Consequences of War—Michael Neiberg,” National WWI Museum and Memorial, November 4, 2017, https://www.youtube.com/watch?v=NEFnkXlaah4.
20. Singer and Brooking, LikeWar, 204.
21. Singer and Brooking, LikeWar, 210.
22. Singer and Brooking, LikeWar, 213.
23. Schneier, “How to Survive a Cyberattack.”
24. Larrey, Connected World, 278.
25. Jarred Prier, “Commanding the Trend: Social Media as Information Warfare,” Strategic Studies Quarterly 11, no. 4 (Winter 2017): 53.
26. Larrey, Connected World, 257.
27. Patrikarakos, War in 140 Characters, 31, 73.
28. Prier, “Commanding the Trend,” 54.
29. Prier, “Commanding the Trend,” 54.
30. Lahle Wolfe, “Twitter User Statistics 2008 through 2017,” The Balance Careers, https://www.thebalancecareers.com/twitter-statistics-2008-2009-2010-2011-3515899; “Number of Monthly Active Twitter Users Worldwide from 1st Quarter 2010 to 4th Quarter 2018,” Statista, https://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users.
31. Singer and Brooking, LikeWar, 202–8.
32. Singer and Brooking, LikeWar, 4–5.
33. Prier, “Commanding the Trend,” 63.
34. J. M. Berger and Jonathon Morgan, The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter (Washington, DC: Brookings Institute, 2015), 25.
35. Berger and Morgan, The ISIS Twitter Census, 7.
36. Berger and Morgan, The ISIS Twitter Census, 3.
37. Prier, “Commanding the Trend,” 55.
38. Prier, “Commanding the Trend,” 64.
39. Brian L. Steed, ISIS: An Introduction and Guide to the Islamic State (Santa Barbara, CA: ABC-CLIO, 2016), 65.
40. Prier, “Commanding the Trend,” 52.
41. Ullah, Digital World War, 40; Ashton Carter, “Remarks by Secretary Carter in a Media Availability Aboard USS John C. Stennis in the South China Sea,” April 15, 2016, https://dod.defense.gov/News/Transcripts/Transcript-View/Article/722593/remarks-by-secretary-carter-in-a-media-availability-aboard-uss-john-c-stennis-i.
42. Singer and Brooking, LikeWar, 208.
43. Atwan, Islamic State, 27; Prier, “Commanding the Trend,” 64; Singer and Brooking, LikeWar, 209.
44. Twitter, “An Update on Our Efforts to Combat Violent Extremism,” August 18, 2016, https://blog.twitter.com/official/en_us/a/2016/an-update-on-our-efforts-to-combat-violent-extremism.html.
45. Prier, “Commanding the Trend,” 65.
46. Prier, “Commanding the Trend,” 65–66.
47. Singer and Brooking, LikeWar, 121.

48. Dubois, qtd. in “Defending Canadian Democracy from Cyber Attacks.”
49. Sanger, The Perfect Weapon, 183.
50. Bertrand Boyer, “Countering Hybrid Threats in Cyberspace,” Cyber Defense Review, February 2017, https://cyberdefensereview.army.mil/CDR-Content/Articles/Article-View/Article/1134632/countering-hybrid-threats-in-cyberspace.
51. Singer and Brooking, LikeWar, 125.
52. Prier, “Commanding the Trend,” 68–71.
53. Singer and Brooking, LikeWar, 130.
54. Media Ajir and Bethany Vailliant, “Russian Information Warfare: Implications for Deterrence Theory,” Strategic Studies Quarterly 12, no. 3 (Fall 2018): 75–76.
55. Soldatov and Borogan, The Red Web, 210.
56. Soldatov and Borogan, The Red Web, 284, 292–93.
57. Cheng, Cyber Dragon, 74; Klimburg, The Darkening Web, 257.
58. Kramer et al., Cyberpower and National Security, 161.
59. The rest of the top five sites are Facebook, Baidu, and Wikipedia, in descending order. Notably, one of these is a social media giant and another is a web search tool. “The Top 500 Sites on the Web,” Alexa, https://www.alexa.com/topsites.
60. Singer and Brooking, LikeWar, 203, 237; Yannakogeorgos and Lowther, Conflict and Cooperation, 169; Olson, We Are Anonymous, 130.
61. Glenn Alexander Crowther, “The Cyber Domain,” Cyber Defense Review 2, no. 3 (Fall 2017): 74; Kramer et al., Cyberpower and National Security, 152.
62. Sanger, The Perfect Weapon, 243; Gertz, iWar, 227.
63. Paul Klench Rozumski, “The Rise of Social Media and Its Role in Future Protests and Revolutions,” in Evolution of Cyber Technologies and Operations to 2035, ed. Misty Blowers (New York: Springer, 2015), 147.
64. “Twitter has served as a primary medium for political campaigning at least since the election of then-US President Barack Obama in 2012. However, the platform is vulnerable to manipulation by small groups of users.” Ben Nimmo, Measuring Traffic Manipulation on Twitter (Oxford, UK: Oxford University Press, 2018), 5. This sense of surprise may stem from a widespread affinity for that administration clouding awareness that social mobilization inherently involves the leveraging of effective tools to garner brand recognition and support. The surprising element should not be that undemocratic entities used a tool that had been proven through democratic utilization; rather, the surprise is that the prospect of unsavory elements using tools of freshly proven value was not anticipated.
65. Sanger, The Perfect Weapon, 243.
66. Klimburg, The Darkening Web, 129.

CHAPTER 6
1. For more, see “Comparing Today’s Computers to 1995’s,” Relatively Interesting, https://www.relativelyinteresting.com/comparing-todays-computers-to-1995s.
2. Kramer et al., Cyberpower and National Security, 346; Deibert, Black Code, 89–90.
3. Singer and Brooking, LikeWar, 45.
4. Campen and Dearth, Cyberwar 3.0, 166; Singer and Brooking, LikeWar, 48.
5. Singer and Brooking, LikeWar, 42; Rosenzweig, Cyber Warfare, 216.

6. For a description of the disenfranchisement in . in “Defending Canadian Democracy from Cyber Attacks.”
7. Singer and Brooking, LikeWar, 122.
8. Silvestri, Friended at the Front, 46–59.
9. Singer and Brooking, LikeWar, 124–25.
10. Patrikarakos, War in 140 Characters, 264; Beata Bialy, “Social Media—From Social Exchange to Battlefield,” Cyber Defense Review 2, no. 2 (Summer 2017): 74; Chase Buckle, “Top 10 Reasons for Using Social Media among Facebookers,” GlobalWebIndex, July 6, 2016, https://blog.globalwebindex.com/chart-of-the-day/top-10-reasons-for-using-social-media-among-facebookers; Prier, “Commanding the Trend,” 58.
11. Rid, Cyber War, 134.
12. Yannakogeorgos and Lowther, Conflict and Cooperation, 162.
13. Singer and Brooking, LikeWar, 77.
14. John Larson, quoted in Campen and Dearth, Cyberwar 3.0, 213.
15. Reza Masoudi Nejad, “Trans-spatial Public Action: The Geography of Iranian Post-Election Protests in the Age of Web 2.0,” Social Media in Iran, 178.
16. Faris, “Architectures of Control and Mobilization in Egypt and Iran,” 209; Xymena Kurowska and Anatola Reshetnikov, “Russia’s Trolling Complex at Home and Abroad,” Hacks, Leaks and Disruptions, 26; Pal, qtd. in “Defending Canadian Democracy from Cyber Attacks”; DeNardis, Global War, 57.
17. Singer and Brooking, LikeWar, 238.
18. DeNardis, Global War, 140–49.
19. Larrey, Connected World, 302.
20. Marks, “Unmasking the Spy.”
21. Singer and Brooking, LikeWar, 238.
22. Conti, Googling Security, 211.
23. Larrey, Connected World, 60, 240–48.
24. See Samuel P. Huntington’s The Soldier and the State: The Theory and Politics of Civil-Military Relations (New York: Vintage Books, 1964).
25. Singer and Brooking, LikeWar, 59.
26. Martin C. Libicki, “The Convergence of Information Warfare,” Strategic Studies Quarterly 11, no. 1 (Spring 2017): 52; Singer and Brooking, LikeWar, 43.
27. Prier, “Commanding the Trend,” 61.
28. Singer and Brooking, LikeWar, 101.
29. Larrey, Connected World, 174.
30. Patrikarakos, War in 140 Characters, 24–37.
31. Farah Baker, “@Farah_Gazan,” Twitter, https://twitter.com/Farah_Gazan.
32. Singer and Brooking, LikeWar, 96, 101; Prier, “Commanding the Trend,” 69.
33. Patrikarakos, War in 140 Characters, 134.
34. DeNardis, Global War, 10.
35. Nye, “Nuclear Lessons for Cyber Security?” 27.
36. Meg Smith, “Introduction: If It’s on the Internet, It Must Be True,” Web of Deceit, 14–15.
37. Patrikarakos, War in 140 Characters, 143; Singer and Brooking, LikeWar, 99.
38. Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou, The IRA, Social Media and the Political Polarization in the United States, 2012–2018 (Oxford, UK: Oxford University Press, 2018), 32, 39; Mueller, Report on the Investigation, 24–25.
39. Howard et al., The IRA, 13, 18; Samantha Bradshaw, Lisa-Maria Neudert, and Philip N. Howard, Government Responses to Malicious Use of Social Media (Riga, Latvia: NATO Stratcom CDE, 2018), 21.
40. Patrikarakos, War in 140 Characters, 134–40.
41. Singer and Brooking, LikeWar, 100.
42. Patrikarakos, War in 140 Characters, 148–50.
43. Jari Eloranta, Hossein Kermani, and Babak Rahimi, “Facebook Iran,” Social Media in Iran, 28; Singer and Brooking, LikeWar, 83–88; Corera, Cyberspies, 270.
44. Timothy L. Thomas, “Cyber Mobilization: The Neglected Aspect of Information Operations and Counterinsurgency Doctrine,” in Countering Terrorism and Insurgency in the 21st Century: International Perspectives. Volume 1: Strategic and Tactical Considerations, ed. James J. F. Forest (Westport, CT: Praeger, 2007), 369.
45. Chris Bronk and Gregory S. Anderson, “Encounter Battle: Engaging ISIL in Cyberspace,” Cyber Defense Review 2, no. 1 (Spring 2017): 96.
46. Patrikarakos, War in 140 Characters, 228.
47. The difficulty with denying extremists these rhetorical advantages is that doing so requires explication of the reasoning behind redefining jihadist fighters as irhabists or “unholy warriors.” Such an effort to reclaim the rhetorical terrain would arguably have been much more possible when the Global War on Terrorism was new and in the fresh wake of the September 11 attacks. For a description of the advantages of more accurately defining extremist fighters as unholy warriors, see Kramer et al., Cyberpower and National Security, 463.
48. Patrikarakos, War in 140 Characters, 221.
49. Gabriel Weimann, Special Report: How Modern Terrorism Uses the Internet (Washington, DC: United States Institute of Peace, 2004), 4.
50. Clarke, Cyber War, 136.
51. Thomas, Cyber Silhouettes, 184–90.
52. Singer and Friedman, Cybersecurity and Cyberwar, 100–101.
53. Bronk and Anderson, “Encounter Battle,” 97.
54. Prier, “Commanding the Trend,” 64.
55. Atwan, Islamic State, 16–17, 67.
56. Ullah, Digital World War, 57.
57. Oz Sultan, “Combatting the Rise of ISIS 2.0 and Terrorism 3.0,” Cyber Defense Review 2, no. 3 (Fall 2017): 48.
58. Singer and Brooking, LikeWar, 134–35.
59. Patrikarakos, War in 140 Characters, 250–51.
60. Singer and Brooking, LikeWar, 56.
61. Vindu Goel and Sydney Ember, “As Paris Terror Attacks Unfolded, Social Media Tools Offered Help in Crisis,” New York Times, November 14, 2015, https://www.nytimes.com/2015/11/15/technology/as-paris-terror-attacks-unfolded-social-media-tools-offered-help-in-crisis.html.
62. Singer and Brooking, LikeWar, 58.
63. Singer and Brooking, LikeWar, 13.
64. Pete Gries, Derek Steiger, and Wang Tao, “Social Media, Nationalist Protests, and China’s Japan Policy: The Diaoyu Islands Controversy,” The Internet, Social Media, and a Changing China, 163, 176.
65. Singer and Brooking, LikeWar, 81.

66. Patrikarakos, War in 140 Characters, 107–10.
67. Soldatov and Borogan, The Red Web, 308.
68. James Rogers, “Jacksonville Mass Shooting: Chilling Twitch Livestream Records Horrific Attack,” Fox News, August 27, 2018, https://www.foxnews.com/tech/jacksonville-mass-shooting-chilling-twitch-livestream-records-horrific-attack.
69. Singer and Brooking, LikeWar, 209, 221.
70. For a comparison of risk assessment and early warning with respect to violence and social media, see Joseph G. Beck, The Technology of Nonviolence: Social Media and Violence Prevention (Cambridge, MA: MIT, 2012), 51–52.
71. Zack Whittaker, “Facebook Failed to Block 20% of Uploaded New Zealand Shooter Videos,” Tech Crunch, March 17, 2019, https://techcrunch.com/2019/03/17/facebook-new-zealand; Kieren McCarthy, “Click Here to See the New Zealand Livestream Mass-Murder Vid! This is the Internet Facebook, YouTube, Twitter Built!,” The Register, March 15, 2019, https://www.theregister.co.uk/2019/03/15/new_zealand_murder.
72. Donie O’Sullivan, “Facebook Says It’s Policing Its Platform, But It Didn’t Catch a Livestream of a Massacre. Why?” CNN, March 15, 2019, https://www.cnn.com/2019/03/15/tech/facebook-new-zealand-content-moderation/index.html.
73. Rosenzweig, Cyber Warfare, 200.

CHAPTER 7
1. Healey, qtd. in “Citizen Soldier.”
2. Schneier, Data and Goliath, 13–17, 37.
3. Amrik Virk, qtd. in “CIO Association of Canada.”
4. Amanda Maltby, qtd. in “Canadians Connected 2018: Expert Panel on Cybersecurity,” CIRANEWS, https://www.youtube.com/watch?v=8t0hIK75X9A.
5. Conti, Googling Security, 89, 183–84.
6. Corera, Cyberspies, 342.
7. Schneier, Data and Goliath, 32; Michael Skerker, “Moral Concerns with Cyberespionage: Automated Keyword Searches and Data Mining,” Binary Bullets, 258.
8. Libicki, Cyberspace in Peace and War, 96.
9. Dan Geer, “Personal Data and Government,” Cambridge, MA, October 7, 2013, https://kit.mit.edu/sites/default/files/documents/Geer_MIT_KIT_2013_Conference.pdf.
10. Larrey, Connected World, 149.
11. Singer and Brooking, LikeWar, 221.
12. Joel Brenner, America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare (New York: Penguin, 2011), 193.
13. Rosenzweig, Cyber Warfare, 127.
14. For more on the role intelligence needs played in shaping Cold War-era U.S. space policy, and their impact on thinking about combat in the “aerospace” domain, see Nicholas Michael Sambaluk, The Other Space Race: Eisenhower and the Quest for Aerospace Security (Annapolis, MD: Naval Institute, 2015).
15. Singer and Brooking, LikeWar, 52; Scott E. Solomon, Social Media: The Fastest Growing Vulnerability to the Air Force Mission (Montgomery, AL: Maxwell AFB, Air University Press, 2017), 25; Klimburg, The Darkening Web, 42, 49.
16. Singer and Brooking, LikeWar, 54.

17. Schneier, Data and Goliath, 130.
18. Ullah, Digital World War, 156.
19. Conti, Googling Security, 274, 291–93, 300.
20. Schneier, Data and Goliath, 238.
21. Conti, Googling Security, 22, 300.
22. Michael Geiss, qtd. in “Canadians Connected 2018”; Businessweek report from 1994 quoted in Corera, Cyberspies, 321.
23. Schneier, Data and Goliath, 47–50, 189, 209, 221.
24. Quoted in Larrey, Connected World, 46, 109, 124.
25. DeNardis, Global War, 231–33.
26. Kramer et al., Cyberpower and National Security, 463.
27. Dieter A. Waldvogel, “Social Media and the DOD: Benefits, Risks, and Mitigation,” Air and Space Power Journal 31, no. 2 (Summer 2017): 120.
28. Conti, Googling Security, 291.
29. Hayden, qtd. in “Citizen Soldier.”
30. Demchak, Wars of Disruption and Resilience, 285–86; Barnes and Satariano, “US Campaign to Ban Huawei Overseas Stumbles”; Satariano, “Huawei Security ‘Defects’ Are Found by British Authorities.”
31. Adam Satariano and Joanna Berendt, “Poland Arrests 2, Including Huawei Employee, Accused of Spying for China,” The New York Times, January 11, 2019, https://www.nytimes.com/2019/01/11/world/europe/poland-china-huawei-spy.html?action=click&module=RelatedCoverage&pgtype=Article&region=Footer.
32. Corera, Cyberspies, 185.
33. Libicki, Cyberspace in Peace and War, 101; Klimburg, The Darkening Web, 278.
34. Schneier, Data and Goliath, 81.
35. Campen and Dearth, Cyberwar 3.0, 145.
36. Corera, Cyberspies, 101.
37. Chuanjie Zhang, “Images of the DPRK in China’s New Media,” The Internet, Social Media, and a Changing China, 202.
38. Singer and Brooking, LikeWar, 19.
39. Singer and Brooking, LikeWar, 52.
40. Patrikarakos, War in 140 Characters, 175–82.
41. Patrikarakos, War in 140 Characters, 185–91.
42. Soldatov and Borogan, The Red Web, 307; Ullah, Digital World War, 90–91.
43. For more, see Mueller, Networks and States, 176–83.
44. Singer and Brooking, LikeWar, 157.
45. Sam Cook, “2017–2019 Ransomware Statistics and Facts,” CompariTech, August 25, 2018, https://www.comparitech.com/antivirus/ransomware-statistics/; Conti and Raymond, On Cyber, 194.
46. “Internet Security Threat Report: Executive Summary,” Symantec, February 2019, https://www.symantec.com/content/dam/symantec/docs/reports/istr-24-executive-summary-en.pdf.
47. Campen and Dearth, Cyberwar 3.0, 49; “The Economic Impact of Cybercrime—No Slowing Down,” McAfee, February 2018, https://www.mcafee.com/enterprise/en-us/assets/executive-summaries/es-economic-impact-cybercrime.pdf.

48. Pernik, “The Early Days of Cyberattacks,” 62; Sanger, The Perfect Weapon, 155–56.
49. “Significant Cyber Incidents,” Center for Strategic and International Studies, https://www.csis.org/programs/cybersecurity-and-governance/technology-policy-program/other-projects-cybersecurity.
50. Zetter, Zero Days, 182.
51. Sarah Marsh, “The NHS Trusts Hit by Malware—Full List,” The Guardian, May 12, 2017, https://www.theguardian.com/society/2017/may/12/global-cyber-attack-nhs-trusts-malware; “Global Cyber Attack: A Look at Some Prominent Victims,” The Straits Times, May 13, 2017, https://www.straitstimes.com/world/organisations-hit-by-global-cyberattack; Sanger, The Perfect Weapon, 285–90.
52. Schneier, “How to Survive a Cyberattack.”
53. Quoted in Albert Bigelow Paine, Mark Twain, a Biography (Farmington Hills, MI: Gale, 2002), https://ebooks.adelaide.edu.au/t/twain/mark/paine/index.html, 1332.
54. Yannakogeorgos, “Rethinking the Threat of Terrorism,” 60.
55. Thomas, “Cyber Mobilization,” 359; Ajir and Vailliant, “Russian Information Warfare,” 75.
56. Goychayev et al., Cyber Deterrence, 1.23.
57. Gertz, iWar, 105.
58. Singer and Brooking, LikeWar, 162. The International Telecommunication Union estimates that 48% of the world’s population (7.4 billion in 2017) had been online at least once in the past 12 months. The nearly 250 million U.S. persons active online (about three-quarters of the U.S. population) thus constitute almost 7% of the global online population, while the United States is home to approximately 4.5% of the world’s overall population.
59. Singer and Brooking, LikeWar, 146.
60. Campen and Dearth, Cyberwar 3.0, 11, 92–93.
61. Campen and Dearth, Cyberwar 3.0, 11.
62. Singer and Brooking, LikeWar, 139–42.
63. Singer and Brooking, LikeWar, 142.
64. Prier, “Commanding the Trend,” 56–57.
65. Aristedes Mahairas and Mikhail Dvilyanski, “Disinformation—Дезинформация (Dezinformatsiya),” Cyber Defense Review 3, no. 3 (Fall 2018): 25.
66. Singer and Brooking, LikeWar, 97–98.
67. Patrikarakos, War in 140 Characters, 81.
68. Singer and Brooking, LikeWar, 218.
69. Ralph Martins, “Anonymous’ Cyberwar against ISIS and the Asymmetrical Nature of Cyber Conflict,” Cyber Defense Review 2, no. 3 (Fall 2017): 100.
70. Ullah, Digital World War, 223.
71. Singer and Brooking, LikeWar, 62, 194.
72. Prier, “Commanding the Trend,” 63.
73. Cheng, Cyber Dragon, 142–44.
74. Prier, “Commanding the Trend,” 77.
75. David Ignatius, “Russia’s Radical New Strategy for Information Warfare,” Washington Post, January 18, 2017, https://www.washingtonpost.com/blogs/post-partisan/wp/2017/01/18/russias-radical-new-strategy-for-information-warfare/?utm_term=.6096ba113098.

CHAPTER 8
1. Yannakogeorgos and Lowther, Conflict and Cooperation, 163; Lincoln Pigman, “Behind Russia’s Cyberwarfare Lies a Serious Case of Cyber-Phobia,” Washington Post, January 17, 2019, https://www.washingtonpost.com/news/monkey-cage/wp/2019/01/17/behind-russias-cyberwarfare-lies-a-serious-case-of-cyber-phobia/?utm_term=.d00eeb73524c.
2. Soldatov and Borogan, The Red Web, 71–73, 116, 150–52, 267–68, 277–79; Howard et al., The IRA, 9.
3. Popescu and Secrieru, “Conclusion,” 116.
4. Christopher A. Ford, “The Trouble with Cyber Arms Control,” The New Atlantis no. 29 (Fall 2010): 59–62; Nicu Popescu and Stanislav Secrieru, “Introduction,” Hacks, Leaks and Disruptions, 20.
5. Howard et al., The IRA, 25.
6. Martin C. Libicki, “The Specter of Non-Obvious Warfare,” Strategic Studies Quarterly 6, no. 3 (Fall 2012): 100; Mandel, Optimizing Cyberdeterrence, 217.
7. Marks, “Unmasking the Spy”; Pal, qtd. in “Defending Canadian Democracy from Cyber Attacks.”
8. Quoted in Jarno Limnell, “Russian Cyber Activities in the EU,” Hacks, Leaks and Disruptions, 71; Jean-Baptiste Jeangene Vilmer, “Lessons from the Macron Leaks,” Hacks, Leaks and Disruptions, 78.
9. Kurowska and Reshetnikov, “Russia’s Trolling Complex at Home and Abroad,” 31.
10. Singer and Brooking, LikeWar, 79, 103.
11. Singer and Brooking, LikeWar, 88–91.
12. Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, CT: Yale University, 2006), 1.
13. Singer and Brooking, LikeWar, 34–35; Dave Evans, How the Next Evolution of the Internet Is Changing Everything (New York: Cisco Internet Business Solutions Group, 2011), 3–4; Oscar Jonsson, “The Next Front: The Western Balkans,” Hacks, Leaks and Disruptions, 96.
14. Hansen and Nissenbaum, “Digital Disaster,” 1156; Debora Halbert, “Discourses of Danger and the Computer Hacker,” The Information Society: An International Journal 13, no. 4 (1997): 363, 366–68.
15. Corera, Cyberspies, 78–83.
16. Hurwitz, “Depleted Trust,” 23; Scott Jones, qtd. in “Defending Canadian Democracy from Cyber Attacks”; Madeline Carr, “Public-Private Partnerships in National Cyber-Security Strategies,” International Affairs 92, no. 1 (2016): 56; Larrey, Connected World, 154; Demchak, Wars of Disruption and Resilience, 186.
17. Silvestri, Friended at the Front, 47.
18. Bert Chapman, Global Defense Procurement and the F-35 Joint Strike Fighter (London: Palgrave MacMillan, 2019), 95; Brantly, “The Violence of Hacking,” 86; Roberto Saracco, “Guess What Requires 150 Million Lines of Code. . . .” EIT Digital, January 13, 2016, https://www.eitdigital.eu/news-events/blog/article/guess-what-requires-150-million-lines-of-code.

19. Brantly, The Decision to Attack, 4.
20. Schneier, “How to Survive a Cyberattack”; Tim Singletary, “Dark Web and the Rise of Underground Networks,” Evolution of Cyber Technologies and Operations to 2035, 118–19.
21. Neno Malisevic, “Options for Tackling Current and Future Cyber Threats,” Hybrid and Cyber War as Consequences of the Asymmetry, 187; Alexander Essex, “Internet Voting in Canada: A Cyber Security Perspective,” October 6, 2016, https://www.youtube.com/watch?v=nJdymuMeQTQ; Chris Babcock, “Preparing for the Cyber Battleground of the Future,” Air and Space Power Journal 30 (November–December 2015), 65; Keith B. Nordquist, “The New Matrix of War: Digital Dependence in Contested Environments,” Air and Space Power Journal 32, no. 1 (Spring 2018): 109.
22. Evans, How the Next Evolution of the Internet Is Changing Everything, 6; Nancy Blacker, “Winning the Cyberspace Long Game—Applying Collaboration and Education to Deepen the US Bench,” Cyber Defense Review 2, no. 2 (Summer 2017): 23.
23. Daniel Bilar, “On nth Order Attacks,” The Virtual Battlefield, 265.
24. Yannakogeorgos, Strategies for Resolving the Cyber Attribution Challenge, 27; Yannakogeorgos and Geis, The Human Side of Cyber Conflict, 24.
25. Parker, Cyber Workforce Retention, 43.
26. Deibert, Black Code, 6.
27. Singer and Brooking, LikeWar, 19; Rid, Cyber War, 39; Healey, A Fierce Domain, 85; Jones, qtd. in “Discussion.”
28. Thomas M. Chen, Lee Jarvis, and Stuart Macdonald, eds., “Conclusions,” Counterterrorism, 203; Prier, “Commanding the Trend,” 62.
29. Stytz and Banks, “Toward Attaining Cyber Dominance,” 57.
30. Vilmer, “Lessons from the Macron Leaks,” 75–80; Singer and Brooking, LikeWar, 225; Libicki, Cyberdeterrence and Cyberwar, 152.
31. Healey, qtd. in “Citizen Soldier”; Libicki, Cyberspace in Peace and War, 283; Conti and Raymond, On Cyber, 200; Rex Hughes, “Towards a Global Regime for Cyber Warfare,” The Virtual Battlefield, 115; Neutze, qtd. in “Defending Canadian Democracy from Cyber Attacks.”
32. Pope, A Better State of War, 30; Heiko Borchert and Felix Juhl, “Securing Cyberspace: Building Blocks for a Public-Private Cooperation Agenda,” Hybrid and Cyber War as Consequences of the Asymmetry, 163; Canetti et al., “Immune from Cyberfire?” 173.
33. Conti, Googling Security, 200; Brenner, America the Vulnerable, 83.
34. Dave Chiswell, qtd. in “Canadians Connected 2018”; Iasiello, “Is Cyber Deterrence an Illusory Course of Action?” 49; Kaplan, “The Secret History of Cyber War.”
35. Goychayev et al., Cyber Deterrence, 1.53.
36. Nicholas M. Sambaluk, “The Challenge of Security: West Point’s Defenses and Digital Age Implications, 1775–1777,” Cyber Defense Review 2, no. 1 (Spring 2017): 163–64.
37. Danelle Barrett, “Cybersecurity: Focusing on Readiness and Resiliency for Mission Assurance,” Cyber Defense Review 2, no. 3 (Fall 2017): 19; Bryant, “Resiliency in Future Cyber Combat,” 104; Tighe, “Cyber Warfare in the Maritime Domain.”
38. Conti and Raymond, On Cyber, 123.

39. Boian, “ILD 2012 Panel Discussion”; Klimburg, The Darkening Web, 60.
40. Clarke, Cyber War, 89; Conti and Raymond, On Cyber, 123; Libicki, Cyberspace in Peace and War, 72–73.
41. Boian, “ILD 2012 Panel Discussion”; Schneier, “How to Survive a Cyberattack”; Blunden and Cheung, Behold a Pale Farce, 357.
42. Solomon, Social Media, 7; Weed, US Policy Response, 11.
43. Schneier, “Keynote by Mr. Bruce Schneier—CyCon 2018”; Schneier, “How to Survive a Cyberattack.”
44. Schneier, “How to Survive a Cyberattack.”
45. See James Burke, Connections (Boston, MA: Little, Brown & Company, 1978); Babcock, “Preparing for the Cyber Battleground of the Future,” 66.
46. Mueller, Networks and States, 224–25; quoted in Larrey, Connected World, 296.
47. Libicki, “The Convergence of Information Warfare,” 53–54; Kolias et al., “DDoS in the IoT,” 83.
48. Kolias et al., “DDoS in the IoT,” 83; Libicki, “The Convergence of Information Warfare,” 53.
49. Conti and Raymond, On Cyber, 43; Schneier, “How to Survive a Cyberattack”; Mohamed Abomhara and Geir M. Koin, “Cyber Security and the Internet of Things: Vulnerabilities, Threats, Intruders and Attacks,” Journal of Cyber Security 4 (2015): 68.
50. Zetter, Zero Days, 381.
51. Brad Smith, “The Price of Cyber-Warfare,” RSA Conference, April 17, 2018, https://www.youtube.com/watch?v=XGutGYNfEw0; Gordon Blackie, “Cloud and Cyber Security in Canada”; Weigelt, “Cloud and Cyber Security in Canada.”
52. Lord and Sharp, America’s Cyber Future, 30.
53. Conti and Raymond, On Cyber, 260, 264.
54. Jones, qtd. in “Discussion.”
55. DeNardis, Global War, 196.
56. Carr, “Public-Private Partnerships in National Cyber-Security Strategies,” 61–62; Singer and Brooking, LikeWar, 236.
57. Rosenzweig, Cyber Warfare, 214.
58. Schneier, Data and Goliath, 58.
59. Singer and Brooking, LikeWar, 216–17.
60. Schneier, Data and Goliath, 58.
61. Deibert, Black Code, 242.
62. Singer and Brooking, LikeWar, 91.
63. Schneier, “How to Survive a Cyberattack.”
64. Schneier, “How to Survive a Cyberattack.”
65. Singer and Friedman, Cybersecurity and Cyberwar, 153, 155; Patrikarakos, War in 140 Characters, 266–67.
66. David Sulek and Ned Moran, “What Analogies Can Tell Us about the Future of Cybersecurity,” The Virtual Battlefield, 130.
67. Healey, qtd. in “Citizen Soldier.”
68. Thomas, Cyber Silhouettes, 137, 147–50.
69. Campen and Dearth, Cyberwar 3.0, 116.
70. Campen and Dearth, Cyberwar 3.0, 233.
71. Singer and Brooking, LikeWar, 226; Alexander Kott, “Intelligent Autonomous Agents Are Key to Cyber Defense of the Future Army Networks,” Cyber Defense Review 3, no. 3 (Fall 2018): 67.

72. Libicki, Cyberspace in Peace and War, 1.
73. Conti and Raymond, On Cyber, 259.
74. James Der Derian, Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network (Boulder, CO: Westview, 2001), 108; Pope, A Better State of War, 59; Mazanec, The Evolution of Cyber War, 21.

Bibliography

Abomhara, Mohamed, and Geir M. Køien. “Cyber Security and the Internet of Things: Vulnerabilities, Threats, Intruders and Attacks.” Journal of Cyber Security 4 (2015): 65–88.
Agarwal, Nitin, Samer Al-khateeb, Rick Galeano, and Rebecca Goolsby. “Examining the Use of Botnets and Their Evolution for Propaganda Dissemination.” Defence Strategic Communications 2 (March 2017): 87–112.
Ahrari, Ehsan. “US Military Strategic Perspectives on the PRC: New Frontiers of Information-Based War.” Asian Survey 27, no. 12 (December 1997): 1163–80.
Ajir, Media, and Bethany Vailliant. “Russian Information Warfare: Implications for Deterrence Theory.” Strategic Studies Quarterly 12, no. 3 (Fall 2018): 70–89.
Allhoff, Fritz, Adam Henschke, and Bradley Jay Strawser, eds. Binary Bullets: The Ethics of Cyberwarfare. Oxford, UK: Oxford University Press, 2016.
Atwan, Abdel Bari. Islamic State: The Digital Caliphate. Oakland: University of California Press, 2015.
Babcock, Chris. “Preparing for the Cyber Battleground of the Future.” Air and Space Power Journal 30 (November–December 2015): 61–73.
Baker, Farah. “@Farah_Gazan.” Twitter. Accessed August 2, 2019. https://twitter.com/Farah_Gazan.
Barnes, Julian E., and Adam Satariano. “US Campaign to Ban Huawei Overseas Stumbles as Allies Resist.” New York Times, March 17, 2019. Accessed April 2, 2019. https://www.nytimes.com/2019/03/17/us/politics/huawei-ban.html.
Barrett, Danelle. “Cybersecurity: Focusing on Readiness and Resiliency for Mission Assurance.” Cyber Defense Review 2, no. 3 (Fall 2017): 15–20.
Beck, Joseph G. The Technology of Nonviolence: Social Media and Violence Prevention. Cambridge, MA: MIT Press, 2012.

Beidleman, Scott W. “Defining and Deterring Cyber War.” Master’s thesis, Army War College, Carlisle, PA, 2009.
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press, 2006.
Berger, J. M., and Jonathon Morgan. The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter. Washington, DC: Brookings Institution, 2015.
Bialy, Beata. “Social Media—From Social Exchange to Battlefield.” Cyber Defense Review 2, no. 2 (Summer 2017): 69–89.
Blacker, Nancy. “Winning the Cyberspace Long Game—Applying Collaboration and Education to Deepen the US Bench.” Cyber Defense Review 2, no. 2 (Summer 2017): 21–30.
Blowers, Misty, ed. Evolution of Cyber Technologies and Operations to 2035. New York: Springer, 2015.
Blunden, Bill, and Violet Cheung. Behold a Pale Farce: Cyberwar, Threat Inflation & the Malware Industrial Complex. Walterville, OR: Trine Day, 2014.
Boghardt, Thomas. The Zimmermann Telegram: Intelligence, Diplomacy, and America’s Entry into World War I. Annapolis, MD: Naval Institute Press, 2012.
Boian, Donald. “ILD 2012 Panel Discussion.” Accessed September 16, 2019. https://www.youtube.com/watch?v=pO1a7IfKzAk.
Bonner, E. Lincoln, III. “Cyber Power in 21st Century Joint Warfare.” Joint Forces Quarterly 74 (2014): 102–9.
Boyer, Bertrand. “Countering Hybrid Threats in Cyberspace.” Cyber Defense Review, February 2017. Accessed July 31, 2019. https://cyberdefensereview.army.mil/CDR-Content/Articles/Article-View/Article/1134632/countering-hybrid-threats-in-cyberspace.
Bradshaw, Samantha, Lisa-Maria Neudert, and Philip N. Howard. Government Responses to Malicious Use of Social Media. Riga, Latvia: NATO StratCom COE, 2018.
Brantly, Aaron F. The Decision to Attack: Military and Intelligence Cyber Decision-Making. Athens: University of Georgia Press, 2016.
Brantly, Aaron F. “The Violence of Hacking: State Violence and Cyberspace.” Cyber Defense Review 2, no. 1 (Spring 2017): 73–91.
Brenner, Joel. America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare. New York: Penguin, 2011.
Bronk, Chris, and Gregory S. Anderson. “Encounter Battle: Engaging ISIL in Cyberspace.” Cyber Defense Review 2, no. 1 (Spring 2017): 93–107.
Bryant, William D. “Resiliency in Future Cyber Combat.” Strategic Studies Quarterly 9, no. 4 (Winter 2015): 87–107.
Buckle, Chase. “Top 10 Reasons for Using Social Media among Facebookers.” GlobalWebIndex, July 6, 2016. Accessed February 1, 2019. https://blog.globalwebindex.com/chart-of-the-day/top-10-reasons-for-using-social-media-among-facebookers.
Burke, James. Connections. Boston, MA: Little, Brown & Company, 1978.
Burton, Joe. “NATO’s Cyber Defence: Strategic Challenges and Institutional Adaptation.” Defence Studies 15, no. 4 (2015): 297–319.
Campen, Alan D., and Douglas H. Dearth, eds. Cyberwar 3.0: Human Factors in Information Operations and Future Conflict. Fairfax, VA: AFCEA International Press, 2000.

“Canadians Connected 2018: Expert Panel on Cybersecurity.” CIRANEWS. Accessed November 5, 2018. https://www.youtube.com/watch?v=8t0hIK75X9A.
Carr, Madeline. “Public-Private Partnerships in National Cyber-Security Strategies.” International Affairs 92, no. 1 (2016): 43–62.
Carter, Ashton. “Remarks by Secretary Carter in a Media Availability Aboard USS John C. Stennis in the South China Sea.” U.S. Department of Defense, April 15, 2016. Accessed September 17, 2017. https://dod.defense.gov/News/Transcripts/Transcript-View/Article/722593/remarks-by-secretary-carter-in-a-media-availability-aboard-uss-john-c-stennis-i.
Chapman, Bert. Global Defense Procurement and the F-35 Joint Strike Fighter. London: Palgrave Macmillan, 2019.
Chen, Thomas M., Lee Jarvis, and Stuart Macdonald, eds. Counterterrorism: Understanding, Assessment, and Response. New York: Springer, 2014.
Cheng, Dean. Cyber Dragon: Inside China’s Information Warfare and Cyber Operations. Santa Barbara, CA: Praeger Security International, 2017.
Choucri, Nazli, Stuart Madnick, and Jeremy Ferwerda. “Institutions for Cyber Security: International Responses and Global Imperatives.” Information Technology for Development 20, no. 2 (2014): 96–121.
“CIO Association of Canada: Cybersecurity—Whose Job Is It Anyway?” BC Aware, February 10, 2016. Accessed August 15, 2019. https://www.youtube.com/watch?v=u2dropOG3qc.
Citino, Robert. Quest for Decisive Victory: From Stalemate to Blitzkrieg in Europe, 1899–1940. Lawrence: University Press of Kansas, 2002.
“Citizen Soldier: The Next Battlefield: On Cyber War.” Pritzker Military Museum & Library, March 10, 2015. Accessed August 15, 2019. https://www.youtube.com/watch?v=Kmu2DQkqSpA.
Clarke, Richard A. Cyber War: The Next Threat to National Security and What to Do about It. New York: HarperCollins, 2010.
Clausewitz, Carl von. On War. Edited by Michael Howard and Peter Paret. Princeton, NJ: Princeton University Press, 1984.
“Cloud and Cyber Security in Canada.” nGage Events, November 27, 2017. Accessed August 15, 2019. https://www.youtube.com/watch?v=2mwWG26tZa8.
“Comparing Today’s Computers to 1995’s.” Relatively Interesting, June 26, 2019. Accessed March 14, 2019. https://www.relativelyinteresting.com/comparing-todays-computers-to-1995s.
Conti, Greg. Googling Security: How Much Does Google Know about You? Upper Saddle River, NJ: Addison-Wesley, 2009.
Conti, Gregory, and David Raymond. On Cyber: Towards an Operational Art for Cyber Conflict. New York: Kopidion, 2017.
Cook, Sam. “2017–2019 Ransomware Statistics and Facts.” CompariTech, August 25, 2018. Accessed March 20, 2019. https://www.comparitech.com/antivirus/ransomware-statistics.
Corera, Gordon. Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage. New York: Pegasus Books, 2015.
Creveld, Martin Van. Technology and War: From 2000 BC to the Present. New York: Free Press, 1989.
Crowther, Glenn Alexander. “The Cyber Domain.” Cyber Defense Review 2, no. 3 (Fall 2017): 63–78.

Czosseck, Christian, and Kenneth Geers, eds. The Virtual Battlefield: Perspectives on Cyber Warfare. Amsterdam, Netherlands: IOS Press, 2009.
“Defending Canadian Democracy from Cyber Attacks.” Public Policy Forum, June 6, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=JRWJX0Wbf3Y.
Deibert, Ronald J. Black Code: Surveillance, Privacy, and the Dark Side of the Internet. Toronto, Canada: McClelland & Stewart, 2013.
deLisle, Jacques, Avery Goldstein, and Guobin Yang, eds. The Internet, Social Media, and a Changing China. Philadelphia: University of Pennsylvania Press, 2016.
Demchak, Chris C. Wars of Disruption and Resilience: Cybered Conflict, Power, and National Security. Athens: University of Georgia Press, 2011.
Demchak, Chris, and Peter Dombrowski. “Cyber Westphalia: Asserting State Prerogatives in Cyberspace.” Georgetown Journal of International Affairs: International Engagement on Cyber III (2013–14): 29–38.
Demchak, Chris, and Peter Dombrowski. “Rise of a Cybered Westphalian Age.” Strategic Studies Quarterly 5, no. 1 (Spring 2011): 32–61.
DeNardis, Laura. The Global War for Internet Governance. New Haven, CT: Yale University Press, 2014.
Denning, Dorothy. “Cyberwarriors: Activists and Terrorists Turn to Cyberspace.” Harvard International Review 23, no. 2 (Summer 2001): 70–75.
Der Derian, James. Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. Boulder, CO: Westview, 2001.
Dewdney, A. K. “Computer Recreations: Of Worms, Viruses and Core War.” Scientific American 260, no. 3 (March 1989): 110–13.
“Discussion: Emerging Technologies and Cyber Security—CyCon 2018.” NATOCCDCOE, August 14, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=tm1GbB57m_w.
“The Economic Impact of Cybercrime—No Slowing Down.” McAfee, February 2018. Accessed March 20, 2019. https://www.mcafee.com/enterprise/en-us/assets/executive-summaries/es-economic-impact-cybercrime.pdf.
Evans, Dave. How the Next Evolution of the Internet Is Changing Everything. New York: Cisco Internet Business Solutions Group, 2011.
Faris, David M., and Babak Rahimi, eds. Social Media in Iran: Politics and Society after 2009. Albany: SUNY Press, 2015.
“FB Company Financials.” NASDAQ, March 12, 2019. Accessed March 12, 2019. https://www.nasdaq.com/symbol/fb/financials.
Fleming, Casey, Eric L. Qualkenbush, and Anthony M. Chapa. “The Secret War against the United States.” Cyber Defense Review 2, no. 3 (Fall 2017): 25–31.
Ford, Christopher A. “Here Come the Cyber-Privateers.” Hudson Institute, July 19, 2010. Accessed April 15, 2019. https://www.hudson.org/research/9112-here-come-the-cyber-privateers.
Forest, James J. F., ed. Countering Terrorism and Insurgency in the 21st Century: International Perspectives. Volume 1: Strategic and Tactical Considerations. Westport, CT: Praeger, 2007.
Gaylord, Chris. “SkyGrabber: Is Hacking Military Drones Too Easy?” Christian Science Monitor, December 17, 2009. Accessed March 27, 2019. https://www.csmonitor.com/Technology/Horizons/2009/1217/SkyGrabber-Is-hacking-military-drones-too-easy.

Geer, Dan. “Personal Data and Government.” Cambridge, MA, October 7, 2013. Accessed March 19, 2019. https://kit.mit.edu/sites/default/files/documents/Geer_MIT_KIT_2013_Conference.pdf.
Geers, Kenneth. “The Cyber Threat to National Critical Infrastructure: Beyond Theory.” Information Security Journal: A Global Perspective 18, no. 1 (2009): 1–7.
Gertz, Bill. iWar: War and Peace in the Information Age. New York: Threshold, 2017.
Ghanea-Hercock, Robert. “Why Cyber Security Is Hard.” Georgetown Journal of International Affairs, International Engagement on Cyber 2012 (2012): 81–89.
“Global Cyber Attack: A Look at Some Prominent Victims.” The Straits Times, May 13, 2017. Accessed March 20, 2019. https://www.straitstimes.com/world/organisations-hit-by-global-cyberattack.
Goel, Vindu, and Sydney Ember. “As Paris Terror Attacks Unfolded, Social Media Tools Offered Help in Crisis.” New York Times, November 14, 2015. Accessed March 18, 2019. https://www.nytimes.com/2015/11/15/technology/as-paris-terror-attacks-unfolded-social-media-tools-offered-help-in-crisis.html.
Goychayev, R., G. A. Carr, R. A. Wiese, D. A. Donnelly, S. L. Clements, J. M. Benz, K. E. Rodda, R. A. Bartholomew, A. D. McKinnon, and R. B. Andres. Cyber Deterrence and Stability: Assessing Cyber Weapon Analogues through Existing WMD Deterrence and Arms Control Regimes. Alexandria, VA: National Technical Information Service, 2017.
Gray, Colin S. Making Strategic Sense of Cyber Power: Why the Sky Is Not Falling. Carlisle, PA: Strategic Studies Institute, 2013.
Green, James A., ed. Cyber Warfare: A Multidisciplinary Analysis. New York: Routledge, 2015.
Greenberg, Andy. “The Untold Story of NotPetya, the Most Devastating Cyberattack in History.” Wired, August 22, 2018. Accessed April 18, 2019. https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world.
Gross, Michael. “Enter the Cyber Dragon.” Vanity Fair, September 2011. Accessed April 25, 2019. https://www.vanityfair.com/news/2011/09/chinese-hacking-201109.
Guitton, Clement. Inside the Enemy’s Computer: Identifying Cyber-Attackers. London, UK: Hurst & Company, 2017.
Gustin, Sam. “Read Facebook CEO Mark Zuckerberg’s IPO Letter.” Time, February 1, 2012. Accessed March 12, 2019. http://business.time.com/2012/02/01/read-facebook-ceo-mark-zuckerbergs-ipo-letter.
Hachigian, Nina. “China’s Cyber-Strategy.” Foreign Affairs 80, no. 2 (March–April 2001): 118–33.
Halbert, Debora. “Discourses of Danger and the Computer Hacker.” The Information Society: An International Journal 13, no. 4 (1997): 361–74.
Hansen, Lene, and Helen Nissenbaum. “Digital Disaster, Cyber Security, and the Copenhagen School.” International Studies Quarterly 53, no. 4 (December 2009): 1155–75.
Hayden, Michael V. “General Michael Hayden: Beyond Snowden: An NSA Reality Check, Oxford Union.” Oxford Union, February 18, 2014. Accessed August 15, 2019. https://www.youtube.com/watch?v=ETVH2P2iU-o.

Hayden, Michael V. “Michael Hayden, Full Q&A, Oxford Union.” Oxford Union, August 25, 2017. Accessed September 11, 2019. https://www.youtube.com/watch?v=exw9HpK_ytl.
Hayden, Michael V. Playing to the Edge: American Intelligence in the Age of Terror. New York: Penguin, 2016.
Hayden, Michael, Andrea Rigoni, Gregory J. Rattray, Lord Reid of Cardowan, Peiran Wang, Gavin Reid, Jaan Priisalu, and Catherine Lotrionte. “International Collaborative Responses to Cyber Incidences, Panel 4.” Georgetown Journal of International Affairs, International Engagement on Cyber 2012 (2012): 243–70.
Healey, Jason, ed. A Fierce Domain: Conflict in Cyberspace, 1986 to 2012. Vienna, VA: Cyber Conflict Studies Association, 2012.
Hearing before the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology, June 27, 1991. Washington, DC: Government Printing Office, 1991.
Hern, Alex. “OKCupid: We Experiment on Users. Everyone Does.” The Guardian, July 29, 2014. Accessed March 11, 2019. https://www.theguardian.com/technology/2014/jul/29/okcupid-experiment-human-beings-dating.
Howard, Philip N., Bharath Ganesh, and Dimitra Liotsiou. The IRA, Social Media and Political Polarization in the United States, 2012–2018. Oxford, UK: Oxford University Press, 2018.
Huntington, Samuel P. The Soldier and the State: The Theory and Politics of Civil-Military Relations. New York: Vintage Books, 1964.
Hurwitz, Roger. “Depleted Trust in the Cyber Commons.” Strategic Studies Quarterly 6, no. 3 (Fall 2012): 20–45.
Iasiello, Emilio. “China’s Three Warfares Strategy Mitigates Fallout from Cyber Espionage Activities.” Journal of Strategic Security 9, no. 2 (Summer 2016): 45–69.
Iasiello, Emilio. “Is Cyber Deterrence an Illusory Course of Action?” ASPJ Africa & Francophonie 8, no. 1 (2018): 35–51.
Ignatius, David. “Russia’s Radical New Strategy for Information Warfare.” Washington Post, January 18, 2017. Accessed March 21, 2018. https://www.washingtonpost.com/blogs/post-partisan/wp/2017/01/18/russias-radical-new-strategy-for-information-warfare.
“Internet Security Threat Report: Executive Summary.” Symantec, February 2019. Accessed March 2, 2019. https://www.symantec.com/content/dam/symantec/docs/reports/istr-24-executive-summary-en.pdf.
“Internet Voting in Canada: A Cyber Security Perspective.” October 6, 2016. Accessed September 11, 2019. https://www.youtube.com/watch?v=nJdymuMeQTQ.
Jackson, William. “How Can We Be at Cyberwar If We Don’t Know What It Is?” GCN, March 22, 2010. Accessed April 19, 2019. https://gcn.com/articles/2010/03/22/cybereye-cyberwar-debate.aspx.
Jasper, Scott. Strategic Cyber Deterrence: The Active Cyber Defense Option. Lanham, MD: Rowman & Littlefield, 2017.
Kaplan, Fred. Dark Territory: The Secret History of Cyber War. New York: Simon & Schuster, 2016.

Kaplan, Fred. “The Secret History of Cyber War—SANS Digital Forensics and Incident Response Summit 2017.” SANS Digital Forensics and Incident Response. Accessed September 11, 2019. https://www.youtube.com/watch?v=XKjB1lrLct4.
Kaplan, Katherine A. “Facemash Creator Survives Ad Board.” The Harvard Crimson, November 19, 2003. Accessed June 17, 2018. https://www.thecrimson.com/article/2003/11/19/facemash-creator-survives-ad-board-the.
Kello, Lucas. “The Meaning of the Cyber Revolution: Perils to Theory and Statecraft.” International Security 38, no. 2 (2014): 7–40.
Klimburg, Alexander. The Darkening Web: The War for Cyberspace. New York: Penguin, 2017.
Kolias, Constantinos, Georgios Kambourakis, Angelos Stavrou, and Jeffrey Voas. “DDoS in the IoT: Mirai and Other Botnets.” Computer 50, no. 7 (2017): 80–84.
Kolton, Michael. “Interpreting China’s Pursuit of Cyber Sovereignty and Its Views on Cyber Deterrence.” Cyber Defense Review 2, no. 1 (Spring 2017): 119–53.
Korns, Stephen. “Botnets Outmaneuvered.” Armed Forces Journal, January 2009. Accessed August 15, 2019. http://armedforcesjournal.com/botnets-outmaneuvered.
Kott, Alexander. “Intelligent Autonomous Agents Are Key to Cyber Defense of the Future Army Networks.” Cyber Defense Review 3, no. 3 (Fall 2018): 57–70.
Kramer, Franklin D., Stuart H. Starr, and Larry K. Wentz, eds. Cyberpower and National Security. Washington, DC: National Defense University Press, 2009.
Larrey, Philip. Connected World: From Automated Work to Virtual Wars: The Future, by Those Who Are Shaping It. New York: Penguin, 2017.
Lee, Robert M., Michael J. Assante, and Tim Conway. “German Steel Mill Cyber Attack.” Industrial Control Systems. Bethesda, MD: SANS, 2014.
Libicki, Martin C. “The Convergence of Information Warfare.” Strategic Studies Quarterly 11, no. 1 (Spring 2017): 49–65.
Libicki, Martin C. Cyberdeterrence and Cyberwar. Santa Monica, CA: RAND, 2009.
Libicki, Martin C. Cyberspace in Peace and War. Annapolis, MD: Naval Institute Press, 2016.
Libicki, Martin C. “The Specter of Non-Obvious Warfare.” Strategic Studies Quarterly 6, no. 3 (Fall 2012): 88–101.
Lord, Kristin M., and Travis Sharp, eds. America’s Cyber Future: Security and Prosperity in the Information Age. Washington, DC: Center for a New American Security, 2011.
Mahairas, Aristedes, and Mikhail Dvilyanski. “Disinformation—Дезинформация (Dezinformatsiya).” Cyber Defense Review 3, no. 3 (Fall 2018): 21–27.
Mandel, Robert. Optimizing Cyberdeterrence: A Comprehensive Strategy for Preventing Foreign Cyberattacks. Washington, DC: Georgetown University Press, 2017.
Mandiant. APT1: Exposing One of China’s Cyber Espionage Units. Alexandria, VA: Mandiant, 2013.
Markoff, John. “A Silent Attack, but Not a Subtle One.” New York Times, September 26, 2010. Accessed September 5, 2018. https://www.nytimes.com/2010/09/27/technology/27virus.html.

Marks, Ron. “Unmasking the Spy: Intelligence Gathering.” Dole Institute of Politics, October 30, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=hha5dKGx374.
Marsh, Sarah. “The NHS Trusts Hit by Malware—Full List.” The Guardian, May 12, 2017. Accessed March 20, 2019. https://www.theguardian.com/society/2017/may/12/global-cyber-attack-nhs-trusts-malware.
Martins, Ralph. “Anonymous’ Cyberwar against ISIS and the Asymmetrical Nature of Cyber Conflict.” Cyber Defense Review 2, no. 3 (Fall 2017): 95–105.
Mazanec, Brian M. The Evolution of Cyber War: International Norms for Emerging-Technology Weapons. Lincoln, NE: Potomac, 2015.
Mazanec, Brian, and Bradley A. Thayer. Deterring Cyber Warfare: Bolstering Strategic Stability in Cyberspace. New York: Palgrave Macmillan, 2015.
McCarthy, Kieren. “Click Here to See the New Zealand Livestream Mass-Murder Vid! This Is the Internet Facebook, YouTube, Twitter Built!” The Register, March 15, 2019. Accessed March 18, 2019. https://www.theregister.co.uk/2019/03/15/new_zealand_murder.
McKenzie, Timothy. Is Cyber Deterrence Possible? Montgomery, AL: Maxwell AFB, Air University Press, 2017.
Merchant, Collen. “Canada’s Vision for Security and Prosperity in the Digital Age.” SERENE-RISC, November 16, 2018. Accessed December 2, 2018. https://www.youtube.com/watch?v=d2uwmwfKdDQ.
Mintz, Anne P., ed. Web of Deceit: Misinformation and Manipulation in the Age of Social Media. Medford, NJ: CyberAge Books, 2012.
Mueller, Milton L. Networks and States: The Global Politics of Internet Governance. Cambridge, MA: MIT Press, 2010.
Mueller, Robert S., III. Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I of II. Washington, DC: U.S. Department of Justice, 2019.
Nagorski, Andrew, ed. Global Cyber Deterrence: Views from China, the US, Russia, India, and Norway. New York: EastWest Institute, 2010.
National Institute of Standards and Technology. “Glossary of Key Information Security Terms.” NISTIR 7298 Revision 2, May 2013.
Neal, Patrick. “Active Cyber Defence: Why We Should Hack Back at the Cyberattackers.” SERENE-RISC, November 6, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=uHLcRKZq0jk.
Neiberg, Michael. “America and the Unintended Consequences of War—Michael Neiberg.” National WWI Museum and Memorial, November 4, 2017. Accessed April 4, 2019. https://www.youtube.com/watch?v=NEFnkXlaah4.
Nimmo, Ben. Measuring Traffic Manipulation on Twitter. Oxford, UK: Oxford University Press, 2018.
Nordquist, Keith B. “The New Matrix of War: Digital Dependence in Contested Environments.” Air and Space Power Journal 32, no. 1 (Spring 2018): 109–17.
“Number of Monthly Active Twitter Users Worldwide from 1st Quarter 2010 to 4th Quarter 2018.” Statista. Accessed March 13, 2019. https://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users.
Nye, Joseph S. “Nuclear Lessons for Cyber Security?” Strategic Studies Quarterly 5, no. 4 (Winter 2011): 18–38.

Obama, Barack. “Remarks by the President in State of Union Address.” Obama White House, January 25, 2011. Accessed March 11, 2019. https://obamawhitehouse.archives.gov/the-press-office/2011/01/25/remarks-president-state-union-address.
Olson, Parmy. We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency. New York: Little, Brown & Company, 2012.
O’Sullivan, Donie. “Facebook Says It’s Policing Its Platform, But It Didn’t Catch a Livestream of a Massacre. Why?” CNN, March 15, 2019. Accessed March 18, 2019. https://www.cnn.com/2019/03/15/tech/facebook-new-zealand-content-moderation/index.html.
Paine, Albert Bigelow. Mark Twain, a Biography. Farmington Hills, MI: Gale, 2002. Accessed September 11, 2019. https://ebooks.adelaide.edu.au/t/twain/mark/paine/index.html.
“Panel: Defending a Nation against Cyber Attack—CyCon 2018.” June 20, 2018. Accessed April 1, 2019. https://www.youtube.com/watch?v=SM2hQUPcYOE.
Panetta, Leon. “Remarks by Secretary Panetta on Cybersecurity to the Business Executives for National Security, New York City.” October 11, 2012. Accessed March 10, 2019. http://archive.defense.gov/transcripts/transcript.aspx?transcriptid=5136.
Parker, William E., IV. Cyber Workforce Retention. Montgomery, AL: Maxwell AFB, Air University Press, 2016.
Patrikarakos, David. War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century. New York: Hachette, 2017.
Peters, Benjamin. How Not to Network a Nation: The Uneasy History of the Soviet Internet. Cambridge, MA: MIT Press, 2016.
Pigman, Lincoln. “Behind Russia’s Cyberwarfare Lies a Serious Case of Cyber-Phobia.” Washington Post, January 17, 2019. Accessed April 3, 2019. https://www.washingtonpost.com/news/monkey-cage/wp/2019/01/17/behind-russias-cyberwarfare-lies-a-serious-case-of-cyber-phobia/?utm_term=.d00eeb73524c.
Pohjanpalo, Kati. “Finland Detects Cyber Attack on Online Election-Results Service.” Bloomberg, April 10, 2019. Accessed April 10, 2019. https://www.bloomberg.com/news/articles/2019-04-10/finland-detects-cyber-attack-on-online-election-results-service.
Poindexter, Dennis F. The Chinese Information War: Espionage, Cyberwar, Communications Control and Related Threats to United States Interests. Jefferson, NC: McFarland, 2018.
Polyakova, Alina. “Want to Know What’s Next in Russian Election Interference? Pay Attention to Ukraine’s Elections.” Brookings, March 28, 2019. Accessed April 24, 2019. https://www.brookings.edu/blog/order-from-chaos/2019/03/28/want-to-know-whats-next-in-russian-election-interference-pay-attention-to-ukraines-elections.
Pope, Billy E. A Better State of War: Surmounting the Ethical Cliff in Cyber Warfare. Montgomery, AL: Maxwell AFB, Air University Press, 2014.
Popescu, Nicu, and Stanislav Secrieru, eds. Hacks, Leaks and Disruptions: Russian Cyber Strategies. Paris, France: European Union Institute for Security Studies, 2018.

Prier, Jarred. “Commanding the Trend: Social Media as Information Warfare.” Strategic Studies Quarterly 11, no. 4 (Winter 2017): 50–85.
Rabkin, Jeremy, and John Yoo. Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War. New York: Encounter, 2017.
“Report for Selected Countries and Subjects.” International Monetary Fund. Accessed March 11, 2019. https://www.imf.org/external/pubs/ft/weo/2018/02/weodata/index.aspx.
Rid, Thomas. Cyber War Will Not Take Place. Oxford, UK: Oxford University Press, 2013.
Robertson, Jordan, and Michael Riley. “How Hackers Took Down a Power Grid.” Bloomberg, January 14, 2016. Accessed April 18, 2019. https://www.bloomberg.com/news/articles/2016-01-14/how-hackers-took-down-a-power-grid.
Rogers, James. “Jacksonville Mass Shooting: Chilling Twitch Livestream Records Horrific Attack.” Fox News, August 27, 2018. Accessed March 18, 2019. https://www.foxnews.com/tech/jacksonville-mass-shooting-chilling-twitch-livestream-records-horrific-attack.
Rosenzweig, Paul. Cyber Warfare: How Conflicts in Cyberspace Are Challenging America and Changing the World. Santa Barbara, CA: Praeger, 2013.
The Russo-Georgian War 2008: The Role of Cyber Attacks in the Conflict. Fairfax, VA: Armed Forces Communications and Electronics Association, 2012.
Sambaluk, Nicholas Michael. “The Challenge of Security: West Point’s Defenses and Digital Age Implications, 1775–1777.” Cyber Defense Review 2, no. 1 (Spring 2017): 155–65.
Sambaluk, Nicholas Michael. The Other Space Race: Eisenhower and the Quest for Aerospace Security. Annapolis, MD: Naval Institute Press, 2015.
Sambaluk, Nicholas Michael, ed. Paths of Innovation in Warfare: From the Twelfth Century to the Present. Lanham, MD: Lexington Books, 2018.
Sanger, David E. The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age. New York: Crown, 2018.
Saracco, Roberto. “Guess What Requires 150 Million Lines of Code. . .” EIT Digital, January 13, 2016. Accessed March 22, 2019. https://www.eitdigital.eu/news-events/blog/article/guess-what-requires-150-million-lines-of-code.
Satariano, Adam. “Huawei Security ‘Defects’ Are Found by British Authorities.” New York Times, March 28, 2019. Accessed April 2, 2019. https://www.nytimes.com/2019/03/28/technology/huawei-security-british-report.html.
Satariano, Adam, and Joanna Berendt. “Poland Arrests 2, Including Huawei Employee, Accused of Spying for China.” New York Times, January 11, 2019. Accessed April 2, 2019. https://www.nytimes.com/2019/01/11/world/europe/poland-china-huawei-spy.html.
Saxon, Dan, ed. International Humanitarian Law and the Changing Technology of War. Leiden: Martinus Nijhoff, 2013.
Schiffman, Betsy. “Status Update: Facebook Is Letting Users Drop the ‘Is.’” Wired, November 20, 2007. Accessed March 12, 2019. https://www.wired.com/2007/11/status-update-f.
Schmitt, Michael N., ed. Tallinn Manual on the International Law Applicable to Cyber Warfare. Cambridge, UK: Cambridge University Press, 2013.

Schmitt, Michael N., ed. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge, UK: Cambridge University Press, 2017.
Schneier, Bruce. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York: W. W. Norton, 2015.
Schneier, Bruce. “How to Survive a Cyberattack, Bruce Schneier on Cyberwar in the 21st Century.” Hidden Forces, September 19, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=9Oja9nngwRg.
Schneier, Bruce. “Keynote by Mr. Bruce Schneier—CyCon 2018.” NATOCCDCOE, June 20, 2018. Accessed August 15, 2019. https://www.youtube.com/watch?v=9Oja9nngwRg.
Schrofl, Josef, Bahram M. Rajaee, and Dieter Muhr, eds. Hybrid and Cyber War as Consequences of the Asymmetry: A Comprehensive Approach Answering Hybrid Actors and Activities in Cyberspace. Frankfurt: Peter Lang, 2011.
Schwirtz, Michael, and Sheera Frenkel. “In Ukraine, Russia Tests a New Facebook Tactic in Election Tampering.” New York Times, March 29, 2019. Accessed April 3, 2019. https://www.nytimes.com/2019/03/29/world/europe/ukraine-russia-election-tampering-propaganda.html.
Sharwood, Simon. “Chinese President Xi Seeks Innovation Independence.” The Register, June 1, 2018. Accessed March 11, 2019. https://www.theregister.co.uk/2018/06/01/xi_xinping_science_technology_policy_speech.
“Significant Cyber Incidents.” Center for Strategic and International Studies. Accessed March 19, 2019. https://www.csis.org/programs/cybersecurity-and-governance/technology-policy-program/other-projects-cybersecurity.
Silvestri, Lisa Ellen. Friended at the Front: Social Media in the American War Zone. Lawrence: University Press of Kansas, 2015.
Singer, P. W., and Emerson T. Brooking. LikeWar: The Weaponization of Social Media. Boston, MA: Eamon Dolan, 2018.
Singer, P. W., and Allan Friedman. Cybersecurity and Cyberwar: What Everyone Needs to Know. Oxford, UK: Oxford University Press, 2014.
Smith, Brad. “The Price of Cyber-Warfare.” RSA Conference, April 17, 2018. Accessed September 11, 2019. https://www.youtube.com/watch?v=XGutGYNfEw0.
Soldatov, Andrei, and Irina Borogan. The Red Web: The Kremlin’s War on the Internet. New York: Public Affairs, 2015.
Solomon, Scott E. Social Media: The Fastest Growing Vulnerability to the Air Force Mission. Montgomery, AL: Maxwell AFB, Air University Press, 2017.
Spade, Jayson M. Information as Power: China’s Cyber Power and America’s National Security. Carlisle, PA: Army War College Press, 2012.
Springer, Paul J., ed. Cyber Warfare. Santa Barbara, CA: ABC-CLIO, 2015.
Steed, Brian L. ISIS: An Introduction and Guide to the Islamic State. Santa Barbara, CA: ABC-CLIO, 2016.
Stevens, Rock, and Jeffrey Trent. “Offensive Digital Countermeasures: Exploring the Implications for Governments.” Cyber Defense Review 3, no. 3 (Fall 2018): 93–113.
Stiennon, Richard. There Will Be Cyberwar: How the Move to Network-Centric War Fighting Has Set the Stage for Cyberwar. Birmingham, MI: IT-Harvest Press, 2015.
Stinson, Elizabeth. “Facebook Reactions, the Totally Redesigned Like Button, Is Here.” Wired, February 24, 2016. Accessed March 12, 2019. https://www.wired.com/2016/02/facebook-reactions-totally-redesigned-like-button.

Stoll, Clifford. The Cuckoo’s Egg: Tracking a Spy through the Maze of Computer Espionage. New York: Doubleday, 1989.
Stytz, Martin R., and Sheila B. Banks. “Toward Attaining Cyber Dominance.” Strategic Studies Quarterly 8, no. 1 (Spring 2014): 55–87.
Sultan, Oz. “Combatting the Rise of ISIS 2.0 and Terrorism 3.0.” Cyber Defense Review 2, no. 3 (Fall 2017): 41–49.
“System and Method for Dynamically Providing a News Feed about a User of a Social Network.” Espacenet Patent Search. Accessed March 12, 2019. https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=7669123&KC=&FT=E&locale=en_EP.
Thomas, Timothy L. Cyber Silhouettes: Shadows over Information Operations. Fort Leavenworth, KS: Foreign Military Studies Office, 2005.
Tighe, Jan. “Cyber Warfare in the Maritime Domain.” CSIS, September 14, 2017. Accessed August 15, 2019. https://www.youtube.com/watch?v=qeEBiRunR0Y.
“The Top 500 Sites on the Web.” Alexa. Accessed March 14, 2019. https://www.alexa.com/topsites.
Tresh, Keith, and Maxim Kovalsky. “Toward Automated Information Sharing: California Cybersecurity Integration Center’s Approach to Improve on the Traditional Information Sharing Models.” Cyber Defense Review 3, no. 2 (Summer 2018): 21–31.
Twitter. “An Update on Our Efforts to Combat Violent Extremism.” August 18, 2016. Accessed March 13, 2019. https://blog.twitter.com/official/en_us/a/2016/an-update-on-our-efforts-to-combat-violent-extremism.html.
Ullah, Haroon K. Digital World War: Islamists, Extremists, and the Fight for Cyber Supremacy. New Haven, CT: Yale University Press, 2017.
United States Armed Forces. Joint Operations JP 3-0. January 17, 2017, changed October 22, 2018.
United States Army. Offense and Defense ADP 3-90. August 2018.
Valeriano, Brandon, and Ryan C. Maness. Cyber War versus Cyber Realities: Cyber Conflict in the International System. Oxford, UK: Oxford University Press, 2015.
Waldvogel, Dieter A. “Social Media and the DOD: Benefits, Risks, and Mitigation.” Air and Space Power Journal 31, no. 2 (Summer 2017): 119–25.
Wallace, David A., and Mark Visger. “The Use of Weaponized ‘Honeypots’ under the Customary International Law of State Responsibility.” Cyber Defense Review 3, no. 2 (Summer 2018): 35–42.
Walls, Chris. “ILD 2012 Panel Discussion: Cyber Attacks: The Operators Perspective.” U.S. Naval War College, October 10, 2012. Accessed September 11, 2019. https://www.youtube.com/watch?v=pO1a7IfKzAk.
Weed, Scott A. US Policy Response to Cyber Attack on SCADA Systems Supporting Critical National Infrastructure. Montgomery, AL: Maxwell AFB, Air University Press, 2017.
Weimann, Gabriel. Special Report: How Modern Terrorism Uses the Internet. Washington, DC: United States Institute of Peace, 2004.
Whittaker, Zack. “Facebook Failed to Block 20% of Uploaded New Zealand Shooter Videos.” TechCrunch, March 17, 2019. Accessed March 18, 2019. https://techcrunch.com/2019/03/17/facebook-new-zealand.

Wolfe, Lahle. “Twitter User Statistics 2008 through 2017.” The Balance Careers, November 4, 2018. Accessed March 13, 2019. https://www.thebalancecareers.com/twitter-statistics-2008-2009-2010-2011-3515899.
Woody, Christopher, and Jenny Cheng. “Here’s the Hardware the World’s Top 25 Militaries Have in Their Arsenals.” Business Insider, March 1, 2018. Accessed March 11, 2019. https://www.businessinsider.com/here-are-the-worlds-most-powerful-militaries-2018-2.
Wrenn, Christopher Fitzgerald. “Strategic Cyber Deterrence.” PhD dissertation, Tufts University, Medford, MA, 2012.
Yannakogeorgos, Panayotis A. Strategies for Resolving the Cyber Attribution Challenge. Montgomery, AL: Maxwell AFB, Air University Press, 2013.
Yannakogeorgos, Panayotis A., and John P. Geis II. The Human Side of Cyber Conflict: Organizing, Training, and Equipping the Air Force Cyber Workforce. Montgomery, AL: Maxwell AFB, Air Force Research Institute, 2016.
Yannakogeorgos, Panayotis A., and Adam B. Lowther, eds. Conflict and Cooperation in Cyberspace: The Challenge to National Security. Boca Raton, FL: Taylor & Francis, 2014.
Yoo, Christopher S. The Dynamic Internet: How Technology, Users, and Businesses Are Transforming the Network. Washington, DC: American Enterprise Institute, 2012.
Yoran, Elad, and Edward Amoroso. “The Role of Commercial End-to-End Secure Mobile Voice in Cyberspace.” Cyber Defense Review 3, no. 1 (Spring 2018): 56–65.
Zetter, Kim. “Security Manual Reveals the OPSEC Advice ISIS Gives Recruits.” Wired, November 19, 2015. Accessed October 9, 2017. https://www.wired.com/2015/11/isis-opsec-encryption-manuals-reveal-terrorist-group-security-protocols.
Zetter, Kim. Zero Days: Stuxnet and the Launch of the World’s First Digital Weapon. New York: Crown, 2016.

Index

Advanced Persistent Threat (APT), 16, 26, 28–32, 45, 56, 59, 62, 70, 89, 157; APT1 (PLA Unit 61398), 16, 31, 73; APT15, 31; APT16, 31; APT18, 31; APT28, 31; APT29, 31
Afghanistan, 17, 97, 159
Anonymous, 53, 77–80, 111; ISIS Trolling Day, 78; Low Orbit Ion Cannon (LOIC), 79, 81
Apple, 18–19, 159, 166
Arab Spring, 66, 100, 112, 116
ARPANET, 35, 90, 169
Artificial intelligence (AI), 43
al-Assad, Bashar, 24
Attack surface, 7, 42, 57, 86, 160, 163, 165
Attribution, viii, 12, 14, 69, 71–77
Baker, Farah, 120–121
Bot, 34, 39, 72, 153
Bring-your-own-device (BYOD) policy, 163
Bug bounty, 40
Burmese Saffron Revolution, 92
Cambridge Analytica, 96, 101, 144
Clarke, Richard, 12–13, 23, 30, 33, 57, 65
Clausewitz, Carl von, 1, 3
Cloud, 88, 165, 168

CNN, 24, 119–120, 133
Commercial-off-the-shelf (COTS) technology, 161
Critical infrastructure, 11–12, 19, 43–45
Cryptomining, 146
Cyberattack, 3, 5, 9–10, 13–14, 24–26, 32, 34, 36, 38–41, 44–45, 47–50, 52, 54–61, 63–67, 70–71, 76–77, 82–83, 89–90; Denial of Service (DoS), 5, 24, 26, 72; Distributed Denial of Service (DDoS), 26, 27, 28, 35, 39, 41, 48, 50, 52, 60, 62, 70–74, 79–81, 155, 165; Low Orbit Ion Cannon (LOIC), 79, 81; Office of Personnel Management (OPM) hack, 61, 182n; Shamoon, against Saudi Aramco, 24, 49; Sony Pictures hack, 7; Ukraine election committee hack, 52
Cybercrime, 21, 70–71, 81, 83, 89; Dark Web, 27
Cyber defense, vii, 3–5, 8–10, 17, 21, 24, 28, 33, 39–42, 44, 47, 49–50, 57–67, 69–76, 80, 82, 83, 86, 90, 122, 145, 152, 160–163, 165, 170; air-gap, 35–36; anti-virus software, 39; hack-back, 72; honeypot, 8, 9, 60, 73; resilience, cyber, 162, 165
Cyber militia, 48, 82

Cyber Pearl Harbor, 13–14
Cyberweapon, 2, 5, 7–9, 19–22, 33, 37, 39, 41–42, 47, 50–51, 56, 59
Dark Web, 27
Data, 6, 10, 12, 49–50, 91; as a battlespace, viii, 24, 27, 49, 70, 73, 135–145, 150–151, 161, 165; big data, 139; censorship of, 93; collection and use of, 11, 15, 52–53, 61, 88, 93, 96–97, 101, 110, 117, 133, 136, 138–140, 144, 161, 168–169, 171; encryption of, 18–19; sharing, 41, 137; surveillance and theft of, 28–29, 32, 48, 61–62, 70, 75, 141, 160
Deep packet inspection, 17
Doxing, 53, 70
Eligible Receiver 97 test, 44–45
Encryption, 18–19
Espionage, 7, 10, 15–17, 24–25, 28, 30–31, 72
Estonia, 39, 62; DDoS attack against, 26–27, 33, 49, 73–74, 81, 88, 90; Estonian Defense League, 88
Exploit, 5, 35, 39, 41, 59, 162–163; zero-day exploit, 20, 37, 39–40, 45
Facebook, 24, 49, 88, 93, 95–102, 109–110, 114, 117, 123, 131–133, 138, 142, 153, 159–160
Georgia (nation of), 39, 64
Germany, 32–33; East Germany, 35, 60, 142
Global War on Terrorism (GWOT), 17, 125–126
Google, 18, 29, 33, 40, 92–93, 99, 110–111, 117, 124, 132–133, 136, 138–139, 164; Google Earth, 142; Google Maps, 136, 142; Google Translate, 123
Hacktivism, 77–78, 80, 82, 154
Hamas, 80, 120, 126
Hayden, Michael V., 3, 10, 15, 19, 25, 54, 58, 65, 80, 140
Hezbollah, 80, 126, 130
Huffington Post, 24, 119

India, 93, 99
Internet: development of, 2, 17, 35–36, 73, 90–91, 113, 136–137, 166, 169; governance of, 89–93, 100, 109, 121, 150, 157; and infrastructure, 14, 30, 32, 43, 88, 164; use of, 11, 13, 18–19, 84, 88, 91, 95, 97, 113–118, 122, 124–126, 135, 142, 147, 149, 152, 155, 158, 160–161, 164, 171; World Wide Web, 2, 158
Internet of Things (IoT), 164–165, 168–169
Internet Service Provider (ISP), 73–74
Iran, 20, 31, 38, 44, 58, 63, 153
Iraq, 18–19, 97, 107, 125, 127; Mosul, 103–104, 106
Israel, 86
Italy, 86
Jihadism, 17–18, 55, 97, 125; Abu Musab al-Zarqawi, 75; Anwar al-Awlaki, 75, 97; bin Laden, Osama, 17, 128; Cats of ISIS, 105; Dabiq, 105; Dawn of Glad Tidings App, 103, 107; Inspire, 18, 97; Islamic State of Iraq and Syria (ISIS), 18, 75, 78, 103–108, 111, 120, 125–130, 144, 150, 152; al-Qaeda, 17–18, 97, 103, 125, 128; rhetorical implications of, 192n; al-Shabaab, 106
Keystroke logger, 28
Kinetic effect cyberattack, 38, 53–54, 65, 71–72, 126; German steel mill targeted, 25, 54, 71; Idaho National Laboratory test “Aurora,” 20, 44–45; Maroochy Shire sewage treatment, 19, 20, 53, 164; Stuxnet, against Iranian nuclear enrichment, 22, 25, 36–37, 41, 44, 63, 73, 146
Kinetic warfare, 6–7, 9–12, 15, 34, 47–48, 64, 105, 106
Las Vegas Rules, 9, 64
Libicki, Martin, 3, 9, 12, 21, 27, 40–41, 43, 56–57, 59–62, 64, 66, 76, 136, 141, 156, 161, 164–165, 171, 182n
Low Orbit Ion Cannon (LOIC), 79, 81

Malware, 5, 12, 24, 36, 38, 53–54; Conficker, 20; Duqu, 25, 39, 41; Flame, 25, 37; Gauss, 25; Mirai, 27; NotPetya, 35, 145; Slammer, 37; WannaCry, 7, 41, 146–147
Mandiant, 31
Microsoft, 21, 35, 41–42, 132, 162–163
Middle East, 17, 31, 80, 95, 105, 113, 117, 125, 127, 138; Arab Spring, 66, 100, 112, 116
MySpace, 100
National Security Agency (NSA), 3, 10, 25, 54, 72, 82, 137, 140
Nature of war, 1–4, 98, 171–172
New York Times, 21, 119–120
NIPRNET, 36
Norms of behavior in cyberspace, 12, 30, 86; of attribution, 73; development of, 159
North Atlantic Treaty Organization (NATO), 26, 27, 31, 60, 82, 90, 170; Cyber Security Centre of Excellence, 60, 90
North Korea, 7, 11, 31, 58, 77, 86; Sony hack (2014), 11, 86
Obama, Barack, 31, 82, 89, 141, 186n, 190n
Operational Security (OPSEC), 18
Panetta, Leon, 13
Patriotic hacking, 80–82, 154
PayPal, 77–78
People’s Liberation Army (PLA), 16; Operation Aurora, 29; Unit 61398 (APT1), 30, 73
People’s Republic of China (PRC), 11, 16, 29–31, 38, 40, 55, 64, 70, 82–83, 85–86, 88–89, 92–93, 99–100, 110, 114, 131, 140, 150, 157; Great Firewall, 116, 124; “harmony,” 89, 110, 157; Honkers Union of China (HUC), 82–83; Huawei, 141; social credit system, 92
Philippines, 93, 95, 130
Putin, Vladimir, 55, 69, 92, 153, 155–157

Ransomware, vii, 7, 41, 70, 72, 75, 145–147, 160, 164–165; WannaCry, 7, 41, 146–147
Resilience, cyber, 162, 165
Rid, Thomas, 9, 21, 25, 43–45, 48, 59, 65, 115, 155, 160
Rogers, Michael S., 82
Russia, 12, 27, 31–32, 38, 49, 55, 58, 70, 76–77, 81, 85–86, 90, 98, 101, 107–108, 117, 132, 143, 150, 154–157, 161; Internet Research Agency (IRA), 108, 121, 123–124, 155; Russia Today (RT), 121, 144; sockpuppets, 151; Sputnik, 121, 144
Russian Business Network, 71
Russo-Georgian War, 9, 11, 23, 27, 48–49, 72–73, 81
Saakashvili, Mikheil, 27
Sabotage, 65
Schmidt, Eric, 29, 99, 139
Science, Technology, Engineering, and Mathematics (STEM), 83–84
Script kiddies, 39, 79
Singer, P. W., 3, 10, 42, 45, 48, 56, 85, 92, 98, 101, 113–114, 117, 126, 137–138, 142, 144, 148–149, 152, 160, 167, 170
SkyGrabber, 19
Snowden, Edward, 25
South Korea, 31, 85
Soviet Union, 9, 26, 34, 63, 88, 90–91, 98, 110, 137, 154, 156
SQL injection, 27
Strategic Studies Quarterly, 84, 104
Structured Threat Information eXpression (STIX), 90
Supervisory Control and Data Acquisition (SCADA), 22, 42–45, 53, 164
Surveillance, 19, 91–92, 165
Symantec, 21
Syrian Electronic Army (SEA), 24, 48
Tallinn Manual, 10, 51
Tallinn Manual 2.0, 15, 52, 81
Telegram App, 111

Terrorist attacks: Mumbai (2008), 111, 129–130; Paris attack (2015), 18, 130; psychological effects of, 37–38; September 11 attacks (2001), 37–38, 71
The Onion Router (TOR), 69, 75, 89
Trend creation, 102–104
Trend distribution, 102–103, 111
Trend hijacking, 102–104
Troll, 123–124, 154, 156
Trump, Donald, 31, 108–109, 137
Trusted Automated eXchange of Indicator Information (TAXII), 41, 90
Turkey, 86, 100, 131, 153
Twitter, 93, 100, 102–106, 109–110, 119, 123, 130, 132, 134, 153, 155, 190n; account hijacking, 126
Ukraine, 27, 35, 44–45, 49, 64, 74, 98, 124, 131, 143–144, 146; Crimea, 109; CyberBerkut, 53; Donbass separatist war, 98, 110; Malaysian Airlines MH-17, 74, 143, 150, 184n
United Kingdom, 32, 86, 98
United States, 29–33, 44, 49, 55, 57, 63, 78, 113, 137, 141; armed forces of, 17–18, 33, 36, 38, 55–56, 82, 115, 125; Department of Homeland Security (DHS), 42; election campaign intervention (2016), 31, 99, 101, 108, 117, 123; government of, 82–83, 85–86, 90–91, 98, 158–159
Unmanned Aerial Vehicle (UAV), 19
USB device, 42
VKontakte, 110, 132, 153
Vulnerability, 13, 20, 28, 35, 38–45, 57–59, 61–62, 67, 85–86, 90, 95, 137, 140, 156, 162–163, 167, 169; zero-day, 39–40
WannaCry, 7, 41, 146–147
Washington Post, 24, 120
Webcam, 28
WeChat, 110
Westphalian dynamics, 84–88
West Point, ix, x
WhatsApp, 110–112
WikiLeaks, 78
World War I, 32, 34, 56, 76, 100, 170
World War II, 3, 19, 23, 26, 76, 138, 148
World Wide Web, 2, 158
Yahoo, 26
YouTube, 78, 100, 111, 124, 132, 153
Zuckerberg, Mark, 96, 101

About the Author

NICHOLAS MICHAEL SAMBALUK, PhD, is an associate professor specializing in the military history of technology and innovation. His first book, The Other Space Race: Eisenhower and the Quest for Aerospace Security, was named “Best Air Power History Book of 2016” by the Air Force Historical Foundation. He is also editor of Paths of Innovation in Warfare: From the Twelfth Century to the Present (2018) and Conflict in the 21st Century: The Impact of Cyber Warfare, Social Media, and Technology (2019), and author of several articles appearing in Cyber Defense Review, Cold War History, and elsewhere. He has taught at the United States Military Academy at West Point, at Purdue University, and at the Air War College and eSchool of Graduate Professional Military Education, where he continues to teach; he also serves on the editorial board of Strategic Studies Quarterly.