Disinformation And Fake News [1st Edition] 9811558752, 9789811558757, 9789811558764

This book is a collection of chapters penned by practitioners from around the world on the impact that disinformation and fake news have had in both the online and social spheres.

English · 160 pages · 2021

Table of contents :
Contents......Page 5
Notes on Contributors......Page 7
List of Figures......Page 11
Introduction......Page 13
Disinformation & Fake News: Meanings, Present, Future......Page 14
Conceptual Underpinnings......Page 16
Chapters......Page 20
Overview of Disinformation......Page 21
Disinformation in Context......Page 22
Countering Disinformation......Page 24
Notes Towards a Future Body of Work......Page 26
Overview of Disinformation......Page 32
Disinformation as a Threat to National Security......Page 33
The New Information Environment......Page 34
State Actor Influence Campaigns......Page 36
Tools and Techniques......Page 38
Response......Page 39
Conclusion......Page 41
References......Page 42
Tools of Disinformation: How Fake News Gets to Deceive......Page 44
Introduction......Page 45
Defining Fake News......Page 46
Sender Factors......Page 47
Message Factors......Page 49
Channel Factors......Page 50
Receiver Factors......Page 51
Conclusion......Page 52
References......Page 53
How News Audiences Think About Misinformation Across the World......Page 56
Methodology......Page 57
Trust in the News......Page 58
Types of Misinformation......Page 59
Public Concern Over Misinformation......Page 61
Exposure to Misinformation......Page 62
“Fake News” as a Term......Page 63
Addressing Concerns Over Misinformation......Page 64
Conclusion......Page 65
References......Page 66
Disinformation in Context......Page 67
Building Digital Resilience Ahead of Elections and Beyond......Page 68
Technological Vulnerabilities......Page 69
Exploiting Social Media Ahead of Elections......Page 71
Moldova—Doctored Video......Page 73
Mexico—Political Bots......Page 74
The United States—Sock Puppet Accounts......Page 75
Training......Page 76
Technology......Page 77
Future Threats......Page 78
References......Page 79
Fighting Information Manipulation: The French Experience......Page 81
Step 1: Disinformation Campaign......Page 82
Step 2: Data Hacking and Leaking......Page 83
Who Did It?......Page 84
Why Did It Fail?......Page 85
Luck......Page 86
Raising Awareness......Page 87
Beat Hackers at Their Own Game......Page 88
Undermining Propaganda Outlets......Page 89
Conclusion: The Road Ahead......Page 90
Framing the Debate: The CAPS-IRSEM Report......Page 91
Acting: Legislation, Media Literacy, and Internal Organization......Page 94
Disinformation and Cultural Practice in Southeast Asia......Page 96
Introduction......Page 97
How Southeast Asians Access the Internet and Why It Matters......Page 98
Media and ‘Trust’ in Southeast Asia......Page 100
The Political Context in Which Disinformation Spreads......Page 103
Conclusion and Solutions......Page 105
Hate Speech in Myanmar: The Perfect Storm......Page 107
Countering Disinformation......Page 119
NATO Amidst Hybrid Warfare Threats: Effective Strategic Communications as a Tool Against Disinformation and Propaganda......Page 120
How NATO Counters Russian Disinformation and Propaganda......Page 124
Sharpening Strategic Communications......Page 127
References......Page 130
Elves vs Trolls......Page 133
Elves’ Rules......Page 134
References......Page 138
Fake News and Disinformation: Singapore Perspectives......Page 139
Introduction......Page 140
The Southeast Asian and Singapore Contexts......Page 141
Singapore......Page 143
Singapore’s Protection from Online Falsehoods and Manipulation Act......Page 146
Comparison of Singapore’s Law and Other Legislation......Page 152
Non-legal Aspects: Media and Digital Literacy, and Building Trust......Page 153
Regional/International Level: Comparing Notes......Page 157
Conclusion......Page 158

Disinformation and Fake News
Edited by Shashi Jayakumar, Benjamin Ang, and Nur Diyanah Anwar

Disinformation and Fake News

“Disinformation and Fake News is a book full of insights from top-notch specialists on information operations. Over the last several years, various regions of the world have seen a spike in the use of information operations for geopolitical or commercial reasons. It is critical to gather and compare existing knowledge across regions and sectors in one place. Everybody interested in this fast-developing field should read this volume.”
—Jakub Janda, Executive Director, European Values Center for Security Policy, Czech Republic

“As ‘disinformation’ has become a common keyword widely used in politics, journalism and civil society, this volume invites readers to step back and consider it as a complex, multidimensional phenomenon. By bringing together studies made by specialists from all over the world and from various disciplines, this volume makes a useful contribution to the academic debate on the issue, grappling too with the conceptual questions surrounding disinformation as a political, cultural and technical issue. Decision makers and citizens seeking an up-to-date assessment of what disinformation means for our societies and how we should (or should not) counter it will also find within some very thought-provoking ideas on how to deal with hybrid threats.”
—Kevin Limonier, Associate Professor in Slavic Studies & Geography at the French Institute of Geopolitics (University of Paris 8), and Vice Director of Geopolitics of the Datasphere (GEODE)

“Tackling a topic on the radar of every world leader, this book provides the richness of global perspectives that understanding disinformation requires. For a greater understanding of disinformation and how it operates, take a read.”
—Fergus Hanson, Director of the International Cyber Policy Centre, Australian Strategic Policy Institute

Shashi Jayakumar · Benjamin Ang · Nur Diyanah Anwar Editors

Disinformation and Fake News

Editors Shashi Jayakumar S. Rajaratnam School of International Studies Nanyang Technological University Singapore, Singapore

Benjamin Ang S. Rajaratnam School of International Studies Nanyang Technological University Singapore, Singapore

Nur Diyanah Anwar S. Rajaratnam School of International Studies Nanyang Technological University Singapore, Singapore

ISBN 978-981-15-5875-7 ISBN 978-981-15-5876-4 (eBook) https://doi.org/10.1007/978-981-15-5876-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Cover illustration: © Melisa Hasan This Palgrave Macmillan imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Contents

Introduction
Disinformation & Fake News: Meanings, Present, Future (Benjamin Ang, Nur Diyanah Anwar, and Shashi Jayakumar) 3

Overview of Disinformation
Disinformation as a Threat to National Security (Janis Sarts) 23
Tools of Disinformation: How Fake News Gets to Deceive (Edson C. Tandoc Jr.) 35
How News Audiences Think About Misinformation Across the World (Richard Fletcher) 47

Disinformation in Context
Building Digital Resilience Ahead of Elections and Beyond (Donara Barojan) 61
Fighting Information Manipulation: The French Experience (Jean-Baptiste Jeangène Vilmer) 75
Disinformation and Cultural Practice in Southeast Asia (Ross Tapsell) 91
Hate Speech in Myanmar: The Perfect Storm (Alan Davis) 103

Countering Disinformation
NATO Amidst Hybrid Warfare Threats: Effective Strategic Communications as a Tool Against Disinformation and Propaganda (Barbora Maronkova) 117
Elves vs Trolls (Giedrius Sakalauskas) 131
Fake News and Disinformation: Singapore Perspectives (Shashi Jayakumar, Benjamin Ang, and Nur Diyanah Anwar) 137

Notes on Contributors

Benjamin Ang is a Senior Fellow with CENS. He leads the Cyber and Homeland Defence Programme, which explores policy issues around the cyber domain, international cyber norms, cyber threats and conflict, strategic communications and disinformation, law enforcement technology and cybercrime, smart city cyber issues, and national security issues in disruptive technology.

Nur Diyanah Anwar is pursuing her Ph.D. at the National Institute of Education (NIE), NTU. Previously, she was a Senior Analyst with CENS. Her research interests revolve around identity issues, multiculturalism, education, social policies, inequality, and the relations between state and society.

Donara Barojan is a strategic communication and information warfare expert. She currently works at Zinc Network, a London-based strategic communications company, where she leads an independent media support project in the Baltic states. Prior to joining Zinc Network, she worked for the NATO Strategic Communications Centre of Excellence and the Atlantic Council’s Digital Forensic Research Lab, where she served as an Assistant Director for Research and Development. Ms. Barojan’s commentary and research have been featured by numerous media outlets, including the New York Times, BBC, Bloomberg, Al Jazeera, Reuters, and The Economist. Donara frequently runs training workshops on identifying, analysing, and countering disinformation and extremist narratives. To date, she has trained nearly 1000 researchers, journalists, and strategic communication experts.

Alan Davis is Asia & Eurasia Director at the Institute for War & Peace Reporting and also currently works as the Chief of Party on two Global Engagement Center-funded projects focusing on disinformation. He has designed and led media, confidence-building, civil society and strategic communications-based programs in more than 40 countries, from Afghanistan to Zimbabwe and Iran to North Korea. He has a particular interest in combating corruption through multi-sectoral public fiscal literacy-building projects and in working with and inside closed states. For four years he worked as a media advisor to DFID focusing on Russia and the Former Soviet Union, and was recently made an Associate Fellow at the King’s College-based International Centre for the Study of Radicalisation in London. Since 1989, when he first visited Cambodia to report on the Vietnamese troop withdrawal for the UK media, his primary area of interest has been South East and East Asia.

Richard Fletcher is a Senior Research Fellow and research team leader at the Reuters Institute for the Study of Journalism at the University of Oxford.

Shashi Jayakumar is Head, Centre of Excellence for National Security (CENS) and Executive Coordinator, Future Issues and Technology, at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.

Jean-Baptiste Jeangène Vilmer is the Director of the Institute for Strategic Research (IRSEM, French Ministry of Defense) and a nonresident Senior Fellow at the Atlantic Council, Washington, DC. He served previously as a policy officer on “Security and Global Affairs” at the Policy Planning Staff (CAPS) of the French Ministry for Foreign Affairs, a Banting postdoctoral fellow at McGill University’s Faculty of Law, and a Lecturer in international relations at the Department of War Studies of King’s College London. Holding degrees in philosophy (B.A., M.A., Ph.D.), law (LLB, LLM), and political science (Ph.D.), he is the author of some one hundred articles, twenty books, and two reports on disinformation: Information Manipulation (CAPS/IRSEM, 2018) and The Macron Leaks Operation: A Post-mortem (IRSEM/Atlantic Council, 2019).

Barbora Maronkova joined the North Atlantic Treaty Organization (NATO) Public Diplomacy Division in Brussels, Belgium in 2006 as a program coordinator, where she designed, planned, and implemented public diplomacy campaigns in a number of NATO member states. From September 2010, she advised several candidate countries on their national public awareness campaigns on NATO membership. From January to December 2016, she worked for the office of NATO’s Spokesperson. Since 1 March 2017, she has held the position of Director of the NATO Information and Documentation Centre in Kyiv, Ukraine. In 2003, she established and headed a Slovak-based NGO, the Centre for European and North Atlantic Affairs, to contribute to public and academic debate on Slovakia’s membership in the EU and NATO. Her work included public relations and media appearances, public speaking, and donor and stakeholder relations, as well as the management of the NGO. A graduate of the University of Economics of Bratislava, Slovak Republic, Barbora also holds a Public Affairs diploma from the Chartered Institute for Public Relations in the UK. She served as a non-resident Research Fellow with the Centre on Public Diplomacy, University of Southern California. Barbora is a frequent contributor and speaker on topics of strategic and government communications and public diplomacy.

Giedrius Sakalauskas is the Director of Res Publica. He graduated from the University of Vilnius/Institute of International Relations and Diplomacy in 1995. He has worked in various industries including advertising, entertainment, and e-commerce. Since 2014, he has been a civic activist coordinating cyber-resistance movements that counter Kremlin trolls, confronting and debunking hostile propaganda to build the resilience of society.

Janis Sarts is the Director of the NATO Strategic Communications Centre of Excellence (Riga), a multinational, cross-sector organisation providing comprehensive analysis, advice and practical support in strategic communications to the Alliance and the allied nations. Prior to that, he was the State Secretary of the Ministry of Defence, Latvia, for seven years, leading defence sector reforms during an economic crisis period, developing a new state defence concept, and encouraging regional defence cooperation within NATO and the EU. He has also led the Latvian Government’s efforts to increase its security and defence in cyberspace. As Chair of the National Cyber Security Board, he was responsible for formulating and overseeing the implementation of Latvia’s cyber security policy, as well as for overseeing the work of the National Information Technology Security Incident Response Institution—CERT.LV. Jānis Sārts holds a degree in History from the University of Latvia, graduated from the NATO Defence College and interned at the Swedish Defense Research Agency. He has received numerous state awards for his contribution to defence reforms, fostering Latvia’s membership in NATO and hosting the NATO Summit in Riga.

Edson C. Tandoc Jr. (Ph.D., University of Missouri) is an Associate Professor at the Wee Kim Wee School of Communication and Information at Nanyang Technological University in Singapore. His research focuses on the sociology of message construction in the context of digital journalism. He has conducted studies on the construction of news and social media messages. His studies about influences on journalists have focused on the impact of journalistic roles, new technologies, and audience feedback on the various stages of the news gatekeeping process. This stream of research has led him to study journalism from the perspective of news consumers as well, investigating how readers make sense of critical incidents in journalism and take part in reconsidering journalistic norms; and how changing news consumption patterns facilitate the spread of fake news.

Ross Tapsell is a Senior Lecturer at the Australian National University’s College of Asia and the Pacific. He is the author of Media Power in Indonesia: Oligarchs, Citizens and the Digital Revolution and co-editor of Digital Indonesia: Connectivity and Divergence.

List of Figures

How News Audiences Think About Misinformation Across the World
Fig. 1 Proportion in each country that trust most news most of the time (Source Newman et al. [2018: 17]) 50
Fig. 2 Proportion that say they are “very” or “extremely” concerned about what is real and what is fake on the internet when it comes to news (Source Newman et al. [2018: 19]) 51
Fig. 3 Proportion who say they are very or extremely concerned about each, and proportion who say they encountered each in the last week (Source Newman et al. [2018: 20]) 53
Fig. 4 Proportion that think that each should do more to separate what is real and what is fake on the internet (Source Author’s own, based on data from Newman et al. [2018]) 56

Building Digital Resilience Ahead of Elections and Beyond
Fig. 1 Venn diagram showing the overlapping vulnerabilities that contribute to the creation of a toxic information environment, where disinformation spreads unimpeded 63

NATO Amidst Hybrid Warfare Threats: Effective Strategic Communications as a Tool Against Disinformation and Propaganda
Fig. 1 Model of a counter-hybrid threats strategy (Source Maronkova 2018b, NATO Amidst Hybrid Warfare—Case Study of Effective Strategic Communications. Presentation at the workshop Disinformation, Online Falsehoods and Fake News, Singapore) 120
Fig. 2 Map of Russian Federation and its borders (Source NATO Website. NATO Setting the Record Straight, www.nato.int/cps/ra/natohq/115204.htm. Accessed on 5 October 2018) 123
Fig. 3 Overview of most frequent Russian narratives about NATO’s deployment in the Baltics (Source Nimmo, Ben. 2017. Russian Narratives on NATO’s Deployment. www.stopfake.org/en/russian-narratives-on-nato-s-deployment/. Accessed on 14 October 2018) 126

Introduction

Disinformation & Fake News: Meanings, Present, Future
Benjamin Ang, Nur Diyanah Anwar, and Shashi Jayakumar

Abstract Besides introducing the various chapters, Ang, Anwar and Jayakumar provide historical precedents, which form important underpinnings through which “fake news” and “disinformation” should be understood. As the authors go on to observe, terminology matters, and a lack of clear grounding in meaning has led terms such as “disinformation,” “misinformation” and “fake news” to be conflated, hindering attempts to understand these issues. A strength of the volume lies in the bringing together of diverse perspectives across East and West. Contributors have worked on various aspects of the disinformation/fake news nexus. Some are experts on social media; others have had long experience in countermeasures and shoring up social resilience. It is only through a pooling of these collective experiences and strengths that holistic understandings can be formed about the problem at hand; it is this cooperation, too, that will lend itself to effective real-world solutions.

Keywords Online falsehoods · Disinformation · Resilience · Social Media

B. Ang · N. D. Anwar · S. Jayakumar
S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore, Singapore
e-mail: [email protected]
N. D. Anwar e-mail: [email protected]
S. Jayakumar e-mail: [email protected]

© The Author(s) 2021
S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_1

In 1274 BC, the pharaoh Ramesses II of Egypt led his army to a crushing victory over the Hittites at Kadesh—a “victory” recorded on papyri and also in glorious stone monuments. The official Egyptian version of the triumph might well have become orthodoxy in modern historiography if not for discoveries of texts, including private letters between Ramesses and the Hittite king Hattusili III, which show the latter asking why Ramesses was treating the battle fought at Kadesh as a victory even though the Hittites had “defeated the King of Egypt”. Also extant is the treaty between the two sides, agreed fifteen years after the battle, which shows that the Hittites gave as good as they got in the battle (and also shows both states acknowledging each other as equals).1

State warfare, diplomacy, civil wars, feuds and vendettas—all of them through time have been marked by degrees of deception, propaganda, pamphleteering and completely fictitious invention.2 The falsity or slant serves to cast a certain complexion on what might be regarded as “truth,” or may even in some cases serve to take its place. This obscurity deepens with the passage of time, but when done skillfully enough, may have sown confusion, influenced minds or changed the opinions of contemporary audiences too. As one modern commentator observes, in the internecine conflict between Mark Antony and Octavian that followed the assassination of Julius Caesar in 44 BC,

1 Alex Loktionov, ‘Ramesses II, Victor of Kadesh: A Kindred Spirit of Trump?’ The Guardian, 5 Dec 16. https://www.theguardian.com/science/blog/2016/dec/05/ramesses-ii-victor-of-kadesh-a-kindred-spirit-of-trump (accessed 20 Jan 2020).
2 For some examples, see Julie Posetti and Alice Matthews, ‘A Short Guide to the History of “Fake News” and Disinformation’, International Center for Journalists, 23 Jul 18. https://www.icfj.org/sites/default/files/2018-07/A%20Short%20Guide%20to%20History%20of%20Fake%20News%20and%20Disinformation_ICFJ%20Final.pdf.

[What followed] was an unprecedented disinformation war in which the combatants deployed poetry and rhetoric to assert the righteousness of the respective campaigns. From the outset, Octavian proved the shrewder propagandist, using short, sharp slogans written upon coins in the style of archaic tweets. His theme was that Antony was a Roman soldier gone awry: a philanderer, a womaniser and a drunk not fit to lead, let alone hold office.3

Not all false stories had a political kernel to them. Some went even deeper, into the heart of age-old animosities against peoples, ethnicities and religions. Demagogues used charisma and mass persuasion to impel people to violence, taking revenge for imaginary offences.4

What is common to all of these is that the lie, whether big or small, necessarily travelled at the speed that contemporary communications or technology would allow it to; its dissemination would likewise be circumscribed by these same realities. As the technology developed, so did the speed and reach of falsehoods. The various markers in the technological revolution are known to all of us: the invention of the Gutenberg printing press, the telegraph, radio, telephone, television, the internet. All these developments have accelerated virality. As the journalist Natalie Nougayrède observes, “The use of propaganda is ancient, but never before has there been the technology to so effectively disseminate it”.5

Conceptual Underpinnings

Disinformation & Fake News is a collection of essays dealing with the impact that modern disinformation and fake news have had in both the online and social spheres. Written accessibly and aimed at the interested lay reader, practitioners and policymakers in the field, this volume offers a panoramic view of how fake news sprouts and spreads, how organized (dis)information campaigns are conducted, who they target, and what has

3 Izabella Kaminska, ‘A Lesson in Fake News from the Info-Wars of Ancient Rome’, Financial Times, 17 Jan 17.
4 For examples, see Jacob Soll, ‘The Long and Brutal History of Fake News’, Politico, 8 Dec 16. https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 (accessed 22 Jan 2020).
5 Natalie Nougayrède, ‘In This Age of Propaganda, We Must Defend Ourselves. Here’s How’, The Guardian, 31 Jan 18. https://www.theguardian.com/commentisfree/2018/jan/31/propaganda-defend-russia-technology.

been attempted by way of countermeasures—both in the digital and social spheres. Some explanation is needed as to why the burgeoning literature on these issues requires an addition. Although there are many books dealing with the topic of fake news and/or disinformation, the majority are focused on one country (such as the United States or Russia) or one specific issue. By examining a range of countries and contexts across East and West (and, in addition, by bringing together research on specific countries and international data mined from questionnaires and online studies), Disinformation & Fake News goes part way towards remedying the arguably U.S.-centric, or at least Western-centric, nature of research on this topic. Jean-Baptiste Vilmer, the foremost expert on disinformation in France, who authored the well-known report on the “Macron Leaks”, has a contribution,6 but developments in Asia and specifically Southeast Asia are covered too (see the contributions by Ross Tapsell and Alan Davis). The contribution by the editors within the volume is also the first holistic treatment of Singapore’s approach to fake news and disinformation (which includes the unique Protection from Online Falsehoods and Manipulation Act [POFMA], passed in Parliament on 8 May 2019), and, we might hazard, the most comprehensive to date. The contributions come from diverse fields: security studies, media and communications, area studies, international studies and journalism studies. These multi-disciplinary perspectives build on previous conceptual and empirical work in a predecessor volume—DRUMS: Distortions, Rumours, Untruths, Misinformation and Smears (2019).7 The latter volume had contributions from various experts who had presented at a workshop organized by the Centre of Excellence for National Security in July 2017. The present volume contains contributions from practitioners and experts who were present at the subsequent edition of the workshop, held in July 2018. The contributions have been

6 J.-B. Jeangène Vilmer, A. Escorcia, M. Guillaume, and J. Herrera, Information Manipulation: A Challenge for Our Democracies, report by the Policy Planning Staff (CAPS) of the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research (IRSEM) of the Ministry for the Armed Forces, Paris, August 2018. https://www.diplomatie.gouv.fr/IMG/pdf/information_manipulation_rvb_cle838736.pdf (accessed 3 Jan 2020).
7 Norman Vasu, Benjamin Ang, and Shashi Jayakumar (eds.), DRUMS: Distortions, Rumours, Untruths, Misinformation and Smears (Singapore: World Scientific, 2019).

substantially reworked, in some cases with considerable amplification, from the remarks delivered then.

There have been several attempts to “unpack” what exactly fake news and disinformation mean.8 Misinformation can—according to one set of influential definitions—be “information that is false, but not created with the intention of causing harm”. This in turn can have overlapping meanings with disinformation, which is “information that is false and deliberately created to harm a person, social group, organization or country”.9 For the purposes of providing a conceptual underpinning for this volume and introduction, the editors propose to rely on the above definitions, expanding and refining them while keeping foundational precepts in mind.

Disinformation can be (a) falsehoods and rumours knowingly distributed, which can be: (i) propagated as part of a political agenda by a domestic group, or as a relativization or differing interpretation of facts based on ideological bias—some of this can achieve viral status whether or not there is malicious intent; or (ii) part of state-sponsored disinformation campaigns, which can undermine national security and resilience. At the same time, there exists (b) a huge class of falsehoods and rumours propagated without a broad political aim (these go by various terms including “fake news” and “misinformation”, as opposed to “disinformation”, which has acquired the flavour of something more subversive and coordinated). This former group—fake news and misinformation—can achieve viral status and, even if not the product of malice or a coordinated campaign, can harm individuals, or society at large. Finally, and oftentimes overlapping, there are (c) falsehoods distributed for financial gain. Some of these can have as a direct consequence (or sometimes, as

8 See Norman Vasu, Benjamin Ang, and Shashi Jayakumar, ‘Introduction: The Seemingly Unrelenting Beat of DRUMS’, vii–xxii.
9 See Claire Wardle and Hossein Derakhshan, Information Disorder: Towards an Interdisciplinary Framework for Research and Policymaking. Council of Europe, 27 Sep 17, p. 20. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c (accessed 12 Nov 19).

a purely unintended by-effect) an effect on national resilience, security, the body politic or all three.10

Past work concerning categorizations can be useful, but we accept there are many other possible ways to slice these issues; indeed, several of the contributors in this volume observe that there are varied ways in which “disinformation” can be conceptualized. Jean-Baptiste Vilmer argues for distinguishing the nuances present in “disinformation” and “fake news,” while Edson Tandoc describes how disinformation can span a spectrum of modes including political satires, news parodies, state propaganda and misleading advertising. Despite this, other contributors have posited that a broader use of these terms—including terms such as “(hostile) propaganda”—should be maintained. This allows easier comprehension for the wider audience, and access to the discussions present on disinformation and fake news.

What all concerned can accept, surely, is that disinformation, rumours and false reporting have always been present in society. What then has changed? Certainly, the manner in which digital information is being shared has meant a profound shift in terms of how peoples and societies as a whole consume information. The sheer volume and velocity of data has made the current environment more conducive for facilitating the spread of inaccurate or false information. Facebook alone shut down 5.4 billion fake accounts in 2019, but millions—perhaps hundreds of millions more—likely remain.11 Not all are directly linked to disinformation, hate speech or fake news, but this still gives some indication as to the size of what tech platforms are grappling with. This is an indication, too, of the receptiveness of the attack surface when it comes to enabling disinformation campaigns or other forms of coordinated inauthentic behaviour. As one researcher and early thinker on this topic, Tim Hwang, observes,

Modern information warfare is characterized by a cartographic shift: social behaviour is now directly observable at many different scales at remarkably

DISINFORMATION & FAKE NEWS: MEANINGS, PRESENT, FUTURE

9

low cost. One can observe social reactions to a stimulus as it occurs and compare these reactions across both time and space. These developments and the concentration of this data in a small set of platforms change the nature of information flow and open new possibilities for the strategic development of information warfare.12

Scale, size and speed are core elements of the problem. To this should be added a fourth precept—what could be referred to as the cognitive aspect. Target audiences may be completely oblivious to information campaigns which embed narratives into their cognitive landscape—undermining trust and divisions within society (refer to contribution by Janis Sarts in this volume). Some studies—and they are still emerging—suggest that false political news has the quality of travelling more deeply, and more virally, than any other type of false information. The truth, however, takes considerably longer to reach people and diffuse itself in the same depth that falsehood has (in fact, in some studies it never manages to diffuse itself in the same depth).13 All these put together give some indication as to why an actor with the resources and the motive might want to try subversion at scale in a target, with the best example being the strategies employed by Russia in its attempts to influence the outcome of the 2016 U.S. Presidential election.14 As the chapters in this volume will show, disinformation and/or fake news can have political and social effects on society and state. Whether they may be hostile information campaigns or hate speech (see Alan Davis’ contribution in this volume), the virality and velocity at which the false information spreads is the key issue‚ as it may have negative implications on the levels of trust, societal cohesion and resilience. Cultural contexts and the political environments present in various milieus can influence the kind of information spread, and have a bearing on how and why they gain quick traction.

12 Tim Hwang, Maneuver and Manipulation: On the Military Strategy of Online Information Warfare, United States Army War College Press, Advancing Strategic Thought Series, May 2019.
13 Soroush Vosoughi, Deb Roy, and Sinan Aral, ‘The Spread of True and False News Online’, Science, Vol. 359, Issue 6380, 9 Mar 2018, pp. 1146–1151.
14 Special Counsel Robert S. Mueller, III, Report on the Investigation into Russian Interference in the 2016 Presidential Election (U.S. Department of Justice, March 2019).

Chapters

The volume is divided into three parts. “Overview of Disinformation” seeks to provide an outline from the temporal, communications and international angles. “Disinformation in Context” provides perspectives and experiences from a wide range of states. There is some focus on Southeast Asia and associated developments surrounding disinformation, due to the rapid growth of social media use in the region. The final part, “Countering Disinformation”, offers insights on countermeasures (such as those by NATO, as well as civil society).

Overview of Disinformation

Janis Sarts explores how the contemporary information environment has created a favourable space for the spread of hostile disinformation campaigns. He offers readers a detailed analysis of “state actors that use disinformation as a part of influence campaigns, to affect targeted society,” explaining how these actors exploit social weaknesses to disrupt a country’s social cohesion. Sarts highlights how state actors use military and diplomatic means “in conjunction with information space activities to achieve desired effects”. Although many of the tools used in hostile campaigns were developed during the height of the Cold War, there is newer emphasis on exploiting opportunities created by the digital environment, such as organized trolling (hostile social media campaigns). Sarts concludes by highlighting four “layers of responsibility” in combatting hostile campaigns, adding that in order to protect societal processes like elections, “we need to discover ways of how the same technologies can be used in countering hostile influence and disinformation campaigns”.

Edson Tandoc assesses why people believe in and propagate false information found online. He identifies four components of communication—namely Sender, Message, Channel and Receiver—and examines the different factors which affect each one. Tandoc’s research reveals that an increasing number of people get their news from social media instead of local news websites, which has consequences for all four components of communication. He explains that individuals judge a news story’s credibility not only on who shared it on social media, but also on the number of likes, comments and shares. Fake news producers therefore often “produce clickbait content to get more people to like, share,

or comment on their fake stories, often playing into readers’ biases or interests”. Tandoc proposes that although the majority of disinformation research has focused on Facebook and Twitter, “fake news now increasingly moves through closed social media applications, such as the messaging app WhatsApp”. He also delineates how fake news and disinformation thrive in times of uncertainty and in situations of information overload—combatting this issue will require a “multi-pronged approach involving technological, economic, legal, and social interventions”. Richard Fletcher examines some of the key findings from the 2018 Reuters Institute Digital News Report. This report analysed online data dealing with news consumption from approximately 74,000 respondents internationally, with particular focus on their “level of concern over and exposure to specific types of misinformation and disinformation associated with the news”. One of its main findings was that just over half of the respondents were either “very” or “extremely” concerned about bias, poor journalism and completely made-up news. However, Fletcher explains that the level and areas of concern vary from country to country. For example, in Eastern Europe, the questionnaire showed that “misleading advertising was more of a concern than in many other parts of the world”. Fletcher analyses each key finding of the report, including the public perception of “fake news” as a term, and who news audiences think should do more to fix problems associated with misinformation. He concludes by emphasising the importance of monitoring public concern over misinformation, in order to properly address the problems it poses.

Disinformation in Context

Donara Barojan explores the impact of disinformation and digital manipulation on election vulnerability. She argues that researchers, policymakers and journalists reporting on elections tend to focus solely on technological vulnerabilities, while overlooking individuals’ cognitive shortcomings and societal vulnerabilities. Barojan examines these vulnerabilities by describing the “toxic information environment” created when they interact, which in turn facilitates the spread of disinformation and fake news. This mirrors Sarts’ chapter, where he describes how the milieu can perpetuate and accelerate the spread of hostile disinformation campaigns. Case studies from elections in Moldova, Malaysia, Mexico and the United States illustrate the use of doctored videos, commercial and political bots (automated social media) and sock puppet (inauthentic social media)

accounts. Barojan then lists a number of approaches to counter online hostile actors, which include participating in open source and forensic training events, following the research published by the DFR Lab to “inform the public of the challenge of disinformation,” maintaining transparent communication with the public and harnessing the “power of technology” to track disinformation campaigns and digital manipulation in multiple languages. Jean-Baptiste Vilmer analyses the impact of the massive data breach that was used in a large-scale disinformation campaign targeting Emmanuel Macron and his campaign team, in the run up to the 2017 French presidential elections. Despite the large data leak, the campaign was deemed a failure, as it did not succeed in significantly influencing French voters. Vilmer explores why this was the case by highlighting a number of mistakes made by the hackers, as well as the appropriate and effective strategies by various agencies involved in playing defence. These include strategies in public signalling, and raising the level of awareness around campaign staff, media and the public. Vilmer cautions that the threat of further leaks persists, and highlights a number of steps needed to tackle the threat of disinformation head on. Ross Tapsell evaluates the “cultural practice of disinformation” in Southeast Asia, arguing that in order to combat this issue, academics must take into consideration “what kinds of disinformation spread widely, and why?” This question is addressed in three parts: how Southeast Asians access the internet, the cultural background in which digitalization has entered the public sphere, and the political context in which disinformation spreads. Tapsell suggests that academics tend to focus on studying Twitter as data is easier to obtain from the platform. However, this fails to take into consideration that “millions of Southeast Asians” are using the internet for Facebook and WhatsApp instead of Twitter. A significant number of Southeast Asians are obtaining their news from social media and are not fact-checking the news that they are consuming. Tapsell also emphasizes that many Southeast Asian citizens distrust official news sources due to their experiences of political manipulation and press corruption, and instead seek alternative sources of information from what they consider more “trustworthy” sources on social media. Tapsell concludes by encouraging academics to engage in “big data” scholarship on disinformation to analyse how digital information is changing Southeast Asian societies and come up with solutions to the complex problems disinformation poses.

Looking deeper into Myanmar, Alan Davis evaluates key factors contributing to the inter-ethnic conflict in the Rakhine State. He argues that during the country’s transition from a totalitarian state to a more open society, the international community sought out easy indicators in order to “declare success quickly and then move on”. The dramatic increase in the number of media outlets in Myanmar was recognized as proof of increased levels of freedom of expression in the country. However, Davis states that with the rise of social media—mainly Facebook—traditional media is becoming less relevant. This builds on Tapsell’s study showing how there is a lack of trust in official news sources, and people are turning to alternative media outlets or platforms for information—whether they may be authentic or false. Davis highlights the increase in hate speech against Muslims in general and the Rohingya in particular (this is important too in balancing this volume, as we should be concerned with material that can be both incendiary and viral, but which is not necessarily the result of action by a state adversary). He argues that the international community should have also been more aware of the “confluence of factors, crowned by Facebook” in Myanmar, contributing to the risk of inter-communal violence and the ethnic cleansing of Rohingya Muslims.

Countering Disinformation

Barbora Maronkova looks at strategies present at the state and international level by evaluating NATO’s strategy and implementation plans for countering hybrid warfare. This strategy comprises four main pillars: defence and deterrence with high readiness forces in place, cyber defence, enhancing resilience through national civil preparedness and the protection of critical infrastructure, as well as strategic communications to fight disinformation and propaganda. Maronkova underlines the importance of proactive communication between NATO members in order to counter the “systemic use of disinformation, propaganda and fake news”. This is particularly with regard to Russia, which has deployed a number of propaganda attacks targeting its immediate neighbours and NATO. Maronkova explains how these attacks have led NATO to develop new initiatives to counter Russian disinformation, including countering propaganda with facts and information to debunk recurrent falsehoods. She concludes by emphasizing the importance of proactive and transparent communication

between NATO members in order to deter and defend against any threat which may arise.

Giedrius Sakalauskas describes his experience participating in a group of Lithuanian activists who refer to themselves as “elves”, in the fight against various falsehoods spread by Russian internet trolls (hostile social media accounts) on social media. Sakalauskas details the four different types of elves which emerged to counter these online attacks, and each of their roles in finding hostile propaganda and spreading positive news about Lithuania, the European Union and NATO. Using examples from various campaigns, he explains the key principle in the elves’ fight against Russian trolls: never counter lies with more lies, but instead expose and debunk them through civic duty and action.

The editors’ contribution concerns Singapore’s multipronged responses to the perceived threats of disinformation and fake news. The authorities have watched information manipulation in the West with some sense of looking over their own shoulders, given past instances of communal and ethno-religious violence (too far in the past to be remembered by most Singaporeans, but keenly felt by policymakers nonetheless). Official thinking and inquiry have extended, unusually, to a full Parliamentary Select Committee looking into these issues in 2018. The ensuing recommendations concerning building trust, cohesion and digital literacy are perhaps not entirely eyebrow-raising, but what has attracted considerable attention—as well as detractors and supporters in equal measure—is legislation. The Protection from Online Falsehoods and Manipulation Act (POFMA), passed in 2019, is unique in that it gives the Minister the power to direct (via a Correction Direction) either the maker of the statement, or the platform upon which it is posted, to publish a correction next to the original statement, while the original statement remains online. The Correction Direction is therefore analogous to a legally enforceable right to reply, which ensures that members of the public can read both the original statement and the official reply, and make their own judgement. Further study will be needed over time to evaluate the effectiveness of POFMA in dealing with online falsehoods and manipulation—for example, in which types of cases the public is more convinced by the official government correction; what types of corrections, fact-checks or debunking statements are most effective; and even an analysis of whether POFMA directions give more visibility to the original statements. It is also at present too early to say if this approach will be used by other states;

but it is fair to say that the Singapore experiment is being watched closely by others grappling with these same issues.

Notes Towards a Future Body of Work

Nuclear war and climate change threaten the physical infrastructure that provides the food, energy, and other necessities required for human life. But to thrive, prosper, and advance, people also need reliable information about their world – factual information, in abundance.
—Bulletin of the Atomic Scientists (the creators of the Doomsday Clock), January 2019.15

We began with the issue of conceptual underpinnings. There are from time to time calls for researchers to go beyond concepts and to arrive at an all-encompassing analytical framework capturing all aspects of fake news and disinformation. We are unlikely to get one because, as the contributions have shown, there is an immense canvas forming the backdrop for the issues at hand. The issues span—besides fake news—hostile information campaigns, countermeasures (both legal and non-legal), cognitive aspects (such as critical thinking and digital media literacy), particular pressure points, such as elections (and securing election integrity), and the issue of foreign interference. As the contributions show, these have implications for society at large, and can influence levels of trust and resilience even as the state and other actors grapple with the intricacies of countering disinformation and fake news. Any attempt at developing an all-encompassing framework would have to take into account a wide range of tactics, including the use of biased interpretations and innuendo (not strictly disinformation), co-opting or manipulation of politicians, business leaders and influential people (more influence than information), or even censorship and suppression of news. The multiplicity of overlapping issues means that a whole slew of observations and suggestions for further work can be made. We select a few below.

15 ‘A new abnormal: It is still 2 minutes to midnight’. 2019 Doomsday Clock Statement, Science and Security Board, Bulletin of the Atomic Scientists, January 2019, p. 5. https://thebulletin.org/doomsday-clock/2019-doomsday-clock-statement/.

The tech platforms are attempting to up their game when it comes to interdicting hostile information campaigns, troll farms and account swarms that act in a coordinated, inauthentic manner.16 Governments are also stepping up in terms of finding ways to inform and educate their citizenry.17 There are now, for example, initiatives which see law enforcement, security agencies and others communicate directly with the people to give them tools and resources to understand how adversaries might try to influence political processes.18

Partnerships are important. Some of the activities above have been unmasked thanks only to collaboration between platforms such as Facebook and researchers.19 Other forms of collaboration also take place outside the orbit of the platforms, with investigative websites or bodies devoted to fact-checking working either independently or synergistically to call out inauthentic behaviour.20 “Ordinary people” also have a role, particularly where a concerned citizenry takes up the fight against fake news and inauthentic narratives—the Baltic Elves being a case in point.

Various studies have suggested that while fact-checking or debunking can in certain circumstances be useful, effectiveness is limited when these efforts come up against deep-seated beliefs or ideology. If the aim is to

16 Lily Hay Newman, ‘Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts’, WIRED, 28 May 19. https://www.wired.com/story/iran-linked-fakeaccounts-facebook-twitter/.
17 See for example Joint Statement from DOJ, DOD, DHS, DNI, FBI, NSA, and CISA on Ensuring Security of 2020 Elections, 5 Nov 19. https://www.dhs.gov/news/2019/11/05/joint-statement-doj-dod-dhs-dni-fbi-nsa-and-cisa-ensuring-security-2020-elections, and ‘Overview of the Process for the U.S. Government to Notify the Public and Others Regarding Foreign Interference in U.S. Elections’, https://s.wsj.net/public/resources/documents/Overview%20of%20the%20Process%20for%20the%20U.S.%20Government%20to%20Notify%20the%20Public%20and%20Others%20Regarding%20Foreign%20Interference%20in%20U.S.%20Elections%20(1).pdf?mod=article_inline.
18 See for example Protected Voices. https://www.fbi.gov/investigate/counterintelligence/foreign-influence/protected-voices.
19 See for example Operation Secondary Infektion, Atlantic Council/DFRLab (various authors), 22 Jun 19. https://www.atlanticcouncil.org/in-depth-research-reports/report/operation-secondary-infektion/; also see Issie Lapowsky, ‘Inside the Research Lab Teaching Facebook About Its Trolls’, WIRED, 15 Aug 18 (on the critical work done by the Atlantic Council’s Digital Forensic Research Lab [DFRLab] in collaboration with Facebook).
20 Eliot Higgins, ‘New Generation of Digital Detectives Fight to Keep Russia Honest’, Stopfake.org, 15 Jul 16. https://www.stopfake.org/en/new-generation-of-digital-detectives-fight-to-keep-russia-honest/.

change the minds of those with certain beliefs or who have been susceptible to information campaigns, governmental rebuttal, if not done well, may come to be seen as part of the conspiracy itself, and may simply push the recipient further into his or her echo chamber. This is the “self-sealing” quality of fake news and conspiracy theories—the type that is resistant to contrary evidence and especially resistant to government rebuttal—as highlighted by some observers.21 Fresh thinking is needed in terms of how to communicate in a way that works—not simply rebuttal or exposure of inauthentic behaviour, but adaptive counter-messaging, and more compelling narratives.

Investments are also needed in upstream interventions: a thread running through several of the contributions is the importance of investing in the resilience of society against disinformation, such as through media literacy campaigns. In particular, as key experts in the field are starting to point out, more work should be done on cultivating “emotional scepticism” at an individual or societal level, rather than unthinkingly believing or sharing sensational news.22 The need for this type of upstream intervention is a critical one, given the refinements in activity on the part of the malicious actor. Entities such as the Internet Research Agency (IRA) have upped operational security and made fake accounts and activity harder to detect.23 The lies themselves are becoming harder to detect.24

A particular pressure point, as some contributors (such as Donara Barojan and Jean-Baptiste Jeangène Vilmer) have noted, will be the democratic process and elections specifically. Here, a great deal will clearly rest on the efforts of platforms. Useful starts have been made. Twitter,

21 Cass R. Sunstein and Adrian Vermeule, ‘Conspiracy Theories: Causes and Cures’, The Journal of Political Philosophy, Vol. 17, Issue 2, 2009, pp. 202–227.
22 Nicole Brown, ‘“Emotional Skepticism” Needed to Stop Spread of Deepfakes on Social Media, Expert Says’, CBS News, 12 Nov 19. https://www.cbsnews.com/news/deepfakes-on-social-media-users-have-responsibility-not-to-spread-fake-content-expert-says/.
23 As the tech platforms themselves have observed in the course of their own takedowns, malicious actors are making progress in efforts to mask identities online. ‘Removing Bad Actors on Facebook’, 31 Jul 18. https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/.
24 As one contributor to this volume (Donara Barojan) has elsewhere remarked, “Disinformation Is Moving from Outright Falsehoods to Highly Divisive/Extreme Authentic Content That Is Inauthentically Amplified” (Donara Barojan, remarks at the NATO StratCom Centre of Excellence DigiCom19 Seminar, Riga, 30 Oct 19. The editors thank Barojan for a personal communication).

for example, has banned political advertising.25 Separately, pledging to keep the 2019 UK general election “healthy and safe,” it has unveiled various measures, including tools introduced to make it easier to report misleading content about voting processes.26 But even as takedowns of bot armies and fake accounts designed to interfere in elections gather pace, the malicious actors have evolved and adapted.27 One study by an NGO working on protecting democracies, published in November 2019, suggests that in the first ten months of the year, “politically relevant disinformation was found to have reached over million estimated views, enough to reach every reported registered voter in the US at least once”. Viral stories that had already been looked into and debunked by reputable fact-checking organizations were still, at the time of the study, drawing in vast numbers of readers.28 The latest Kremlin/IRA efforts targeting the United States have seen fake accounts posing as individuals with interests in almost all conceivable parts of the U.S. political spectrum, with assets that could divide and polarize American society in

25 Tony Romm and Isaac Stanley-Becker, 'Twitter to Ban All Political Ads Amid 2020 Election Uproar', The Washington Post, 30 Oct 19. Facebook's stance at the time of writing (November 2019) is markedly different: the platform has rescinded an earlier ban on false claims in political advertising, with many classes of political advertising now also exempt from third party fact checking. Alex Hern, 'Facebook Exempts Political Ads from Ban on Making False Claims', The Guardian, 4 Oct 19. https://www.theguardian.com/technology/2019/oct/04/facebook-exempts-political-ads-ban-making-false-claims.
26 Katy Minshall, 'Serving the Public Conversation for #GE2019', 11 Nov 19. https://blog.twitter.com/en_gb/topics/events/2019/serving-the-public-conversation-for-ge2019.html.
27 Amongst the many other refinements in technique, one which bears watching is "narrative laundering", which has a long history but has been updated for the information age. Certain state narratives can be diffused from official organs or state propaganda to achieve wider currency, and believability, through "useful idiots", other publications, social media, and other assets. These might include think tanks, media outlets seen to be relatively unbiased, and of course fake online identities. 'New White Paper on GRU Online Operations Puts Spotlight on Pseudo-Think Tanks and Personas', Stanford Internet Observatory, 12 Nov 19. https://cyber.fsi.stanford.edu/io/news/potemkin-pages-personas-blog; for a full exposition, see Renée DiResta and Shelby Grossman, Potemkin Pages & Personas: Assessing GRU Online Operations, 2014–2019, Stanford Internet Observatory Cyber Policy Center, https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/potemkin-pages-personas-sio-wp.pdf (pp. 5–8, 13 on narrative laundering).
28 US 2020: Another Facebook Disinformation Election? Avaaz, 5 Nov 19. https://avaazimages.avaaz.org/US_2020_report_1105_v04.pdf.


Inauthentic accounts on social media platforms have also attacked specific Democratic candidates for the presidency, with the apparent aim of undermining the candidates, sowing discord and dividing the Democrats.30 The platforms involved included Facebook, Twitter and Instagram, but the activity has extended to Medium and online forums. Astute observers of the disinformation scene have noted that the "ecosystem of influence"—using elements of open societies such as politics and cyberspace—is evolving, and that it is only a matter of time before the ecosystem is used for illicit purposes in a more "sustained and strategic way".31 Studying how technologies may be harnessed to perpetuate—and even innovate—future campaigns will be crucial. States may exploit artificial intelligence (AI), deepfakes and audio or video manipulation to interfere in democratic processes or to spread disinformation. It is not easy at the current stage to fully comprehend (or even predict) the tools or strategies underlying such manipulation. The current focus by researchers on state-on-state hostile information campaigns is understandable—but arguably unsustainable. Beyond this, there is a need for a better understanding of the nuances of state activity, including attacks on organizations, individuals or movements through, for example, the smearing of reputations or the muddying of facts. The smears against the White Helmets (and in particular its late co-founder, James Le Mesurier)—alleged to have been linked to the Kremlin—are a case in point, as are the online attacks against the pro-democracy Hong Kong protestors.32

29 For an in-depth examination see Camille François, Ben Nimmo, and C. Shawn Eib, The IRACopyPasta Campaign, Graphika, Oct 19. https://graphika.com/uploads/Graphika%20Report%20-%20CopyPasta.pdf. Many sleeper accounts gain trust by first posting innocuous news, only later switching to polarising messaging close to a pressure point such as an election. 'The St. Petersburg Troll Factory Targets Elections from Germany to the United States', EUvsDisinfo, 2 Apr 19. https://euvsdisinfo.eu/the-st-petersburg-troll-factory-targets-elections-from-germany-to-the-united-states/.
30 Natasha Korecki, '"Sustained and Ongoing" Disinformation Assault Targets Dem Presidential Candidates', Politico, 20 Feb 19. https://www.politico.com/story/2019/02/20/2020-candidates-social-media-attack-1176018.
31 Alina Polyakova, '#DisinfoWeek Brussels 2019 Storyteller: Unpacking the Toolkit of Influence', 7 Mar 2019. https://www.youtube.com/watch?v=LhQY45aUkXI&feature=youtu.be (see 12.55–13.18; accessed 3 Jan 2020).


Finally, given the constantly evolving nature of the threats, are we in for an endless arms race in which the defence is doomed to play catch-up against shapeshifting aggressors? What implications would this have for overall societal resilience—should this in fact be considered a public health issue? These questions, together with the cognitive aspects and the overall deleterious effects of "fake news" on societies touched on in this volume, are those which especially demand further attention. Already, informed observers are beginning to remark on an apathy and "general exhaustion with news itself", at a time when authentic, good-quality information is needed most.33 This volume has attempted to make a contribution to these conversations—conversations that belong to a field that is extremely old, but in some ways only just beginning.

32 See Kim Sengupta, 'James Le Mesurier Death: Co-founder of White Helmets Besieged by Funding Worries and Russian Propaganda Campaign Against Him', The Independent, 15 Nov 19, and Shelly Banjo and Alyza Sebenius, 'Trolls Renew Social Media Attacks on Hong Kong's Protesters', Bloomberg, 4 Nov 19 (accessed 11 Nov 19).
33 Sabrina Tavernise and Aidan Gardiner, '"No One Believes Anything": Voters Worn Out by a Fog of Political News', The New York Times, 18 Nov 19 (accessed 27 Nov 19).

Overview of Disinformation

Disinformation as a Threat to National Security
Janis Sarts

Abstract Janis Sarts explores how the contemporary information environment has created a favourable space for the spread of hostile disinformation campaigns. He offers readers a detailed analysis of ‘state actors that use disinformation as a part of influence campaigns, to affect targeted society’, explaining how these actors exploit social weaknesses to disrupt a country’s social cohesion. Sarts highlights how state actors use military and diplomatic means ‘in conjunction with information space activities to achieve desired effects’. Although many of the tools used in hostile campaigns were developed during the height of the Cold War, there is newer emphasis on exploiting opportunities created by the digital environment, such as organised trolling (hostile social media campaigns). Sarts concludes by highlighting four ‘layers of responsibility’ in combatting hostile campaigns, adding that in order to protect societal processes like elections, ‘we need to discover ways of how the same technologies can be used in countering hostile influence and disinformation campaigns’.



Keywords Campaign · Digital environment · Social weakness · Information space · Trolling

From ancient times, people have sought to exert influence on each other. This has been a continuous element of interaction within groups, between societies and ultimately between nations. The goal in conflict between states has always been to change the behaviour of one party to the benefit of the other, whether by conceding territory, surrendering to the rule of the other or giving up beliefs. Throughout history this has been accomplished by force, but even early military theoreticians like Sun Tzu recognised the importance of influencing behaviour through non-military means.1 Increasingly in the modern world, outright military conflict is both politically and economically expensive, making influence operations beneath the threshold of classic war ever more appealing. One of the central elements of these influence operations is changing the perceptions of the other state's target audiences through manipulative disinformation campaigns, aimed at changing the behaviours of these groups by deliberate strategy.2 Such campaigns can seek to provoke anger, sow distrust in central authorities, alter value systems and instil a sense of weakness.

The New Information Environment

These strategies are not new, but the contemporary information environment presents a much more favourable space for them than the one we had just a decade ago. The new digital information flow is profoundly shifting societal information consumption habits. It is characterised by the constant consumption of information enabled by smartphones, resulting in a continuous individual news cycle, the democratisation of information providers and the algorithm-driven emergence of informational echo chambers. This high tempo of information flow also pushes consumers to decrease the amount of time they spend on any particular news item to just a few seconds.3

1 Tzu, Sun. 2008. The Art of War. London: Arcturus Publishing Ltd.
2 Nissen, Thomas. 2016. Social Media as a Tool of Hybrid Warfare. Riga: NATO Strategic Communications Centre of Excellence.


People are increasingly consuming just the headline of an article and the accompanying visual, if there is one, without paying attention to the source of the story, let alone reading the full text. Emotional processing of news items thus increases significantly, which in turn inflates the number of 'highly emotional' stories. This new information environment not only plays on elements like cognitive bias but also rewards stories that exploit them. As a result, fact-based, balanced and accurate news stories have less reach and impact. An MIT study published in 2018 concluded that on the social media platform Twitter, false stories consistently get much bigger reach than factually correct news.4 This has an impact on social discourse and the ability to conduct a comprehensive dialogue in society, and enables an environment where conspiracy theories and ideas unsupported by facts continuously get more exposure than expert views and factually based ideas. Although this is a phenomenon that has grown up on social media, it is increasingly crossing over into the traditional media field. As their traditional business models come under pressure or are disrupted, many outlets seek solutions by mimicking social media approaches and opting for sensationalist and emotional content at the expense of factual and balanced coverage.

This information environment is fertile ground for conducting hostile disinformation campaigns. Here it is pertinent to distinguish between misinformation and disinformation. Misinformation is false or misleading information created or shared in a single, typically non-malicious, act by an identifiable person; disinformation, by contrast, is the deliberate, consistent and coordinated use of false, deceptive or distorted information across various information channels to achieve a desired effect on a specific audience. The most successful influence campaigns go unnoticed by society, and many kinds of actor employ this tactic. The first group use it for financial gain: their disinformation campaigns are designed to provoke a reaction from an audience and generate clicks that can be monetised in a variety of ways. The second group seek to achieve political advantage and are ready to step into the grey zone of disinformation to advance their ideas, attack political opponents or promote a specific cause.

3 Attention Spans. Consumer Insights. 2015. Microsoft Canada.
4 Vosoughi, Soroush, Roy, Deb, Aral, Sinan. 2018. The Spread of True and False News Online. Science Vol. 359, Issue 6380, pp. 1146–1151.


The third group are state actors that use disinformation as part of influence campaigns to affect a targeted society, typically through consistent, coordinated disinformation campaigns. In some cases, one can observe an interplay between these groups, driven by mutual interests.5

State Actor Influence Campaigns

The most dangerous and resourceful is this third group, as it can bring to bear all instruments of national power and communicate across that spectrum. Some parameters are typical of state actor influence campaigns.

Establishing preconditions for the success of the influence campaign. Every society has specific patterns of information consumption, with networks through which information flows in both the traditional and digital media landscapes. To conduct an influence campaign, a hostile actor has to establish itself as an integral part of that landscape. Once it is generally considered part of 'normality', this helps to conceal these campaigns and achieves one of the critical parameters: being unnoticed. A good example here is the Crimean occupation by Russia. The most important element of success was the long-standing domination of the Kremlin's media channels in this part of Ukraine, which helped shape the perceptions of the Crimean audience over a long period, developing a habit of viewing Russia's channels as the primary source of information. Eventually, in 2014, this was used as the primary vector to manage the perceptions and behaviour of the local population during the process of occupation.6 Similar tactics are used by Russia in the Baltic states and a number of other countries.7

Exploitation of vulnerabilities. Case studies of influence campaigns by hostile state actors indicate the consistent exploitation of societal weaknesses as a venue for disrupting the social cohesion of a country.

5 Internet Research Agency Indictment—Department of Justice, https://www.justice.gov/file/1035477/download, last accessed 3 October 2018.
6 Sazonov, Vladimir, Muur, Kristina, Molder, Holger (eds.). 2016. Russian Information Campaign Against Ukrainian State and Defence Forces. Riga: NATO Strategic Communications Centre of Excellence.
7 Lange-Ionatamishvili, Elina (ed.). 2018. Russia's Footprint in the Nordic-Baltic Information Environment. Riga: NATO Strategic Communications Centre of Excellence.


Deepening the rifts within a country is seen as an effective way of weakening an opponent, its political leadership and its ability to act. This is an area where a state actor might seek out relationships with local players that could benefit from such discourse. The method is effective because, in confronting it, it is difficult to delineate where there is a legitimate reaction by local groups to an issue and where a hostile power is attempting to exploit a vulnerability to weaken society. The issues exploited can range from religion and inter-ethnic relations to social inequality and migration.

The comprehensive approach. This is one of the key indicators of hostile state activity. Whereas the other actor groups are limited in both resources and scope, a state actor functions across all elements of power to achieve the desired effects. In the information space, this ranges from traditional media consistently pursuing specific narratives, to exploiting the social media environment through trolls, robotic networks, impersonations and leaks of hacked documents. It also involves 'agents of influence' who field particular narratives through traditional and social media. The ultimate aim is to develop a consistent, multifaceted story, creating the illusion of many voices actively engaged in discussing and promoting it. State actors, however, are not limited to the information space. They can use military and diplomatic means, or their security services, in conjunction with information space activities to achieve desired effects. A good illustration is the Eston Kohver case in Estonia,8 when in September 2014, just two days after US President Barack Obama made a speech in Tallinn addressing US commitment to the Baltic states' security in the face of a growing Russian threat, an Estonian security police (KAPO) officer was abducted by the Russian security services (FSB) on the Estonian side of the Estonia–Russia border. Another example is Russia's military activity in the Baltic Sea.9 During Russian military exercises, a large part of western Latvia experienced a disruption of mobile network services, directly affecting a significant proportion of the population and inflating fear of the Russian military. A similar case was reported by Finnish and Norwegian authorities in 2018, which attributed the loss of GPS signal in the northern parts of their countries to Russian military activity.

8 Roonemaa, Holger. 2017. How Smuggler Helped Russia to Catch Estonian Officer. Re:Baltica. https://en.rebaltica.lv/2017/09/how-smuggler-helped-russia-to-catch-estonian-officer/, last accessed 4 October 2018.
9 Mehta, Aaron. 22 November 2017. Lessons from Zapad—Jamming, NATO and the Future of Belarus. DefenseNews.

Tools and Techniques

The toolbox of such disinformation campaigns is varied and diverse. Many of the tools are based on techniques developed during the height of the Cold War, when two different worldviews were competing across the globe. These include more traditional media instruments like TV, newspapers and radio, which are used to disseminate a particular worldview and aim to get audiences accustomed to, and sharing, specific narratives and, consequently, world views. But in moments of crisis, these instruments can turn to more sinister methods, such as using false, emotional stories or distorted interpretations of events to affect the behaviours of targeted audiences.10 Another well-known instrument is the use of networks of influence agents. These are used to affect public discourse on specific subjects that are important to a given hostile state actor, ranging from business, politics and international relations to subtler areas like culture and history.11

Although these old tools are very much in use in the contemporary environment, new emphasis is being placed on exploiting the opportunities provided by the digital environment. Organised trolling12 is a very well-known instrument that creates the illusion that significant parts of society share a specific point of view; it actively exploits the human popularity bias, making people believe something is true because many other people appear to think it is true. Trolling can also be used to mute the voices of opinion leaders during crises by swarming their social media accounts with trolls. Trolls can also be used to aggressively attack specific opinions, thus discouraging people from sharing these views.

10 Lange-Ionatamishvili, Elina (ed.). 2017. Redefining Euro-Atlantic Values: Russia's Manipulative Techniques. Riga: NATO Strategic Communications Centre of Excellence.
11 Lutsevych, Orysia. 2016. Agents of the Russian World Proxy Groups in the Contested Neighborhood. London: Chatham House.
12 Aro, Jessikka. 2016. The Cyberspace War: Propaganda and Trolling as Warfare Tools. European View Vol. 15, Issue 1, pp. 121–132.


A recent sub-tool of traditional trolling is so-called 'Robotrolling', or the use of automated or semi-automated social media account networks. The most common use of this tool is to amplify specific stories. It produces a number of effects: first, it exploits the same human popularity bias. Second, it distorts readings of public sentiment, sensitivities and interests when automated social media measurement tools are used, leading to false results and thus wrong conclusions. Finally, it is an effective way of manipulating social media algorithms, tricking them into promoting particular stories; it is linked to a set of methodologies designed to push specific stories up the rankings of search engines, Google and YouTube in particular.13 Big-data-based microtargeting of citizens, especially during politically sensitive moments like elections, is another new tool available to hostile state actors pursuing their malign interests. Increasingly, the data crumbs that people leave online can be collected and used to create profiles that include information on psychology, income, family status, beliefs and political inclinations. This information is typically used to target people with tailored marketing campaigns for consumer products, but it is gradually being used to affect political behaviours as well. The tool makes it much easier to find particular groups in society that are disillusioned, hold adversarial perspectives or have specific beliefs, and to target them with itemised messages for predesigned outcomes. This was demonstrated by Russia during the 2016 US presidential election campaign.14 A key objective when using these tools is to deploy specific narratives and embed them in the cognitive landscape of an audience. This is then exploited to create divisions in society, or to widen existing fractures. One vector of such activity is aimed at undermining trust between different elements of society. Typically, these efforts target trust in the central government, in the military and security systems of the country, in opinion leaders and ultimately in the very construct of the country.15

13 James, David. 2016. How to Successfully Rank a Video on YouTube (Case Study). https://medium.com/@BGD_Marketing/how-to-successfully-rank-a-video-on-youtube-case-study-19a1f298052, last accessed 4 October 2018.
14 Internet Research Agency Indictment—Department of Justice, https://www.justice.gov/file/1035477/download, last accessed 3 October 2018.
15 Lange-Ionatamishvili, Elina (ed.). 2017. Redefining Euro-Atlantic Values: Russia's Manipulative Techniques. Riga: NATO Strategic Communications Centre of Excellence.
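To make the amplification mechanics described above more concrete, the following is a minimal, hypothetical sketch of the kind of heuristic an analyst building an awareness tool might apply: flag accounts that post at rates implausible for a human, and flag texts that are posted verbatim by many different accounts. It is not drawn from this chapter or from any platform's actual detection systems; the function name, data format and thresholds are illustrative assumptions only.

    from collections import defaultdict
    from datetime import timedelta

    # Illustrative thresholds -- a real monitoring tool would tune these empirically.
    MAX_POSTS_PER_HOUR = 60       # posting faster than this, sustained, looks automated
    MIN_ACCOUNTS_SAME_TEXT = 20   # identical text from many accounts suggests coordination
    WINDOW = timedelta(hours=1)

    def flag_suspected_amplification(posts):
        """posts: list of dicts with 'account', 'text' and 'timestamp' (a datetime).
        Returns (accounts with bot-like posting rates, texts pushed by many accounts)."""
        by_account = defaultdict(list)
        by_text = defaultdict(set)
        for p in posts:
            by_account[p["account"]].append(p["timestamp"])
            by_text[p["text"].strip().lower()].add(p["account"])

        fast_accounts = set()
        for account, times in by_account.items():
            times.sort()
            for i, start in enumerate(times):
                # count posts falling inside a rolling one-hour window
                in_window = [t for t in times[i:] if t - start <= WINDOW]
                if len(in_window) > MAX_POSTS_PER_HOUR:
                    fast_accounts.add(account)
                    break

        coordinated_texts = {text for text, accounts in by_text.items()
                             if len(accounts) >= MIN_ACCOUNTS_SAME_TEXT}
        return fast_accounts, coordinated_texts

Real robotic networks, of course, vary their wording, stagger their timing and mix in authentic content precisely to defeat such simple checks, but the two signals this toy sketch looks for, inhuman tempo and the illusion of many voices saying the same thing, are the ones described above.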


Response

We face a difficult situation that is becoming more complex every day. We are confronting not only the old techniques of influence but also fast-evolving new ones based on digital technologies, which are getting more refined, sophisticated and effective all the time. We see a plethora of actors with hostile intentions operating in the information environment. We see the dissolution of country boundaries in the information space, making physical distance irrelevant and the seemingly organic infiltration of audiences easy. What steps should governments take to confront such hostile influence and stay on top of the problem?

As in every crisis or battle, the first essential prerequisite is to know exactly what is happening. In the modern and complex information environment, it is vital to have a clear and accurate picture of the information space. States have to invest in creating effective awareness tools and processes. Such tools should give an understanding of the audience landscape and how society is networked in informational terms. What kind of echo chambers exist? Are there any non-organic influences in these chambers? What organised troll networks are operating in a country's information space? What kind of robotic networks are running in the social media environment, and who is the most likely owner of these networks? Are microtargeting campaigns occurring that are unrelated to the selling of goods or services? These are just some of the critical questions that need to be answered to understand the information environment and to be able to respond effectively to these challenges, especially during crises.

As argued above, influence operations typically exploit vulnerabilities in society. Coordinated attacks can occur across the spectrum of government activity, including cybersecurity, media oversight, health care systems, social security systems and public transportation systems. Coordination of government institutions to confront attacks that exploit such cross-sectoral issues is paramount. This requires cross-government coordinating systems, risk assessment processes to analyse vulnerabilities related to influence operations, and the introduction of these scenarios into government exercises.

This brings us to the issue of capability development. It is essential to have experts who are capable of understanding and responding to influence in the information space, and who have the technical means to confront it, not only in a few departments of government but in almost every sector.


A comprehensive and coordinated response to disinformation-centric attacks has proven to be one of the most effective antidotes to the problem.

The third layer of responsibility lies in society and its resilience. Evidence has shown that societal resilience to influence operations increases once such operations are exposed to the public, as was demonstrated by the 'Lisa case' in Germany.16 There has to be public discussion of the risks of disinformation and of how it can be used by hostile actors against the interests of society. Steps to increase the media literacy of citizens should be introduced, and the educational system should step up the teaching of critical thinking, especially in the context of the digital information environment. To balance government efforts, the NGO sector—such as think tanks and investigative media—should be encouraged to develop programmes that track organised disinformation campaigns and share their findings publicly. In crises, NGOs can serve as a counterbalance to attacks on government credibility. Improving the digital hygiene of public and government services personnel, especially in relation to data, should be a particular area of effort.

The fourth layer of activity is cooperation with digital media platforms—social media in particular. It is increasingly hard to counter digital influence campaigns without collaboration with digital platforms: they hold the key data and understand the processes on their platforms. It is of paramount importance—especially in the early stages of a crisis—that there is cooperation, so that platforms understand the situation and can take the necessary steps to limit the spread of organised disinformation campaigns that fuel crises. Regulatory frameworks should also be considered. Clear rules that do not limit individual freedoms but restrict non-organic, organised disinformation would strengthen crisis response measures at the state level and provide better guidance for digital information platforms.

16 Meister, Stefan. 2016. The "Lisa Case": Germany as a Target of Russian Disinformation. NATO Review. https://www.nato.int/docu/review/2016/also-in-2016/lisa-case-germany-target-russian-disinformation/EN/index.htm, last accessed 4 October 2018.

Conclusion

Disinformation campaigns are an increasingly effective instrument used by hostile state actors to undermine countries. Changes in the information environment brought about by technology have created numerous new challenges.


Beyond our current experience, new developments like deep fakes, better use of neuroscience discoveries coupled with better big data analysis, and advances in deep machine learning will enable even more powerful tools of influence over society. But we must note that technology itself has neither good nor bad qualities: it is up to people to decide how to use these new technological possibilities. To protect the future cohesion of our societies and social processes, like elections, we need to discover ways of how the same technologies can be used in countering hostile influence and disinformation campaigns. Governments have to focus on the challenges of tomorrow that will shape the information environment and information warfare, in order to remain capable of confronting the increasingly diverse and dangerous threats of the near future.

References

Aro, Jessikka. 2016. The Cyberspace War: Propaganda and Trolling as Warfare Tools. European View Vol. 15, Issue 1, pp. 121–132.
Attention Spans. Consumer Insights. 2015. Microsoft Canada.
Internet Research Agency Indictment—Department of Justice. https://www.justice.gov/file/1035477/download, last accessed 3 October 2018.
James, David. 2016. How to Successfully Rank a Video on YouTube (Case Study). https://medium.com/@BGD_Marketing/how-to-successfully-rank-a-video-on-youtube-case-study-19a1f298052, last accessed 4 October 2018.
Lange-Ionatamishvili, Elina (ed.). 2017. Redefining Euro-Atlantic Values: Russia's Manipulative Techniques. Riga: NATO Strategic Communications Centre of Excellence.
Lange-Ionatamishvili, Elina (ed.). 2018. Russia's Footprint in the Nordic-Baltic Information Environment. Riga: NATO Strategic Communications Centre of Excellence.
Lutsevych, Orysia. 2016. Agents of the Russian World Proxy Groups in the Contested Neighborhood. London: Chatham House.
Meister, Stefan. 2016. The "Lisa Case": Germany as a Target of Russian Disinformation. NATO Review. https://www.nato.int/docu/review/2016/also-in-2016/lisa-case-germany-target-russian-disinformation/EN/index.htm, last accessed 4 October 2018.
Nissen, Thomas. 2016. Social Media as a Tool of Hybrid Warfare. Riga: NATO Strategic Communications Centre of Excellence.
Roonemaa, Holger. 2017. How Smuggler Helped Russia to Catch Estonian Officer. Re:Baltica. https://en.rebaltica.lv/2017/09/how-smuggler-helped-russia-to-catch-estonian-officer/, last accessed 4 October 2018.


Sazonov, Vladimir, Muur, Kristina, Molder, Holger (eds.). 2016. Russian Information Campaign Against Ukrainian State and Defence Forces. Riga: NATO Strategic Communications Centre of Excellence.
Tzu, Sun. 2008. The Art of War. London: Arcturus Publishing Ltd.
Vosoughi, Soroush, Roy, Deb, Aral, Sinan. 2018. The Spread of True and False News Online. Science Vol. 359, Issue 6380, pp. 1146–1151.

Tools of Disinformation: How Fake News Gets to Deceive
Edson C. Tandoc Jr.

Abstract In this chapter, Edson C. Tandoc Jr. assesses why people believe in and propagate false information found online. He does so through four components of communication—namely Sender, Message, Channel and Receiver—and examines the different factors that affect each one. Tandoc's research reveals that an increasing number of people get their news from social media instead of local news websites, which has consequences for all four components of communication. He explains that individuals judge a news story's credibility not only on who shared it on social media, but also on the number of likes, comments, and shares. Fake news producers therefore often "produce clickbait content to get more people to like, share, or comment on their fake stories, often playing into readers' biases or interests". Tandoc notes that although the majority of disinformation research has focused on Facebook and Twitter, "fake news now increasingly moves through closed social media applications, such as the messaging app WhatsApp". He also delineates how fake news and disinformation thrive in times of uncertainty and in situations of information overload—combatting this issue will require a "multi-pronged approach involving technological, economic, legal, and social interventions".



Keywords Social media · Disinformation · Components of communication · Fake news · Viral

Introduction

The worsening problem of disinformation, aggravated by the influx of fake news online, has prompted institutions around the world to take action. Governments have initiated legislation. News organisations have come together to fight fake news. Other organisations have launched and funded fact-checking initiatives. Technology companies, blamed for the rise of fake news, have also taken action by removing accounts that spread fake news, among other initiatives. Yet the ultimate question remains: what makes people believe in fake news? The answer, unfortunately, is not simple. The production and proliferation of fake news is motivated by financial and ideological gains. Initiatives to combat fake news are also not likely to stop actors with vested interests from finding new ways to spread fake news and other forms of disinformation. This is why research has tended to focus on understanding the factors that make individuals prone to being misled by fake news, for these factors are often exploited by those behind the production and proliferation of fake news. Studies have argued that the reach of fake news, at least during the US presidential elections in 2016, was limited—with only a fraction of the population exposed to fake news posts (Allcott & Gentzkow, 2017; Nelson & Taneja, 2018). However, for those who have been fooled by fake news, the effects are real. For example, a man opened fire at a pizzeria in Washington DC on 4 December 2016 after reading a viral and false conspiracy story that identified the pizzeria as the site of an underground child sex ring run by then-presidential candidate Hillary Clinton and her former campaign manager, John Podesta (Lopez, 2016). In India, fake news posts spreading on the messaging application WhatsApp have been blamed for numerous lynchings and murders of people misidentified as kidnappers (Frayer, 2018; Safi, 2018). In a small town in Mexico, a 43-year-old man and his 21-year-old nephew were burned to death by a mob responding to a rumour that had spread through WhatsApp about child abductors roaming the village (Martinez, 2018).


Such unfortunate cases make it imperative for us to understand what makes people believe in false information. By examining different stages of communication, this chapter identifies factors that can help explain why people fall for fake news. In doing so, it also identifies how the actors behind fake news take advantage of these problem areas, turning them into tools for disinformation.

Defining Fake News

Fake news is not a new term, nor is disinformation a new phenomenon. In 1782, Benjamin Franklin printed a fake issue of a real Boston newspaper reporting that British forces had hired Native Americans to kill and scalp American soldiers, women and children, aiming to push for America's independence (Parkinson, 2016). In 1938, a million Americans panicked after listening to a radio adaptation of H. G. Wells's drama The War of the Worlds, which was narrated in a radio news report format, with actors pretending to be reporters, scientists, and government officials as they told a story of a Martian invasion (Cantril, 2005). The term "fake news" has also been used to refer to different things, from political satires and news parodies to state propaganda and misleading advertising (Tandoc, Lim, & Ling, 2017). These definitions vary along two main dimensions: (i) the level of facticity; and (ii) the actual intent to deceive. While news parodies such as The Onion are based on made-up accounts and fictitious reports, political satires such as The Daily Show are based on real events and issues. Political satires and news parodies use fakery with the intention of humouring audiences. On the other hand, propaganda uses fakery to manipulate and deceive people (Tandoc et al., 2017). Unlike other forms of disinformation, such as love scams and phishing emails, fake news refers to a specific type of disinformation—it is false, it is intended to deceive people, and it does so by trying to look like real news. However, how scholars and policymakers define fake news and disinformation does not always reflect how members of the public define these terms. Some politicians, for example, use the term "fake news" to describe real news reports by real news organisations whose coverage they disagree with or whose accounts put them in an unfavourable light. In interviews and focus groups with Singaporean residents, we have observed different ways in which audiences define fake news, often based on facticity and intention.


Many of them refer to whether an article is factual or not, labelling anything that is inaccurate, exaggerated or sensationalised as fake news. Some of them also considered news they perceived as biased—even if it came from a real news organisation—to be fake news. When it comes to intention, some of those we interviewed were aware that fake news tends to sow discord or tension in society. Some also believed that fake news may be politically motivated, aimed at making citizens unhappy with their government. Still, many admitted that while they were aware of the problem of fake news, they did not really know how much fake news they were seeing. In a survey we conducted with 2000 Singaporean residents in December 2017, some 45.6% said they did not know the extent to which they were exposed to fake news. In another survey of about 1000 Singaporean residents conducted in June 2017, 17.9% admitted that they often believed posts on social media that turned out to be fake. So why do people believe in fake news? A useful framework to answer this question is Berlo's (1960) traditional model of communication. Berlo (1960) identified four components of communication—sender, message, channel, and receiver (SMCR)—and argued that each of these components is affected by a number of factors. However, like many earlier models of communication, the SMCR has several limitations, such as excluding potential noise that could impede communication flow as well as different forms of feedback (Campbell & Level, 1985). It is also a simplistic and linear model of an otherwise dynamic and complicated process. Still, the main components of the SMCR model provide a starting framework for understanding some of the different factors that make people prone to believing in fake news.

Sender Factors

Source refers to where the message originates. When it comes to news as a form of message, news organisations are traditionally identified as sources. However, this has been evolving. Social media platforms are increasingly becoming more than just a space for social networking. In Singapore, a survey by the Reuters Institute for the Study of Journalism found that, in general, 86% use the messaging application WhatsApp while 77% use the social media site Facebook (Tandoc, 2018b). The survey also found that 42% use WhatsApp and 52% use Facebook as sources of news (Tandoc, 2018b). In another survey we conducted in October 2017, involving 1113 social media users in Singapore, 47.2% said they get their news frequently or very frequently from Facebook, compared with only 37.8% from local TV news, 37.2% from local newspaper websites, and 36% from local print newspapers.


A lot of news consumption on social media is incidental—users get exposed to news just because they happen to be on those spaces for primarily non-news purposes. Whether purposive or incidental, news consumption on social media has increased significantly over the years. In some of our focus group discussions with social media users in Singapore, participants would simply mention "Facebook" when asked where they get their news from. Social media, however, has also exposed users to fake news. Aside from social media platforms being perceived as sources of news, the idea of the information source itself has become complex and confusing within social media. Indeed, theories of persuasion identify the information source as an important heuristic cue used in evaluating a message (Chen, Duckworth, & Chaiken, 1999). The heuristic-systematic model (HSM) of information processing states that a message will be perceived as credible when the source is perceived as credible (Kang, Bae, Zhang, & Sundar, 2011). This is because relying on a credible source lessens a user's cognitive load in evaluating a particular message (Chen et al., 1999). This is particularly important in the context of social media, where users are confronted by information overload. However, the idea of "source" in social media has become murky (Sundar & Nass, 2001). Traditional news organisations promote their content on social media, individual journalists share information about their news stories using their social media accounts, and ordinary users share links to news stories or post information about events they witness first-hand. This means that when dealing with messages on social media, readers may perceive these multiple sources "as a set of layers with various levels of proximity to the reader" (Kang et al., 2011, 721). For example, a friend sharing a news story posted by a news company may be perceived as a proximate source, while the news company may be perceived as the distal source. An online experiment we conducted found that participants rated a news story as more credible when it was shared on Facebook by a traditional news media account (i.e. The Straits Times) than when the same story was shared by the participants' own friends. However, this difference narrowed when the news article was low in relevance (i.e. not about Singapore) (Tandoc, 2018c). This result potentially indicates that when users' motivation to process information is low, they rely on heuristics such as the source in assessing message credibility.


Therefore, a friend sharing a news article makes it more interesting and potentially more convincing. This makes information sources an important factor in the believability and spread of both legitimate information and disinformation. Fake news producers have taken advantage of this, either by misattributing a made-up story to a well-known or credible source (e.g. making it appear as if the fake story were reported by Singapore's local newspaper The Straits Times, by mimicking its layout) or by quoting supposed experts who are actually non-existent (e.g. attributing a quote to a made-up nuclear expert). Those who spread fake news inadvertently also become unwitting accomplices to this scheme. Some users who fall for fake news and subsequently share disinformation might then act as opinion leaders for their friends or family, and their credibility makes the false story more believable to some members of their social network.

Message Factors

Message refers to the object of the communication that is passed from the sender to the receiver. On social media, messages no longer come by themselves; they now come with additional information in the form of popularity ratings. These cues—the number of likes, shares, or comments—have attracted scholarly attention. They function as popularity heuristics that can affect how people evaluate the message they accompany (Sundar, 2008). For example, a model of message credibility in the context of online communication referred to these as "bandwagon heuristics" (Sundar, 2008, 83). This group of cues can be "quite powerful in influencing credibility given that it implies collective endorsement and popularity of the underlying content" (Sundar, 2008, 84). Simply put, social media users might judge the believability of a message based on how many other users have endorsed it—via likes, comments, and shares. Fake news producers have also taken advantage of this. Some of them produce clickbait content to get more people to like, share, or comment on their fake stories, often playing into readers' biases or interests. Other fake news producers rely on an artificial network made up of click farms that use either paid trolls or bots to get more likes, shares, and comments on their posts (Albright, 2016; Tandoc et al., 2018). Click farms are businesses that provide clients—for a fee—with a large number of engagements (e.g. clicks, likes or comments) for a post, or that swarm social media with the same post or comment, by either hiring a large number of individuals or using automation (or a combination of these).


For example, politically motivated fake news can register hundreds of comments, but examining those comments one by one will reveal that many accounts post the exact same comments. When users do not critically examine such popularity cues, they might take them as collective endorsement that can make the fake content more persuasive and believable.

Channel Factors

Channel refers to the conduit of communication between the sender and the receiver—the medium through which the message is exchanged. Most research on disinformation in social media has focused primarily on either Twitter or Facebook. Twitter has attracted a lot of scholarly attention, primarily because its data is accessible (Jang et al., 2018). Researchers can collect tweets and user data, and many studies have traced how fake news reaches different users, from a particular set of central nodes reaching other users through retweets. Facebook has also been the subject of much research; while message and user data on Facebook are more difficult to access and therefore more challenging for network analyses, user activity can be observed by fellow users. A user sharing a fake news post on their timeline performs a relatively public act—their Facebook friends can see the post, unless they restrict their privacy settings. Facebook can also track posts that are going viral, and there are tools that researchers can use to monitor which messages are getting traction on Facebook. This nature of Facebook—as a relatively open social media application—makes it easier to track the spread of fake news, and researchers have conducted surveys and interviews with users to study it (Tandoc et al., 2018). Facebook has also rolled out functions to fight fake news, such as allowing users to flag what they suspect to be fake news, alerting a user that what they are about to share has been verified as fake by third-party fact-checkers, and displaying related fact-checks next to posts that link out to fake news articles (Frenkel, 2018). However, fake news now increasingly moves through closed social media applications, such as the messaging app WhatsApp (Tandoc, 2018a). In these spaces, fake news flows from sender to receiver, invisible to those who are not part of the conversation. Since WhatsApp messages are encrypted, WhatsApp itself cannot monitor which messages are going viral (Funke, 2018).


Thus, WhatsApp's response to fake news has been to detect and suspend accounts that behave like bots—quickly pumping out messages or adding accounts to groups at speeds not humanly possible. In some countries, it has limited the size of groups as well as the number of times a person can forward a message (Funke, 2018). It has also rolled out a forward indicator, which shows that a message has been forwarded—although it does not show the original source of the message (Funke, 2018). At best, these functions can slow down the spread of fake news. They cannot stop a fake news producer, however, from finding ways to create new groups, add accounts, and send messages in ways that mimic the speed of a normal person, nor can they stop a user misled by a fake news post from sending screenshots of the post to get around the forward indicator or the limits on forwarding frequency.
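As a purely illustrative sketch of how such friction works in principle, the fragment below combines a generic forward limit with a simple "faster than humanly possible" check. It is a hypothetical toy, not WhatsApp's actual mechanism; the limits, names and thresholds are assumptions chosen for clarity, and an encrypted platform would have to apply such checks to metadata rather than message content.

    import time
    from collections import defaultdict, deque

    # Illustrative values only -- not the real limits used by any messaging platform.
    FORWARD_LIMIT = 5           # max number of chats a single message may be forwarded to
    HUMAN_MIN_INTERVAL = 0.5    # seconds; repeatedly sending faster than this looks automated
    SUSPICION_THRESHOLD = 20    # too-fast sends before an account is flagged for review

    recent_sends = defaultdict(deque)   # account id -> timestamps of recent sends
    too_fast_count = defaultdict(int)   # account id -> number of suspiciously fast sends

    def allow_forward(message_forward_count):
        """Block further forwarding once a message has already been forwarded too often."""
        return message_forward_count < FORWARD_LIMIT

    def record_send(account, now=None):
        """Record a send; return True if the account now looks bot-like and should be reviewed."""
        now = time.time() if now is None else now
        sends = recent_sends[account]
        if sends and now - sends[-1] < HUMAN_MIN_INTERVAL:
            too_fast_count[account] += 1
        sends.append(now)
        if len(sends) > 100:        # keep only a bounded history per account
            sends.popleft()
        return too_fast_count[account] >= SUSPICION_THRESHOLD

As the chapter notes, such measures only slow things down: a determined producer can pace sends just under the threshold, and screenshots bypass forward counters entirely.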

Receiver Factors

Since individuals cannot attend to all the information sources available, they need to select which ones to use. Thus, individuals usually engage in selective exposure—they actively choose which information sources to use based on particular motivations (Katz, Blumler, & Gurevitch, 1973). Specifically, scholars have defined audiences' selective exposure as "the selection of media outlets that match their beliefs and predispositions" (Stroud, 2008, 342). This assumption is supported by the theory of cognitive dissonance, which argues that individuals want consistency in their cognitions (Festinger, 1962). Applied to the study of news consumption, scholars have argued that audiences will seek information sources aligned with their beliefs in order to avoid any inconsistency. Studies have found support for this assumption, with one in particular finding that people's political dispositions affected their selection of sources across different platforms (Stroud, 2008). Selective exposure is now facilitated by social media platforms, with users able to control who becomes part of their network and which pages they follow (Messing & Westwood, 2012). Since users get exposed to content through their friends' posts that appear on their feeds, without even visiting the individual pages, selection has shifted from choosing sources to choosing messages (Messing & Westwood, 2012). This has particular implications for news consumption. When users only choose sources and messages that are consistent with their ideological beliefs, they do not get exposed to other perspectives.


This facilitates positive confirmation bias, or an individual's "tendency, when testing an existing belief, to search for evidence which could confirm that belief, rather than for evidence which could disconfirm it" (Jones & Sugden, 2001, p. 59). Confirmation bias, therefore, drives news consumption (Knobloch-Westerwick & Kleinman, 2011). Users' motivations for using social media and accessing news also play a role. It seems that news is no longer being consumed and shared primarily for its information value but for its social utility. Users may share an article with a group of friends not specifically to inform or update them, but to humour or entertain them. Some users share articles with their loved ones not to educate them, but perhaps to make them feel remembered or cared for. When these motivations prevail, a message's informational value, and presumably its accuracy, becomes secondary. This opens up the process to fake news, which is often funny, outrageous, and entertaining—but not true.

Conclusion

The SMCR model offers a framework for classifying the different factors that make users prone to believing in fake news, and yet other macro-level factors, which the SMCR model does not account for, are equally important. Studies seeking to understand and investigate the phenomenon of fake news also need to consider these macro-level factors. First, fake news thrives in times of uncertainty. When people feel uncertain, they crave information. This happens in situations such as disasters or elections. These are also situations in which the supply of information tends to be low at the beginning. Thus, when demand for information is high but supply is low, some people cling to the first pieces of information they get—even if these are of low quality and accuracy. Uncertain situations also get in the way of verification, especially when alternative information sources are not immediately available. Second, fake news also thrives in situations or contexts of information overload. With the availability of a wealth of information, social media users become inundated with details and stimuli. They cannot attend to all these pieces of information. They might also start valuing information less, because information is suddenly abundant. This becomes a bad combination for information integrity and opens up spaces for fake news.


The rise of fake news is a real and complex problem requiring immediate but thoughtful and sustainable solutions. Tackling disinformation in general and fake news in particular needs a multi-pronged approach involving technological, economic, legal, and social interventions. At the root of the problem are social media users, for disinformation withers when no one believes it. An important first step, therefore, is understanding what makes individuals believe in disinformation. Such an understanding will help us come up with potential solutions to dismantle the tools of disinformation.

References

Albright, J. (2016). The #Election 2016 micro-propaganda machine. Medium. Retrieved from https://medium.com/@d1gi/the-election2016-micro-propaganda-machine-383449cc1fba.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211.
Berlo, D. (1960). The process of communication. New York, NY: Holt, Rinehart and Winston.
Campbell, D. P., & Level, D. (1985). A black box model of communications. The Journal of Business Communication, 22(3), 37–47. https://doi.org/10.1177/002194368502200304.
Cantril, H. (2005). The invasion from Mars. Princeton, NJ: Princeton University Press.
Chen, S., Duckworth, K., & Chaiken, S. (1999). Motivated heuristic and systematic processing. Psychological Inquiry, 10(1), 44–49.
Festinger, L. (1962). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Frayer, L. (2018). Viral WhatsApp messages are triggering mob killings in India. NPR. Retrieved from https://www.npr.org/2018/07/18/629731693/fake-news-turns-deadly-in-india.
Frenkel, S. (2018). Facebook says it deleted 865 million posts, mostly spam. The New York Times. Retrieved from https://www.nytimes.com/2018/05/15/technology/facebook-removal-posts-fake-accounts.html.
Funke, D. (2018). WhatsApp is limiting message forwarding to cut down on fake news. Poynter. Retrieved from https://bit.ly/2NFrrwh.
Jang, S. M., Geng, T., Queenie Li, J.-Y., Xia, R., Huang, C.-T., Kim, H., & Tang, J. (2018). A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis. Computers in Human Behavior, 84, 103–113. https://doi.org/10.1016/j.chb.2018.02.032.


Jones, M., & Sugden, R. (2001). Positive confirmation bias in the acquisition of information. Theory and Decision, 50(1), 59–99. https://doi.org/10.1023/a:1005296023424.
Kang, H., Bae, K., Zhang, S., & Sundar, S. S. (2011). Source cues in online news: Is the proximate source more powerful than distal sources? Journalism & Mass Communication Quarterly, 88(4), 719–736. https://doi.org/10.1177/107769901108800403.
Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. The Public Opinion Quarterly, 37(4), 509–523. https://doi.org/10.2307/2747854.
Knobloch-Westerwick, S., & Kleinman, S. B. (2011). Pre-election selective exposure. Communication Research, 39(2), 170–193. https://doi.org/10.1177/0093650211400597.
Lopez, G. (2016). Pizzagate, the fake news conspiracy theory that led a gunman to DC's Comet Ping Pong, explained. Vox. Retrieved from http://www.vox.com/policy-and-politics/2016/12/5/13842258/pizzagate-comet-ping-pong-fake-news.
Martinez, M. (2018). Burned to death because of a rumour on WhatsApp. BBC News. Retrieved from https://bbc.in/2zPmnAb.
Messing, S., & Westwood, S. J. (2012). Selective exposure in the age of social media. Communication Research, 41(8), 1042–1063. https://doi.org/10.1177/0093650212466406.
Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737. https://doi.org/10.1177/1461444818758715.
Parkinson, R. (2016). Fake news? That's a very old story. The Washington Post. Retrieved from https://wapo.st/2RwOwTN.
Safi, M. (2018). 'WhatsApp murders': India struggles to combat crimes linked to messaging service. The Guardian. Retrieved from https://www.theguardian.com/world/2018/jul/03/whatsapp-murders-india-struggles-to-combat-crimes-linked-to-messaging-service.
Stroud, N. J. (2008). Media use and political predispositions: Revisiting the concept of selective exposure. Political Behavior, 30(3), 341–366. https://doi.org/10.1007/s11109-007-9050-9.
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73–100). Cambridge: The MIT Press.
Sundar, S. S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52–72. https://doi.org/10.1111/j.1460-2466.2001.tb02872.x.


Tandoc, E. (2018a). Commentary: Would you know if you've been fed a deliberate online falsehood? Probably not. Channel News Asia. Retrieved from https://www.channelnewsasia.com/news/singapore/commentary-would-you-know-if-you-ve-been-fed-a-deliberate-online-9849066.
Tandoc, E. (2018b). Singapore. Digital News Report. Retrieved from http://www.digitalnewsreport.org/survey/2018/singapore-2018/.
Tandoc, E. (2018c). Tell me who your sources are: Perceptions of news credibility on social media. Journalism Practice, 1–13. https://doi.org/10.1080/17512786.2017.1423237.
Tandoc, E., Lim, Z. W., & Ling, R. (2017). Defining "fake news": A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143.
Tandoc, E., Ling, R., Westlund, O., Duffy, A., Goh, D., & Lim, Z. W. (2018). Audiences' acts of authentication in the age of fake news: A conceptual framework. New Media & Society, 20(8), 2745–2763. https://doi.org/10.1177/1461444817731756.

How News Audiences Think About Misinformation Across the World

Richard Fletcher

Abstract In this chapter, Richard Fletcher examines some of the key findings from the 2018 Reuters Institute Digital News Report. This report analysed online survey data on news consumption from approximately 74,000 respondents internationally, with particular focus on their “level of concern over and exposure to specific types of misinformation and disinformation associated with the news”. One of its main findings was that just over half of the respondents were either “very” or “extremely” concerned about bias, poor journalism, and completely made-up news. However, Fletcher explains that the level and areas of concern vary from country to country. For example, in Eastern Europe, the questionnaire showed that “misleading advertising was more of a concern than in many other parts of the world”. Fletcher analyses each key finding of the report, including the public perception of “fake news” as a term, and who news audiences think should do more to fix problems associated with misinformation. He concludes by emphasising the importance of monitoring public concern over misinformation, in order to properly address the problems it poses.

R. Fletcher (B) Reuters Institute for the Study of Journalism, University of Oxford, Oxford, UK e-mail: [email protected] © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_4


Keywords Misinformation · Disinformation · Bias · Fake news · Social media

Concerns over so-called “fake news” have now evolved into a global debate over online misinformation and disinformation. This chapter describes some of the key findings from the 2018 Reuters Institute Digital News Report (Newman et al. 2018)—the world’s largest ongoing survey of news consumption. The focus here is on news audiences, and their level of “concern over” and “exposure to” specific types of misinformation and disinformation associated with the news, ranging from completely made-up news, through to poor journalism, misleading advertising, and satire. Although the results vary from country to country, it is clear from the survey data that people are often just as concerned about misinformation stemming from the established media as a result of perceived editorial bias and poor journalistic practices, as they are about news that is completely made-up for commercial or political purposes. Most people would like media and technology companies to do more to help them distinguish authentic and “fake” news online, but views over government intervention are more mixed.

Methodology

The findings were based on an analysis of online survey data from the Reuters Institute Digital News Report project. In 2018, data was collected from around 74,000 respondents in 37 media markets. The main focus of the study is Europe, where data was collected from respondents across 24 countries, but people in markets in Asia-Pacific, Latin America, and North America were also polled. The survey was conducted by YouGov (and their partners) and the Reuters Institute for the Study of Journalism at the University of Oxford in early February 2018. It was designed to gather data on all important aspects of attitudes towards the news and news use, and included a series of questions on attitudes towards misinformation and disinformation. It is important to understand that the survey was conducted online. This means that people without internet access were not able to respond,


and the samples in each country were more representative of online populations than national populations. However, this is less of a limitation here because the primary focus of the survey is on online misinformation. The main strength of the dataset is that data collection was conducted at the same time in every market, using the same (translated) questionnaire. This allowed for a truly comparative analysis, and helped us build a more global understanding of the issues.1 Before going any further, it is worth briefly clarifying some terminology. First, it is clear that disinformation and misinformation are different. Misinformation refers to false information disseminated without the intention to cause harm, whereas disinformation refers to false information disseminated to cause harm knowingly (Wardle and Derakhshan 2017). However, I will refer to both misinformation and disinformation collectively as “misinformation” unless there is an important distinction to be made. Second, many observers have called for the term “fake news” to be dropped, because of the way it has been “weaponised” by politicians and other powerful people (Wardle 2017). I will avoid using the term here. However, it is also important to understand that the term has already entered the vernacular, so it is sometimes necessary to use “fake news” to describe what news audiences think (Nielsen and Graves 2017).

1 The limitations associated with using an online survey methodology are most pronounced in countries where internet penetration is relatively low. In many of the countries included, internet penetration exceeds 90%, but internet penetration was (at the time of data collection) below 75% in Brazil, Bulgaria, Croatia, Greece, Mexico, Poland, Portugal, Romania, and Turkey. In these countries, the sample is more representative of the “urban” online population, which is typically wealthier and more educated than the national average. This should be kept in mind when interpreting the results. For more information on the methodology, see http://www.digitalnewsreport.org/survey/2018/survey-methodology-2018.

Trust in the News

In thinking about misinformation, it is absolutely crucial to remember that debates are playing out in an environment of low trust in the news media. Simply put, the data showed that in about two-thirds of the markets surveyed, less than half think they can trust most news most of the time (see Fig. 1). There was national variation, with trust in the media consistently high in Finland (62%) and most other Scandinavian countries, but considerably lower in places like the US (33%), Greece (26%), and Korea (25%).

Fig. 1 Proportion in each country that trust most news most of the time (Source Newman et al. [2018: 17])

Importantly, the figures did not change much when we asked specifically about the trust people had in the news they regularly consumed. Across all 37 markets, just 51% said they trusted the news they consumed most of the time. Therefore, it is not the case that people had low trust because of a few untrustworthy news sources that they mostly ignore. It is also difficult to know what causes low trust in the news. Other studies have shown that levels of trust in the news are tied to levels of trust in other institutions within society, but also that trust in the news is falling in some—although not all—countries (Hanitzsch et al. 2017).

The survey also included some follow-up questions about trust in news from services like search engines and social media. These are becoming increasingly important parts of the news ecosystem for many people, but people trust them less. On average, 34% indicated they trusted news from search engines, and just 23% said they trusted news from social media. This is probably linked to the fact that in these environments, it can be unclear where the news is actually coming from (Kalogeropoulos et al. 2019), leading to uncertainty.

This environment of relatively low trust in the news can leave people feeling they might struggle to separate fact from fiction online. The survey also revealed high levels of concern about what is real and what is fake online when it comes to news (see Fig. 2). In most of the markets surveyed, a majority of the online population said they are either “very” or “extremely” concerned about this issue. However, again, there was national variation across the markets surveyed. Concern is most widespread in Brazil (85%) but much lower in countries like Denmark (36%) and the Netherlands (30%).

Fig. 2 Proportion that say they are “very” or “extremely” concerned about what is real and what is fake on the internet when it comes to news (Source Newman et al. [2018: 19])


Types of Misinformation

It is also necessary to think about different types of misinformation and disinformation. As mentioned earlier, theoretical research is emerging to define different categories, but the survey used a less theoretical starting point and instead looked at the categories used by news audiences. This was primarily done using a series of eight focus groups in four countries (Nielsen and Graves 2017). Focus group sessions were conducted in Spain, Finland, the United Kingdom, and the United States in 2017. The participants were allowed to build a definition of “news” together, but after a while, the conversations turned to the issue of “fake news”. Once the transcripts were coded, it was possible for the researchers to identify five different types of misinformation participants spoke about. They were: (i) poor journalism, (ii) propaganda or bias, (iii) misleading advertising, (iv) satire, and (v) false or completely made-up news. Though intent is always difficult to discern, it seems likely that false or completely made-up news would normally be classed as disinformation, whereas the other four are examples of misinformation. Although participants did not conceive of satire and false news as part of “the news” per se, the other three—poor journalism, propaganda or bias, and misleading advertising—were seen as aspects which may come from the professional news industry. This clearly linked back to low trust, and highlighted how people do not always see a clear line between misinformation and—for lack of a better term—“real news”.

Public Concern Over Misinformation

This typology provided the framework for a series of survey questions designed to measure levels of concern in different countries. The averages for all markets are displayed in the green bars in Fig. 3. It shows that just over half of all respondents were either “very” or “extremely” concerned about bias (59%), poor journalism (55%), and completely made-up news (58%). Two of these—bias and poor journalism—were seen by some as associated with professionally produced news content. In some countries, it was interesting to note that the figures for bias and “spin” (where particular elements of a news story are over-emphasised in order to create a misleading impression) were higher than those for completely made-up news. Figure 3 also shows that people tended to be slightly less concerned about satire (24%) and misleading advertising (sometimes referred to as clickbait) (43%).

Of course, levels of concern do vary by country—but not to the same degree seen earlier with trust. In the US, for example, the picture is similar to the average, with concern over bias and spin slightly more widespread than concern over completely made-up news. In Spain, levels of concern over all types of misinformation were among the highest overall, while the levels in Denmark were among the lowest. Across most of the markets surveyed, the rank order for each type of misinformation was usually the same. However, there are some noticeable variations. Across Eastern Europe, for example, misleading advertising was more of a concern than in many other parts of the world. This may imply that the news media in this region are more reliant on revenue streams based on generating clicks, rather than those based around paywalls.

Fig. 3 Proportion who say they are very or extremely concerned about each, and proportion who say they encountered each in the last week (Source Newman et al. [2018: 20])

Exposure to Misinformation

As illustrated by the orange bars in Fig. 3, the survey also asked respondents to indicate whether they had seen each type of misinformation in the last week. Needless to say, there were obvious issues with using self-reports to measure exposure to some of these phenomena, and it should not be understood as a measure of how much misinformation exists within a media system. However, self-reports can be used to measure perceived exposure, which measures whether people think they have recently encountered misinformation, and can be used to understand other news attitudes and behaviours.

As we might expect, levels of perceived exposure may mirror levels of concern. A relatively large share of people thought they had encountered poor journalism (42%) and bias (39%) in the previous week, but the figures were lower for misleading advertising (34%) and satire (23%). The crucial difference was that only a quarter of those surveyed thought they were exposed to completely made-up news in the previous week (26%). This created a “gap” between concern and perceived exposure that is larger than for any of the other types of misinformation. Of course, there may be good reasons why people might be especially concerned about completely made-up news. Disinformation is arguably more harmful than misinformation. This larger gap between concern and perceived exposure probably means that concern is not simply the result of direct experience. This is reminiscent of the “third-person effect”—the belief that “I will not be influenced by the media, but other people will be” (Davison 1983). With respect to made-up news, this could be paraphrased as “I do not see made-up news, but other people do, therefore I’m concerned”.

The next step for researchers will be to build models that can better explain concern over misinformation. However, this will likely be challenging, not least because the data from the Digital News Report showed that predictors of concern vary a lot between countries. For example, left–right partisanship shapes how concerned people are over misinformation in the US, but in many other countries, it makes little difference. In the US as well, people who self-identify as right-wing are more likely to be highly concerned about misinformation than those who identify as left-wing. This coincides with the way President Trump and others on the right have heavily criticised what they refer to as the ‘mainstream media’, often using the label “fake news”, leaving the issue deeply politicised. While this is not unique to the US, it is not something we see happening everywhere. For example, there were hardly any differences in concern among those on the left and the right in the data from Denmark—a pattern we see across much of Western and Northern Europe.

“Fake News” as a Term

There is one final category included in Fig. 3 that has not yet been mentioned. This is the use of the term “fake news” to discredit news media. This was also mentioned by participants in the aforementioned focus group research, but is also somewhat different in character to any of the other types included in Fig. 3. Nonetheless, the survey was still used to measure “concern over” and perceived “exposure to” the use of the term. Across all the markets surveyed, around half of respondents (49%) said they were either “very” or “extremely” concerned about the use of the term “fake news” by powerful people to discredit the news media. This meant that concern was slightly less widespread than for bias, poor journalism, and completely made-up news, but typically more widespread than for misleading advertising and satire. On average, just under one-third (31%) of respondents said they saw the term used in the previous week. It was most widely encountered in the US (49%), but figures were also high in both Romania (45%) and Hungary (44%). However, the figure fell to just 19% in the Netherlands. In some countries, it is clear that little political leverage can be gained through attempting to denigrate the news media, leading to scant use of the term “fake news”. In countries like the Netherlands, it is also possible that some part of people’s exposure to the use of the term can be explained by the coverage of international events.

Addressing Concerns Over Misinformation

Finally, the survey also contained data on who news audiences think should do more to fix the problems associated with misinformation. In other words, who do people think should address their concerns? In almost every market, a larger proportion agreed that media companies should do more to separate what is real and what is fake on the internet (75%), at least when compared to technology companies (71%) and governments (61%). Although the survey did not ask about specific actions, this may reflect a desire for the news media to do more when it comes to practices like fact-checking the claims of others, or exposing completely made-up stories produced by those outside the profession (Fig. 4).

Fig. 4 Proportion that think that each should do more to separate what is real and what is fake on the internet (Source Author’s own, based on data from Newman et al. [2018])

The desire for more action from media and technology companies varied relatively little from country to country. However, there were national differences when it came to the proportion of respondents that would like their respective government to do more to help distinguish real and fake news on the internet. The survey did not ask about specific types of government intervention, but there were noticeable divides. In most countries, around 60–75% of respondents would like the government to do more, with figures highest of all in South Korea (73%) and Spain (72%). The figures were much lower in Finland (51%), Sweden (48%), and Denmark (43%), where people were less concerned about misinformation, and might be reluctant for the government to take drastic steps. Support for government intervention is least widespread of all in the US (41%). Here, concern over misinformation was higher than in the Nordic countries, but respondents were reluctant for the government to play any significant role in matters of free speech.

Conclusion

This chapter provides an overview of how news audiences across the world currently think about some of the issues around misinformation. It emphasises how people are often just as concerned about poor journalism and bias as they are about news that is completely made-up. There are two further points that can be made about these findings. Firstly, in the same way that people can sometimes trust untrustworthy actors, they may also be concerned about issues which should not be concerning. Assessing what people should be concerned about would require a different approach, but adequately addressing the problems of misinformation nonetheless requires some understanding of what the public think. Secondly, the issues and debates around misinformation are constantly changing. Scarcely a week passes without new case studies or insights emerging, with consequences for people’s attitudes. It is therefore vital that researchers continue to closely monitor public concern over misinformation, in addition to the important work which aims to quantify the nature and scale of misinformation production.

References

Davison, W. P. (1983). The Third-Person Effect in Communication. Public Opinion Quarterly, 47(1), 1–15.
Hanitzsch, T., van Dalen, A., & Steindl, N. (2017). Caught in the Nexus: A Comparative and Longitudinal Analysis of Public Trust in the Press. International Journal of Press/Politics, 23(1), 3–23.
Kalogeropoulos, A., Fletcher, R., & Nielsen, R. K. (2019). News Brand Attribution in Distributed Environments: Do People Know Where They Get Their News? New Media & Society, 21(3), 583–601.
Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Reuters Institute Digital News Report 2018. Oxford: Reuters Institute for the Study of Journalism.
Nielsen, R. K., & Graves, L. (2017). News You Don’t Believe: Audience Perspectives on Fake News. Oxford: Reuters Institute for the Study of Journalism.
Wardle, C. (2017). Fake News. It’s Complicated. First Draft News. Retrieved from https://firstdraftnews.com:443/fake-news-complicated/.
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making (No. DGI(2017)09). Strasbourg: Council of Europe.

Disinformation in Context

Building Digital Resilience Ahead of Elections and Beyond

Donara Barojan

Abstract Donara Barojan explores the impact of emerging soft threats, such as disinformation and digital manipulation, on election vulnerability. She argues that researchers, policymakers, and journalists reporting on elections tend to focus solely on technological vulnerabilities, while overlooking individuals’ cognitive shortcomings and societal vulnerabilities. Barojan examines these vulnerabilities in great detail by describing the “toxic information environment” created when they interact, which in turn facilitates the spread of disinformation and fake news. Barojan also describes some of the tactics that the DFR Lab has observed while monitoring elections around the world “to make sure that disinformation and nefarious influence campaigns did not have an effect on the outcome of these elections or the post-electoral discourse”. Case studies from elections in Moldova, Malaysia, Mexico, and the United States illustrate the use of doctored videos, commercial and political bots (automated social media), and sock puppet (inauthentic social media) accounts. Barojan then lists a number of approaches to counter online hostile actors, which include participating in open source and digital forensic training events, following the research published by the DFR Lab to “inform the public of the challenge of disinformation”, maintaining transparent communication with the public, and harnessing the “power of technology” to track disinformation campaigns and digital manipulation in multiple languages.

D. Barojan (B) Digital Forensic Research Lab and the NATO StratCom Centre of Excellence, Riga, Latvia © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_5

Keywords Elections · Vulnerability · Digital resilience · Disinformation · Social media

Elections are a vulnerable time for democracy. Especially in the digital age, democracies are characterized by decentralized media landscapes, which leave any country’s information environment vulnerable. In light of this, our understanding of election integrity must be expanded to include emerging soft threats, such as disinformation and digital manipulation. This chapter will explore the coalescing vulnerabilities that enable disinformation and digital manipulation to have a real-world impact ahead of elections, and beyond. These will be illustrated with examples from the Malaysian, Mexican, American, and Moldovan elections. The chapter will also explain how the Digital Forensic Research (DFR) Lab is building digital resilience to counter hostile foreign influence and disinformation through research, communication, training, and technology. It will conclude with an assessment of future threats and challenges, stemming from the nefarious use of technological advancements that governments, civil society, militaries, and international organizations must prepare for.

Coalescing Vulnerabilities

Researchers, policymakers and journalists tend to focus on one of three overlapping vulnerabilities—namely technological, cognitive, or societal vulnerabilities—with most focusing only on the rapidly changing digital landscape. However, they tend to overlook two other crucial factors, namely our cognitive shortcomings and societal vulnerabilities. This approach is not holistic enough. It only covers one-third of the challenge, leaving the other two dimensions out of the conversation. The challenge of disinformation and digital manipulation sits at the heart of three coalescing vulnerabilities (Fig. 1).


Fig. 1 Venn diagram showing the overlapping vulnerabilities that contribute to the creation of a toxic information environment, where disinformation spreads unimpeded

Technological Vulnerabilities

Technological vulnerabilities refer to algorithmic bias, challenges that come with online anonymity, unchecked automation, and gaps in data privacy. We often speak of technology platforms as neutral conduits, but they are persuasive by design (Dunagan et al. 2018, 3). The main goal for any website, mobile application, or social network is to ensure users stay on their platform for as long as they can persuade them to. What social media algorithms have discovered over the years is that the kind of content encouraging us to spend more time on platforms like YouTube, Twitter, and others is increasingly inflammatory and often extreme (Fisher et al. 2018). Behavioural studies show that humans strongly favour content which provokes negative emotions—especially fear or anger—when they are engaging on social media networks. It should, therefore, not come as a surprise that social media algorithms amplify incendiary content. This practice is not only polarizing political debates, but also has far-reaching consequences in other areas, such as public health. For example, when a social media user starts out their digital journey by looking up content on the flu vaccine, they will soon be recommended anti-vaccination content (Tufekci 2018). DFR Lab ran an experiment on Twitter and found that if a user follows one hyperpartisan news site on the platform, the recommendation algorithm will recommend other news outlets with a similar bias.

Apart from that, social networks have revolutionized the publishing and digital advertising sectors, which in turn have lowered the barriers to enter the information environment for state and non-state actors alike—making disinformation and influence campaigns a high-impact, low-cost means of exploitation or attack. As an illustration: for the price of an F-22 fighter jet, a state or a non-state actor could serve every single person in a constituency of three million people 500 social media advertisements.

Cognitive Vulnerabilities

Technological vulnerabilities are powered by our cognitive shortcomings, more specifically, our decision-making, belief, and behavioural biases. Some examples of these include confirmation bias, popularity bias, bias blind spot, or shared information bias. These biases affect how individuals perceive and engage with information, oftentimes rejecting factual content that contradicts their preexisting worldviews and beliefs. These cognitive biases preclude people from rationally engaging with the facts and render fact-checking useless, further exacerbating the challenge of disinformation.

Societal Vulnerabilities

Lastly, it is crucial to acknowledge and address our own societal vulnerabilities, such as socio-economic inequality, ethnic, racial, or religious tensions, lack of social cohesion, and political chasms, which create divisions that may be exploited by adversaries. The Kremlin’s interference in the US Presidential elections of 2016 is a case in point. Russian operatives were exploiting existing racial and political tensions in the US to advance their agenda by deepening those divisions and sowing further distrust in state institutions (Nimmo 2018b). When these three vulnerabilities interact, they help create a toxic information environment where disinformation and fake news can spread very quickly—chipping away at the foundations of cohesive democratic societies.
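As a rough back-of-the-envelope check on that comparison (a sketch using assumed figures rather than numbers from the chapter): taking an assumed F-22 unit cost of roughly US$150 million and an assumed social media price of about US$7 per thousand impressions (CPM), the arithmetic looks like this:

# Back-of-the-envelope comparison of one fighter jet with a saturation ad campaign.
# Both unit-cost figures below are illustrative assumptions, not sourced data.
F22_UNIT_COST_USD = 150_000_000   # assumed rough unit cost of one F-22
CPM_USD = 7.0                     # assumed cost per 1,000 social media ad impressions

constituency = 3_000_000          # people in the constituency
ads_per_person = 500
impressions = constituency * ads_per_person        # 1.5 billion impressions

campaign_cost = impressions / 1000 * CPM_USD
print(f"Impressions needed: {impressions:,}")
print(f"Estimated campaign cost: ${campaign_cost:,.0f}")
print(f"Share of one F-22 this represents: {campaign_cost / F22_UNIT_COST_USD:.2%}")

Even under these deliberately generous assumptions about ad prices, the saturation campaign costs a small fraction of the aircraft, which is the author's point about disinformation being a low-cost, high-impact instrument.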

Exploiting Social Media Ahead of Elections

To date, the DFR Lab has monitored more than a dozen elections around the world to make sure that disinformation and nefarious influence campaigns did not have an effect on the outcome of these elections or the post-electoral discourse. What the DFR Lab has discovered is that the threat of disinformation and nefarious influence campaigns ahead of elections is nearly universal, and the way social media has been exploited ahead of elections is very similar across the globe. Some of the most common tactics the DFR Lab has observed over the years are as follows:

Publishing political leaks on social media: Sharing illegally (through hacking or theft) acquired documents from political parties, candidates, or campaigns on social media in an attempt to undermine a candidate.

Doctored or out-of-context videos: Videos that have been edited or taken out of context to create a false impression about a political party or a candidate.

Made-up stories: False stories about politicians and political parties or politically charged divisive issues.

Bots: Automated social media accounts run by algorithms, rather than real people (Nimmo 2018a). There are two distinct types of bots—political and commercial. Commercial bots amplify whatever content their handlers are paid to promote. Commercial bots can be hired to promote political content, but that is not the only type of content they may be promoting at any given time. Political bots, on the other hand, are created for the sole purpose of amplifying political content of a particular party, candidate, or interest group. They do not promote commercial content.

Inorganic engagement: Engagement generated by “like farms”—businesses that provide Facebook users with a number of likes for a fee. Those likes come either from automated Facebook accounts, or real users rewarded to participate in the scheme.

Astroturfing: Fake grassroots movements directed for political or financial gain.

Sock puppet accounts: Social media accounts run by users who pretend to be someone else online.

Trolling: People who intentionally initiate online conflict or offend other users to distract and sow divisions, by posting inflammatory or off-topic posts in an online community or a social network.

Election fraud claims: Electoral fraud claims after an election, often questioning the entire electoral process. The underlying goal of such claims is usually to incite violence or protest.

Advertisements run by foreign entities: Political advertisements run by foreign countries in an attempt to sway the election outcome by supporting a particular candidate, amplifying divisive issues or dissuading a particular demographic from voting in an election.

Four case studies from DFR Lab’s work below show how some of these tactics, namely doctored videos, commercial and political bots as well as sock puppet accounts, have been used in real-world election scenarios.

Case Studies

Moldova—Doctored Video

In June 2018, Moldova held local elections to elect new local governments all around the country. The elections in the capital of Moldova, Chisinau, were targeted by disinformation campaigns (Barojan 2018b). One week before the election, a video started trending on Facebook and a Russian social network platform known as Odnoklassniki. It was a clip from Al-Jazeera news in Arabic, with Romanian subtitles provided by the uploader. According to the subtitles, the news anchor presented a story about a pro-European mayoral candidate in Chisinau, who agreed to lease the Moldovan capital to the United Arab Emirates for 50 years. DFR Lab tracked down the original clip and observed several visual discrepancies between the original video and the version which went viral in Moldova. Apart from that, the Romanian translation did not match the news anchor’s monologue in Arabic. The news anchor was—in actual fact—discussing the bilateral relationship between the UAE and Yemen, and did not mention Moldova or the local elections in any way. Although the two copies of the viral video were watched more than half a million times across Facebook and Odnoklassniki, they did not appear to have had an impact on the outcome of the election, as the pro-European candidate ended up winning.

Malaysia—Commercial Bots

The Malaysian general election, which took place on 9 May 2018, was targeted by a large network of commercial bots (Barojan 2018a). Two weeks before the elections, DFR Lab was contacted by a Reuters reporter, who noticed two hashtags trending on Malaysian Twitter accounts which appeared to be amplified by bots. DFR Lab looked into it and found there were indeed two hashtags, both of which targeted Pakatan Harapan (PH), which at the time was the opposition coalition. The two hashtags were #SayNoToPH and #KalahkanPakatan (translation from Malay: Defeat Pakatan). Between 12 and 18 April, 21,000 accounts posted 41,600 tweets using the two hashtags. DFR Lab analyzed the 21,000 accounts and found that 98% of them were bots. Most of the bot accounts followed the exact same pattern of speech, using the two hashtags and then tagging between 13 and 16 real users in an attempt to encourage them to join the campaign. Furthermore, most bot accounts had alphanumerical usernames, which is an easy bot giveaway, as algorithms creating bots do not (yet) know the difference between a real person’s name and “Q45fg78”. Out of the ten most active bot accounts tweeting out the two hashtags, nine had Cyrillic screen names. The prevalence of bots with Cyrillic screen names did not, however, suggest that Russian social media users or the Kremlin itself were meddling in the Malaysian elections. It did, however, indicate that the individual or group behind the campaign purchased bots likely created by Russian-speaking bot creators. Most of the tweets posted by bots were accompanied by a short video, which appeared to show a political rally. DFR Lab found that the imagery came from the then-ruling Barisan Nasional party. In fact, most tweets using the two anti-PH hashtags shared Barisan Nasional’s promotional content. Impact indicators available from the social media listening tools used by the DFR Lab revealed that the campaign was largely unsuccessful. Although it reached more than 2 million accounts, it failed to spread the hashtag beyond the bot clusters.
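The giveaways described above, such as machine-generated alphanumeric handles, identical phrasing, and the mass-tagging of real users, lend themselves to simple heuristics. The sketch below is illustrative only and is not the DFR Lab's actual tooling; it assumes each account is represented as a dictionary with a username, a display name, and a list of recent tweet texts:

import re

def bot_signals(account):
    """Return a list of heuristic signals suggesting automation (illustrative only)."""
    signals = []
    # 1. Machine-generated-looking handles such as "Q45fg78".
    if re.fullmatch(r"[A-Za-z]{1,3}\d{2,}[A-Za-z0-9]*", account["username"]):
        signals.append("alphanumeric username")
    # 2. Cyrillic characters in the display name of an account tweeting in Malay or English.
    if re.search(r"[\u0400-\u04FF]", account["display_name"]):
        signals.append("Cyrillic display name")
    # 3. Near-identical tweet templates: very few distinct texts despite many tweets.
    texts = [re.sub(r"@\w+", "@user", t) for t in account["tweets"]]  # normalise mentions
    if texts and len(set(texts)) / len(texts) < 0.2:
        signals.append("repetitive tweet template")
    # 4. Mass-tagging: tweets that mention 13 or more other users to pull them in.
    mention_counts = [len(re.findall(r"@\w+", t)) for t in account["tweets"]]
    if mention_counts and max(mention_counts) >= 13:
        signals.append("mass-tagging of real users")
    return signals

# Example: a hypothetical account with a generated-looking handle and copy-paste tweets.
suspect = {
    "username": "Q45fg78",
    "display_name": "Иван",
    "tweets": ["#SayNoToPH #KalahkanPakatan @a @b @c @d @e @f @g @h @i @j @k @l @m"] * 10,
}
print(bot_signals(suspect))

No single signal is conclusive on its own; analyses of the kind described here weigh several indicators together before labelling an account as automated.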


Mexico—Political Bots

Like the Malaysian election, the Mexican general election was also targeted by bots. However, in the case of Mexico, the bots were political, not commercial (Barojan 2018c). Ahead of the Presidential and Senate elections in Mexico in 2018, DFR Lab discovered a medium-sized botnet promoting the Institutional Revolutionary Party’s (PRI) Senate and Chamber of Deputies candidates in the state of Puebla. The bot accounts were created between 8 and 16 May 2018, and had their location set to the state of Puebla. This was the first indicator that the accounts were political, not commercial. Political bot herders assign their accounts the location where the candidate or party they are promoting is running, to make sure the content promoted by the bot accounts trends in that constituency. The botnet in question amplified tweets coming from two PRI candidates—Senate candidate Juan Carlos Lastiri Quiros and Enrique Doger, who was running to represent Puebla in the Chamber of Deputies. Unlike commercial bots, which promote a variety of brands and services as well as politicians, the Puebla bots promoted the two candidates and the PRI party’s campaign materials exclusively. The botnet did not have much success promoting the two candidates on Twitter, as neither won the seat they were running for.

The United States—Sock Puppet Accounts

The United States (US) Presidential election of 2016 was targeted by a large number of sock puppet accounts—Russian agents pretending to be American voters on Twitter and other social networks. The true scope of their activities became clear only recently, when, on 17 October 2018, Twitter released an archive of more than nine million tweets posted by Russia’s troll farm between 2013 and 2018 (Brookie et al. 2018). Although the majority of the content targeted Russia’s domestic audience in Russian, at least 30% of all tweets were in English—with the vast majority of those tweets targeting the US ahead of the Presidential election in 2016. Twitter’s data revealed that Kremlin trolls created comprehensive online personalities and produced engaging digital content, focusing heavily on creating and disseminating memes and other visual materials.


This allowed them to successfully infiltrate American activist communities and lead conversations about American politics online. One such account, @TEN_GOP, is a case in point (Nimmo 2018a). The account shared pro-Trump messaging and was active between 2015 and 2017. In its two years on Twitter, it amassed more than 13,000 followers and became a prominent conservative influencer. It was frequently cited in alt-right and conservative media outlets, and was even retweeted by Lieutenant General Michael Flynn, months before he became the US National Security Advisor in President Trump’s administration. According to Twitter’s own data, at least 700,000 people were exposed to social media posts from accounts linked to the Russian government, including the aforementioned @TEN_GOP, in the lead-up to the US Presidential election. Although the electoral impact of such exposure is difficult—if not impossible—to measure, societal repercussions are more evident. There is a growing trend among social media users to accuse those they disagree with of being bots or Russian trolls despite having no evidence to support such accusations. This strongly undermines online political discourse, and only entrenches polarization within American society.

Building Digital Resilience

In a bid to counter hostile actors using the tactics outlined above, the DFR Lab has embarked on a mission to build digital resilience around the globe. Digital resilience refers to boosting digital communities’ ability to cope with internal and external challenges in the information domain. In more pragmatic terms, we want to equip digital communities with the skills necessary to verify digital content effectively, while keeping their biases and prejudices in check.

Training

The DFR Lab has run dozens of immersive Open Source and Digital Forensic training events, attended by journalists, civil society activists, and social media influencers from across Europe, North and South America. The training focuses on two distinct parts of digital information environment monitoring—content verification and automation detection. Content verification consists of identifying trending content on social media and verifying it, by identifying and analyzing its source, narrative, and desired impact. Automation detection centres on bot and botnet identification, analysis, and attribution. The goal of these training workshops is to equip participants with the skills to identify, analyze and counter disinformation and malign influence campaigns when they happen. On top of that, the DFR Lab trains journalists in communicating their research and visualizing their reporting, to make it more accessible and relatable to the general public.

Research

DFR Lab is an operationalized research organization and, as such, regularly reports on disinformation targeting elections worldwide. Unlike other think tanks and research organizations, which publish their findings in the aftermath of an election, DFR Lab follows disinformation campaigns in real time and reports on them as they happen, to ensure disinformation does not have an impact on the outcome of the election. Our research is used by journalists, who draw on it to inform the public of the challenge of disinformation, and by policymakers, who use our reporting to design effective regulatory measures to prevent electoral interference.

Communication

As important as quality research is, communicating it effectively is paramount. DFR Lab’s target audiences are journalists and policymakers, because they are force multipliers with audiences of their own. With our communication, we want to ensure that policymakers and journalists know how to verify information online and can identify disinformation independently. From the very beginning, the DFR Lab has embraced a transparent model of reporting, where we show our readers how we arrive at any given conclusion by walking them through each step in our thought process and methodology. Apart from that, we have produced a number of articles explaining our methodology in an easy-to-understand manner, providing visual aids and shareable infographics. This allows our audience to countercheck our work, and most importantly, teaches them to use our methods to verify information independently.


Technology

At the DFR Lab, we understand that to counter the threat of disinformation and digital manipulation at scale, we must harness the power of technology. We have developed several in-house tools to track networks of Kremlin disinformation in a number of languages, including English, Spanish, Polish, and German. Apart from that, we have recently launched a game on Facebook, “The News Hero”, which allows players to run their own media houses and encourages them to evaluate the accuracy of news stories. It also teaches them basic verification techniques, such as reverse image search and bias detection.

Future Threats

The advancement of artificial intelligence presents a lot of opportunities for hostile actors to expand their operations and improve their tactics. There are two key threats associated with the advancement of artificial intelligence, and of technology more broadly, in the context of disinformation and digital influence campaigns. One of them is the rise of chatbots. These are computer programmes capable of conversing with people online, which—if weaponized—will be able to engage with other users in one-on-one conversations, making the dissemination of disinformation or radicalization even more personal, targeted, and nearly impossible to track. The second challenge is the inevitable proliferation of deepfake video technology, which allows users to superimpose someone’s face or entire body onto an existing video, making it extremely difficult to distinguish between real and computer-generated videos (Schellmann 2018). The code is already freely available online. Although now used mostly to create adult films featuring celebrities who never took part in them, the weaponization of this technology in a political context is likely mere months away.

Therefore, malign influence operations will continue to become more sophisticated and difficult to identify. It is paramount that actors countering these threats harness the power of technology—namely Artificial Intelligence (AI)—to help identify and overcome these challenges. The three areas that could benefit from AI solutions in particular are information environment monitoring, visual content verification, and fact-checking. AI could be used to assist human analysts monitoring the digital information environment, and over time be trained to flag emerging threats. Technology could also be utilized to verify visual content online, as algorithms have long been better at detecting fraudulent patterns in images and videos than the human eye. Lastly, fact-checking could benefit from increased application of AI, which could help researchers find accurate information faster and automate some parts of the process and the reporting. These threats can only be countered using the same technology that made them possible in the first place. Left unchecked, the challenge of disinformation will proliferate and start sowing public discord at an industrial scale.
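One concrete building block for the visual content verification task mentioned above is perceptual hashing, which flags recycled or lightly edited imagery (re-encoded, resized, or re-subtitled) even when the files differ byte for byte. The sketch below, a minimal difference hash, assumes the third-party Pillow imaging library and that single frames have already been extracted from the clips being compared; it illustrates the general approach and is not a deepfake detector:

from PIL import Image

def dhash(image_path, hash_size=8):
    """Difference hash: compare neighbouring pixels of a small grayscale thumbnail."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical file names: a frame from a viral clip and one from the original broadcast.
# A small Hamming distance (e.g. 10 or fewer of 64 bits) suggests the frame was recycled.
viral = dhash("viral_frame.png")
original = dhash("broadcast_frame.png")
print("Hamming distance:", hamming(viral, original))

In the Moldovan case described earlier, this kind of check would have linked the subtitled viral clip back to the original Al-Jazeera footage even after re-uploading and compression.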

References

Barojan, Donara. 2018a. #BotSpot: Bots Target Malaysian Elections. https://medium.com/dfrlab/botspot-bots-target-malaysian-elections-785a3c25645b. Accessed October 2, 2018.
Barojan, Donara. 2018b. #ElectionWatch: Doctored Video Goes Viral Ahead of Elections in Moldova. https://medium.com/dfrlab/electionwatch-doctoredvideo-goes-viral-ahead-of-elections-in-moldova-5d52b49f679d. Accessed October 2, 2018.
Barojan, Donara. 2018c. #ElectionWatch: Down Ballot Bots in Mexico. https://medium.com/dfrlab/electionwatch-down-ballot-bots-in-mexico-e1bee023291d. Accessed October 2, 2018.
Brookie, Graham, Karan, Kanishk, and Nimmo, Ben. 2018. #TrollTracker: Twitter Troll Farm Archives. https://medium.com/dfrlab/trolltracker-twitter-troll-farm-archives-8d5dd61c486b. Accessed October 22, 2018.
Dunagan, Jake, Pescovitz, David, and Rushkoff, Douglas. 2018. The Biology of Disinformation. http://www.iftf.org/fileadmin/user_upload/images/DigIntel/SR-2002_IFTF_Biology_of_disinformation_tk_042618.pdf. Accessed October 2, 2018.
Fisher, Max, and Taub, Amanda. 2018. How Everyday Social Media Users Become Real-World Extremists. https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html. Accessed October 2, 2018.
Nimmo, Ben. 2018a. #BotSpot: Twelve Ways to Spot a Bot. https://medium.com/dfrlab/botspot-twelve-ways-to-spot-a-bot-aedc7d9c110c. Accessed October 2, 2018.
Nimmo, Ben. 2018b. How A Russian Troll Fooled America. https://medium.com/dfrlab/how-a-russian-troll-fooled-america-80452a4806d1. Accessed October 2, 2018.
Schellmann, Hilke. 2018. Deepfake Videos Are Getting Real and That’s a Problem. https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-isdemocracy-next-1539595787. Accessed October 15, 2018.
Tufekci, Zeynep. 2018. YouTube, the Great Radicalizer. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html. Accessed October 2, 2018.

Fighting Information Manipulation: The French Experience

Jean-Baptiste Jeangène Vilmer

Abstract Jean-Baptiste Jeangène Vilmer analyzes the impact of the massive data breach that was used in a large-scale disinformation campaign targeting Emmanuel Macron and his campaign team, in the run up to the 2017 French presidential elections. Despite the large data leak, the campaign was deemed a failure, as it did not succeed in significantly influencing French voters. Vilmer explores why this was the case by highlighting a number of mistakes made by the hackers, as well as the appropriate and effective strategies by the National Commission for the Control of the Electoral Campaign for the Presidential Election (CNCCEP) and the National Cybersecurity Agency (ANSSI). Vilmer cautions that the threat of further leaks may persist, and emphasizes a number of steps and legislation to tackle the threat of disinformation head on. Keywords Elections · Campaign · Disinformation · Hacking · Data leak

J.-B. J. Vilmer (B) Institute for Strategic Research (IRSEM), Paris, France e-mail: [email protected] © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_6


France is a newcomer in the study of information manipulation (compared to Eastern, Northern, and Central European states, which have developed expertise over the years). Our awareness is the product of, on the one hand, electoral interference that has occurred since 2014 in countries including Ukraine, Germany, the UK (Brexit), and the US, and, on the other hand, the attempted interference in the 2017 French presidential election—the so-called “Macron Leaks” incident.

The Macron Leaks1

“The Macron Leaks” refers not only to the release, just two days before the second and final round of the presidential election (Friday, May 5, 2017), of gigabytes of data (including 21,075 emails) that were hacked from Emmanuel Macron’s campaign team, but more generally to the orchestrated campaign against him that started months earlier with disinformation operations. It succeeded neither in interfering with the election (with Macron defeating the far-right candidate, Marine Le Pen), nor in antagonizing French society. For this reason, it is especially interesting to study.

Step 1: Disinformation Campaign

The disinformation campaign began with rumors and insinuations in January and February 2017, intensifying as the months passed. Attacks were both political (calling Macron an aristocrat who despises the common man, a rich banker, a globalist puppet, a supporter of Islamic extremism and in favor of uncontrolled immigration, alleging that a vote for him was a vote for another five years of President Hollande, etc.) and personal (salacious remarks about the age difference between him and his wife, rumors that he was having an affair with his step-daughter, speculation that he is gay, etc.). Last but not least was the “#MacronGate” rumor, spread two hours before the final televised debate between Emmanuel Macron and Marine Le Pen, on Wednesday, May 3 at 7 p.m.2 A user with a Latvian IP address posted two fake documents on the US-based forum 4Chan, suggesting that Macron had a secret offshore account. It was quickly spread by some 7000 Twitter accounts, mostly pro-Trump, often with the #MacronGate and #MacronCacheCash hashtags. During the televised debate, Le Pen herself mentioned it live. The rumor was quickly debunked and several media sources decisively proved these documents to be fake.3

Step 2: Data Hacking and Leaking

Interestingly, the same people who posted the fake documents on 4Chan on Wednesday announced on Friday morning that more were coming.4 Those responsible for “MacronGate” thereby provided evidence that they were the same people responsible for the “Macron Leaks” that came out later that day.

The hack came with phishing attacks. Macron’s team confirmed that their party had been targeted since January 2017.5 Several attacks were carried out by email spoofing: in one instance, for example, campaign staffers received an email apparently coming from the head of press relations, providing them with “some recommendations when [talking] to the press” and inviting them to “download the attached file containing talking points”.6 In total, the professional and personal email accounts of at least five of Macron’s close collaborators were hacked, including those of his speechwriter, his campaign treasurer and two MPs.7

The hackers waited until the very last moment to leak the documents: May 5, 2017, only hours before official campaigning stopped for the “election silence,” a 44-hour political media blackout ahead of the polls’ closure. The files were initially posted on Archive.org, then on PasteBin and 4Chan. Pro-Trump accounts (William Craddick,8 Jack Posobiec9) were the first to share the link on Twitter, with the hashtag #MacronLeaks, quickly followed by WikiLeaks. Overall, the hashtag #MacronLeaks “reached 47,000 tweets in just three and a half hours after the initial tweet”.10

The Macron Leaks saw the following pattern of operation: first, the content is dumped onto the political discussion board of 4Chan (/pol/). Second, it is brought to mainstream social networks like Twitter. Third, it is spread through political communities, notably the US alt-right and French far-right, with catalyst accounts, and retweeted by both real people and bots. The use of bots was obvious given that some accounts posted almost 150 tweets per hour.

1 This chapter was written in 2018 as an oral presentation. Previous versions or parts of it have been previously published in Nicu Popescu and Stanislav Secrieru (eds.), Hacks, Leaks and Disruptions: Russian Cyber Strategies, EUISS Chaillot Paper n°148, October 2018, pp. 75–83 and as a CSIS Brief (Successfully Countering Russian Electoral Interference: 15 Lessons Learned from the Macron Leaks, Washington, DC, 21 June 2018). Since then, the complete analysis was also published: Jean-Baptiste Jeangène Vilmer, The “Macron Leaks” Operation: A Post-Mortem, IRSEM/Atlantic Council, June 2019 (https://www.atlanticcouncil.org/wp-content/uploads/2019/06/The_Macron_Leaks_Operation-A_Post-Mortem.pdf).
2 All timestamps in this article are in GMT+2 (Paris time).
3 “How We Debunked Rumours That Macron Has an Offshore Account,” France 24—The Observers, 5 May 2017.
4 Chris Doman, “MacronLeaks—A Timeline of Events,” AlienVault, 6 May 2017.
5 Michel Rose and Eric Auchard, “Macron Campaign Confirms Phishing Attempts, Says No Data Stolen,” Reuters, 26 April 2017.
6 Mounir Mahjoubi, interviewed in Antoine Bayet, “Macronleaks: le responsable de la campagne numérique d’En marche! accuse les ‘supports’ du Front national,” France Info, 8 May 2017.
7 Frédéric Pierron, “MacronLeaks: 5 victimes et des failles de sécurité,” fredericpierron.com blog, 11 May 2017.
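A posting rate of 150 tweets per hour is far beyond what a human can sustain, and it is one of the simplest automation signals to compute from a public timeline. The following is a minimal sketch, not the methodology of the cited analysis, assuming only a list of timestamps for one account's tweets:

from datetime import datetime, timedelta

def max_hourly_rate(timestamps):
    """Return the largest number of tweets posted within any rolling 60-minute window."""
    times = sorted(timestamps)
    best, start = 0, 0
    for end in range(len(times)):
        while times[end] - times[start] > timedelta(hours=1):
            start += 1
        best = max(best, end - start + 1)
    return best

# Hypothetical timeline: one tweet every 20 seconds, i.e. roughly 180 tweets per hour.
t0 = datetime(2017, 5, 5, 21, 0)
timeline = [t0 + timedelta(seconds=20 * i) for i in range(400)]

rate = max_hourly_rate(timeline)
threshold = 60  # assumed heuristic cut-off; real analyses combine several signals
print(f"{rate} tweets in the busiest hour; likely automated: {rate > threshold}")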

Who Did It?

Attribution is a complex and sensitive issue. At the time of writing, a year and a half after the incident, the authorities in France still have not publicly attributed the attacks to any particular perpetrator. On June 1, 2017, Guillaume Poupard, the head of the French National Cybersecurity Agency (ANSSI), declared that “the attack was so generic and simple that it could have been practically anyone”.11

The expert community, however, has pointed to the Kremlin. Trend Micro, the Japanese cybersecurity firm, has attributed the phishing attempts to Fancy Bear (also known as APT28 or Pawn Storm), a cyberespionage group linked to the Russian military intelligence agency GRU.12 All of the Excel bookkeeping spreadsheets that were leaked contained metadata in Cyrillic and indicate that the last person to have edited the files is “Roxka Georgi Petroviq” (Roshka Georgiy Petrovich), reportedly an employee of the St Petersburg-based information technology company Evrika ZAO. Among the company’s clients are several government agencies, including the FSB.13 The Russian independent newspaper The Insider found Roshka in the participants list for a 2016 conference, where he was registered as “Military unit No. 26165, specialist,” and the Mueller investigation established that this Unit 26165 is the GRU unit involved in the Kremlin’s interference in the American election, i.e. Fancy Bear.14 Moreover, Putin’s confidant Konstantin Rykov, sometimes nicknamed the “chief troll,” who boasted of his role in Trump’s election, also acknowledged having failed in the case of France: “We succeeded, Trump is president. Unfortunately Marine did not become president. One thing worked, but not the other.”15

None of these facts prove anything, but the available evidence, taken together, does point in the direction of Moscow. It should be noted, however, that the user responsible for “MacronGate” two days before the leak may in fact be an American neo-Nazi hacker, Andrew Auernheimer.16 Given the well-known alliance that exists between the Kremlin and American far-right movements,17 these two hypotheses are not incompatible.

8 Founder of Disobedient Media, Craddick is notorious for his contribution to the “Pizzagate” conspiracy theory that targeted the Democratic Party. He was one of the first to spread the rumor about Macron’s supposed secret bank account on Twitter at 9:37 p.m. It was quickly retweeted by some 7000 Twitter accounts, mostly pro-Trump.
9 An infamous American alt-right and pro-Trump troll: https://en.wikipedia.org/wiki/Jack_Posobiec.
10 Ben Nimmo, Naz Durakgolu, Maks Czuperski, and Nicholas Yap, “Hashtag Campaign: #MacronLeaks. Alt-Right Attacks Macron in Last Ditch Effort to Sway French Election,” Atlantic Council’s Digital Forensic Research Lab, 6 May 2017.
11 Andrew Rettman, “Macron Leaks Could Be ‘Isolated Individual’, France Says,” EU Observer, 2 June 2017.

12 Feike Hacquebord, Two Years of Pawn Storm: Examining an Increasingly Relevant Threat, A Trend Micro Research Paper, 25 April 2017, p. 13.
13 Sean Gallagher, “Evidence Suggests Russia Behind Hack of French President-elect,” Ars Technica, 8 May 2017.
14 Kevin Poulsen, “Mueller Finally Solves Mysteries About Russia’s ‘Fancy Bear’ Hackers,” The Daily Beast, 20 July 2018.
15 Konstantin Rykov in a mediametrics.ru interview, in the documentary “La guerre de l’info,” op. cit.
16 David Gauthier-Villard, “U.S. Hacker Linked to Fake Macron Documents, Says Cybersecurity Firm,” The Wall Street Journal, 16 May 2017.
17 Casey Michel, “America’s Neo-Nazis Don’t Look to Germany for Inspiration: They Look to Russia,” The Washington Post, 22 August 2017.
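The “last modified by” field that surfaced the Roshka name lives in the standard Office (OOXML) document properties and can be read without opening Excel. The following is a minimal sketch using the third-party openpyxl library (one of several tools capable of this), with a hypothetical file name standing in for a leaked spreadsheet:

from openpyxl import load_workbook

def document_properties(path):
    """Read the core OOXML properties that forensic analysts typically check first."""
    props = load_workbook(path, read_only=True).properties
    return {
        "creator": props.creator,                  # who created the file
        "last_modified_by": props.lastModifiedBy,  # the field that exposed the Cyrillic name
        "created": props.created,
        "modified": props.modified,
    }

# Hypothetical file name, for illustration only.
for field, value in document_properties("leaked_bookkeeping.xlsx").items():
    print(f"{field}: {value}")

Such metadata is trivial to forge, which is one reason it is treated here as a clue pointing in a direction rather than as proof of attribution.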


Why Did It Fail?

In sum, the whole operation did not significantly influence French voters. Such an outcome can be attributed to a combination of structural factors, good anticipation and reaction by the Macron campaign staff, the government, civil society, and especially the mainstream media, and some luck.

Structural Reasons

Compared with other countries, especially the US and the UK, France presents a less vulnerable political and media environment for a number of reasons. First, the election of the president is direct, making any attempt at interference in the election in favor of one of the candidates more obvious. Furthermore, the French election has two rounds, which creates an additional difficulty for the attackers, as they do not know in advance who will make it to the second round. The two-round system permits the population to “correct” an unexpected result after the first round. In addition, the French media environment is robust: there is a strong tradition of serious journalism, and the population refers mostly to mainstream sources of information, with tabloid-style outlets and “alternative” websites much less popular than they are in the US and UK. Finally, Cartesianism plays a role: rationality, critical thinking, and a healthy skepticism are part of the French DNA and are encouraged from primary school and throughout one’s professional life.

Luck

The Macron team was lucky that the hackers and other actors involved in this operation made a number of mistakes. First, they were overconfident. They overestimated their ability to shock and mobilize online communities, underestimated the resistance and the intelligence of the mainstream media and, above all, did not expect that the Macron campaign staff would react—let alone react so well (see below). They also overestimated the interest of the population in an operation that ultimately revealed nothing. They assumed that creating confusion would be enough, and that the content of the leaks would somehow be secondary. But, as it became obvious that the thousands of emails and other data were, at best, boring and, at worst, totally ludicrous, the public lost interest.

Second, the idea to launch the offensive just hours before the electoral silence period was a double-edged sword: the goal was certainly to render Macron unable to defend himself, and to mute the mainstream media. But the timing of the release did not leave the provocateurs long enough to spread the information.

Third, the attack also suffered from cultural clumsiness. Most of the catalyst accounts (and bots) were in English, because the leaks were first spread by the American alt-right community. This was not the most effective way of penetrating a French population not known to have exceptional foreign language skills. It also likely alienated some French nationalist voters, who are not inclined to support anything American.

Learning from Others

The authorities learned from the mistakes witnessed during the American presidential campaign: disdain for and benign neglect of disinformation campaigns, reluctance to address and frame the hacking of the DNC, and a delayed response, to name only a few. In January 2017, French Defense Minister Jean-Yves Le Drian acknowledged that “our services have the necessary exchanges on this subject, if only to draw lessons for the future”.18

Relying on the Relevant Actors

Two bodies played a crucial role. First, the National Commission for the Control of the Electoral Campaign for the Presidential Election (CNCCEP), a special body set up in the months preceding every French presidential election and serving as a campaign watchdog. Second, the National Cybersecurity Agency (ANSSI), whose mission was twofold: to ensure the integrity of electoral results and to maintain public confidence in the electoral process. In addition, the campaign managers were quick to reach out to law enforcement and report the hack. As a consequence, only hours after the initial dump and just as the leak was underway, the public prosecutor’s office in Paris opened an investigation.

18 Jean-Yves Le Drian, interviewed in Le Journal du Dimanche, 8 January 2017.

Raising Awareness

ANSSI and the CNCCEP alerted the media, political parties, and the public to the risk of cyberattacks and disinformation during the campaign. ANSSI proactively offered to meet and educate all campaign staffs at a very early stage of the election: in October 2016, it organized a workshop. All but the Front National agreed to take part.

Showing Resolve and Determination

From the start of the electoral campaign, the French government signaled its determination to prevent, detect and, if necessary, respond to foreign interference, both publicly and through more discreet channels. The Minister of Defense said that "by targeting the electoral process of a country, one undermines its democratic foundations, its sovereignty" and that "France reserves the right to retaliate by any means it deems appropriate".19 A similar message was conveyed directly by the Minister to his Russian counterpart and by President Hollande to President Putin.

Beat Hackers at Their Own Game

ANSSI raised the level of security for voting infrastructure by securing the whole electoral process chain to ensure its integrity. Following ANSSI's recommendation, the Ministry of Foreign Affairs announced at the beginning of March 2017 the end of electronic voting for citizens abroad, because of an "extremely high risk" of cyberattacks. As the hacks themselves could not be avoided, Macron's En Marche! team set traps: "You can flood the emails of your employees with several passwords and logins, real, fake, so that the [hackers] spend a lot of time trying to understand them," explains Mounir Mahjoubi, the digital manager of Macron's campaign team.20 This diversion tactic, which involves creating fake documents to confuse the attackers with irrelevant and sometimes deliberately ridiculous information, is called cyber or digital blurring.

19 Jean-Yves Le Drian (Minister of Defense), interviewed in Le Journal du Dimanche, op. cit.
20 Mounir Mahjoubi, interviewed in Raphaël Bloch, "MacronLeaks: comment En Marche a anticipé les piratages", Les Echos, 10 May 2017.

It succeeded in reversing the burden of proof onto the attackers: the Macron campaign staff did not have to justify potentially compromising information contained in the Macron Leaks; rather, the hackers had to justify why they had stolen and leaked information which seemed, at best, useless and, at worst, false or misleading.

The Importance of Strategic Communication

During the entire campaign, the En Marche! team communicated openly and extensively about its vulnerability to hacking. When the leaks happened, En Marche! reacted within hours. At 11:56 p.m. on Friday, May 5, only hours after the documents were dumped online and four minutes before the electoral silence went into effect, the campaign staff issued a press release.21 The campaign's strong presence on social media enabled it to respond quickly to the spread of the information: staff systematically responded to posts or comments that mentioned the "Macron Leaks", and their injection of humor and irony into these responses increased the visibility and popularity of their message across different platforms. The En Marche! press release stated that the leaked documents "reveal the normal operation of a presidential campaign". As a matter of fact, nothing illegal, let alone interesting, was found. On Friday night, Macron's team referred the case to the CNCCEP, which issued a press release the following day asking "the media not to report the content of this data, especially on their websites, reminding them that the dissemination of false information is a breach of law, notably criminal law".22 The majority of traditional media outlets responded to this call by choosing not to report on the content of the leaks.

Undermining Propaganda Outlets

On April 27, Macron's campaign confirmed that it had denied RT and Sputnik accreditation to cover the rest of the campaign. Even after the election, both outlets have at times been barred from the Élysée presidential palace and from Foreign Ministry press conferences. The decision was justified on the ground that these outlets are viewed as propaganda organs of the Kremlin.

21 https://en-marche.fr/articles/communiques/communique-presse-piratage.
22 http://www.cnccep.fr/communiques/cp14.html.

This is not only Macron's position, expressed clearly during the campaign and most famously in front of President Putin at the Versailles press conference only weeks after the election; it has also been the position of the European Parliament since as early as November 2016.23

Conclusion: The Road Ahead

Overall, structural factors as well as an effective, responsive strategy allowed the French to successfully mitigate the damage of the Macron Leaks. But the threat persists and will indeed grow, for three reasons.

First, our adversaries will learn from their mistakes. They will adapt, improve, and professionalize their techniques. They have already reduced the language gap by launching RT France, which, while careful in what it publishes for the moment as it is under scrutiny, is progressively growing a network of like-minded, French-speaking, pro-Russian individuals from across the political spectrum. The pattern of 2016–2017, massive leaks preceded by an RT/Sputnik propaganda campaign, will probably not repeat itself, as it has become too obvious. Instead we should expect more discreet action under the cover of legitimate viral movements (such as #MeToo), targeting specific individuals or infiltrating mainstream media outlets that are not (or at least less obviously) linked to the Kremlin. In other words, meddlers will not launch the same disinformation campaign twice: they will tailor their approach to each situation, so we should be ready with tailor-made responses. To be prepared, we need to maintain flexibility and adaptability, and not depend on rigid models. The challenge is to balance institutional anticipation with a certain degree of creativity.

Second, information manipulation will be a staple of political campaigns to come. It will also be used between elections to sow mistrust and division within our societies. Its process, form and targets will change. We will need to better anticipate and adapt to these information environments, as well as strengthen our resilience in the long term.

23 European Parliament resolution of 23 November 2016 on EU strategic communication to counteract propaganda against it by third parties (2016/2030(INI)).

Third, because of technological developments and the rise of artificial intelligence, manipulations will become more sophisticated: improvements in voice and video editing will be such that it will be nearly impossible to detect disinformation. This could mean the end of trust within every society.

Aware of these challenges, France is preparing itself. Less than a month after his election, at a press conference with President Putin at the Versailles palace, Macron was asked why RT and Sputnik had been banned from his headquarters at the end of the campaign. He responded:

Russia Today and Sputnik were organs of influence during this campaign that repeatedly produced counterfeit truths about me and my campaign… And it's worrying that foreign media outlets — under the influence of some other party, whomever that may be — interfered by spreading serious untruths in the midst of a democratic campaign. And to that [behavior] I will not concede anything, nothing… Russia Today and Sputnik have not behaved like members of the press or like journalists, but instead have behaved like organs of influence and deceitful propaganda, no more, no less.24

This exchange made a powerful impression abroad. It signaled the determination of the new French president to tackle the disinformation issue head-on. Since then, the government has leapt into action. In the remaining pages, I will explore several steps taken in recent months.

Framing the Debate: The CAPS-IRSEM Report

In September 2017, the Foreign Ministry's Policy Planning Staff (Centre d'analyse, de prévision et de stratégie, CAPS) and the Defense Ministry's Institute for Strategic Research (Institut de recherche stratégique de l'Ecole militaire, or IRSEM, of which I am the Director) launched an interministerial working group on what we refer to as "information manipulation". Together, we visited 20 countries and conducted about 100 interviews with national authorities and civil societies. The main result is a 200-page report entitled Information Manipulation: A Challenge to Our Democracies, launched at the beginning of September 2018.25

24 Emmanuel Macron, Press Conference with Vladimir Putin, Versailles, 29 May 2017, my translation. See James McAuley, "French President Macron Blasts Russian State-Owned Media as 'Propaganda'," The Washington Post, 2 May 2017.
25 https://www.diplomatie.gouv.fr/IMG/pdf/information_manipulation_rvb_cle838736.pdf.

This report rejects the term "fake news" as both vague and itself manipulated by populist leaders, who call "fake" all news they dislike. It prefers the term "information manipulation," which we have been advocating in our internal memoranda since the beginning of 2018. This term is more useful because sometimes the problem is not that the information is fake or false, but that it is exaggerated, biased, or presented in a very emotional, tabloid-style way. Most of the time the manipulator does not even care about what is true and what is false; he is simply trying to produce an effect. It is no accident that in May 2018 an amendment to the proposed bill (see below) changed its name from a law "against false information" (contre les fausses informations) to a law "relating to the fight against information manipulation" (relative à la lutte contre la manipulation de l'information).

The report explores the causes of information manipulation, its means, the responses, and future challenges. It concludes with 50 recommendations. There is a strong focus on Russia: in the interviews we conducted, as in the academic literature, Russia is considered the main threat, and all the recent electoral interferences seem tied, directly or indirectly, to Russia.

The report identified recurring vulnerability factors: the presence of minorities (especially in the Baltic states); internal divisions (political tensions, exploited in Poland for example); external divisions (tensions between neighboring countries, unresolved historical issues for example); vulnerable media ecosystems (tabloids are popular in the UK, conspiracy websites in the US); and contested institutions (in Ukraine or the Baltic states, fertile ground for the "failed state" narrative to take root).

Vulnerabilities are not enough, however. For manipulation to succeed, it needs means, and Russia has many levers and vectors: government bodies, fake NGOs and other organizations, religious, political or economic relays abroad, and individuals (the so-called "useful idiots"). They can use white (overt), black (covert) or grey propaganda. Using these vectors, Russia can create calibrated narratives: anti-EU, anti-NATO, anti-American; exaggerating immigration, crime, and social or historical tensions. Most of the time these narratives are mutually exclusive (there are dozens of contradictory explanations of the Salisbury affair, or of the MH17 crash, for instance). The priority is not to be consistent, but to be effective.

One tactic is pitting communities against each other, supporting both sides of a social debate (race, LGBT rights, refugees, etc.). The only objective is to divide and therefore weaken.

The CAPS/IRSEM report then deals with responses. A general question is: is it better to respond at all, or simply to ignore the attack? Ignoring is a tempting option, given that refuting is repeating and may help keep the story alive. "Strategic silence" may therefore, in some cases, be the preferred option. Yet this also comes with the risk of allowing false and potentially dangerous ideas to sink into the minds of the population. Ignoring attacks should therefore be reserved for minor and inoffensive forms of information manipulation.

A key part of the report looks ahead to future challenges. We distinguish technological challenges (deepfakes, artificial intelligence) from geopolitical challenges, meaning the future trends of Russian information warfare. The latter bear comment and can be grouped into four categories: (1) kinetization (the growing interest in the physical layer of the internet, especially submarine cables and satellites); (2) personalization (for example, sending text messages to Ukrainian soldiers to undermine their resilience and will to fight, or hiding personal attacks inside legitimate movements such as #MeToo); (3) mainstreamization (the Kremlin is likely to go beyond RT and Sputnik and invest more in mainstream personalities, journalists and media outlets to legitimize its narratives); and (4) proxyzation (the use of other territories, most notably Africa and Latin America, where populations are less educated and therefore easier to influence, highly connected, and rife with ethnic and religious tensions as well as postcolonial resentment, all of which can easily be instrumentalized to hurt European values and interests).

The report ends with 50 recommendations, including 20 for states. These include:

• Avoiding heavy-handedness. Civil society (journalists, the media, online platforms, NGOs, etc.) must remain the first shield against information manipulation in liberal, democratic societies. The most important recommendation for governments is that they retain as light a footprint as possible, not just in keeping with our values, but also out of a concern for effectiveness;
• Creating a dedicated structure, inside the government, for the detection and countering of information manipulation;

• Transparency. Consider making registration compulsory for foreign media, following the American example (this is only a transparency measure); conduct parliamentary inquiries; hold the platforms accountable (make the sources of their advertising public; make them contribute to media literacy and quality journalism, etc.);
• Going international. States must increase their participation in existing initiatives such as the EU StratCom Task Force, the European Centre of Excellence for Countering Hybrid Threats in Helsinki, and the NATO StratCom Centre of Excellence in Riga. They should also send experts to compare notes and experiences at the important annual meetings in Prague, Riga, Washington DC, and Singapore;
• Media literacy and critical thinking, for adults as much as for children; states could also increase funding for research on this issue.

Acting: Legislation, Media Literacy, and Internal Organization

First, in January 2018, the president announced his intention to pass legislation on the issue of "fake news" by the end of the year. The proposed bill stirred controversy and was rejected twice by the Senate before finally being passed by the parliament on 20 November 2018. It gives the CSA (the French media regulatory authority), in the three months preceding a ballot, the power to suspend television channels "controlled by a foreign state or under the influence" of that state if they "deliberately disseminate false information likely to affect the sincerity of the ballot". It also empowers judges, at the request of the public prosecutor's office, of any candidate, of any party or political group, or of any person with an interest in acting, to order "any proportionate and necessary measures to stop such dissemination". There is a caveat, however: this applies only where "allegations or imputations that are inaccurate or misleading of a fact of a nature to alter the sincerity of the forthcoming vote are disseminated deliberately, artificially or automatically and massively through an online public communication service". These cumulative conditions may prove so difficult to satisfy in practice that they both undercut the widespread accusation that such a law threatens the freedom of the press and render the law itself rather ineffective. Overall, the law was necessary and is better than nothing but, being limited to electoral periods, it is also narrow in scope and therefore insufficient, as the threat is permanent.

Second, another important initiative is media literacy. In March 2018, the Minister of Culture committed to doubling her ministry's budget for media and information literacy, increasing it from three to six million euros.26 These funds will be used to support civil society actors (associations, journalists) working with schools and libraries; to create a "civic service program" mobilizing at least 400 young people to work on media literacy with libraries and media professionals throughout the country; and to support public broadcasting companies in their educational role.

Last but not least, the government has also started to change its internal organization, making it less compartmentalized. As the CAPS/IRSEM report notes, there is an international consensus on the need for a more coordinated approach. In France as well, there is a growing understanding that we need to adapt and change our bureaucratic culture if we want to fight information manipulation efficiently. The traditional compartmentalized approach has shown limited efficacy against an adversary using a full-spectrum or "hybrid" strategy that blurs the line between war and peace. To adapt to this fluid threat, we should be prepared to break down traditional walls and cooperate across and within agencies, even if that means unsettling established bureaucratic habits.

26 Françoise Nyssen, Speech at the "Assises du journalisme", Tours, March 15, 2018.

Disinformation and Cultural Practice in Southeast Asia

Ross Tapsell

Abstract Ross Tapsell evaluates the 'cultural practice of disinformation' in Southeast Asia, arguing that in order to combat this issue, academics must take into consideration 'what kinds of disinformation spread widely, and why?' This question is addressed in three parts: how Southeast Asians access the internet, the cultural background in which digitalisation has entered the public sphere, and the political context in which disinformation spreads. Tapsell suggests that academics tend to focus on studying Twitter as data is easier to obtain from the platform. However, this fails to take into consideration that 'millions of Southeast Asians' are using the internet for Facebook and WhatsApp rather than Twitter. A significant number of Southeast Asians obtain their news from social media and do not fact-check the news they consume. Tapsell also emphasises that many Southeast Asian citizens distrust official news sources due to their experiences of political manipulation and press corruption, and instead seek alternative information from what they consider more 'trustworthy' sources on social media. Tapsell concludes by encouraging academics to engage in 'big data' scholarship on disinformation to analyse how digital information is changing Southeast Asian societies, and to come up with solutions to the complex problems disinformation poses.

Keywords Disinformation · Digitalisation · Social media · Southeast Asia · Source

R. Tapsell (B) The Australian National University, College of Asia and the Pacific, Acton, ACT, Australia e-mail: [email protected]

© The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_7

Introduction

Southeast Asians with internet access rank among the world's heaviest users of social media platforms, so recent global debates over the role of online disinformation are extremely relevant to the region. Yet in Southeast Asia, as elsewhere, scholars are still trying to understand how 'the informational underpinnings of democracy have eroded, and no one has explained precisely how'.1 In this chapter I outline the rise of disinformation practices in the digital sphere in Southeast Asia, in particular during recent elections. Much has already been published on the rise of paid social media campaigners who produce and disseminate politically skewed content.2 In each country these campaigners have localised names: 'trolls' in the Philippines, 'buzzers' in Indonesia, and 'cybertroopers' in Malaysia. This research is important, but it generally tells us how disinformation is created and who pays for it (although the latter is more difficult to ascertain). The research conducted so far is thus 'top-down' in its approach, generally using aggregated social media data to show a large swathe of online campaigners who create and disseminate disinformation. Much of the contemporary writing on social media and elections consequently adopts the language of war: social media has been 'weaponized' by 'online armies'.

1 Madrigal, Alexis. 2017. 'What Facebook Did to American Democracy', The Atlantic, 12 October 2017. Accessed 13 October 2018: https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/.
2 Cabanes, Jason Vincent and Cornelio, Jayeel. 2017. 'The Rise of Trolls in the Philippines (And What We Can Do About It)', in A Duterte Reader: Critical Essays on Rodrigo Duterte's Early Presidency, edited by Nicole Curato, pp. 231–250, Bughaw, Manila; and Lim, Merlyna. 2017. 'Freedom to Hate: Social Media, Algorithmic Enclaves, and the Rise of Tribal Nationalism in Indonesia', Critical Asian Studies, 49 (3), 411–427.

In this chapter, I argue that scholars need to think more about the cultural practice of disinformation, which means we must examine what kind of information society is emerging in the new, digital public sphere. I hope to prompt the reader to think more about a 'bottom-up' approach to disinformation, and to ask: what kinds of disinformation spread widely, and why? Why do people share some disinformation and not other kinds? Do people believe what they share? In much of the 'top-down' scholarly analysis, these questions are either ignored completely or presumed to be obvious: citizens are assumed to be regularly 'duped' by disinformation campaigners, and to share material only because they believe it. To answer these questions more comprehensively, we need to think more about:

a. the different ways citizens access the internet, and under what circumstances they engage with it;
b. the historical and cultural backgrounds in which digitalisation has entered the existing public sphere;
c. the political context in which disinformation spreads.

We need to analyse disinformation practices and audiences in their regional and national contexts, indeed even in their local contexts, to better understand the shifting information societies being forged by digitalisation.

How Southeast Asians Access the Internet and Why It Matters

We often think of the 'digital divide' as a clear distinction between those who have access to the internet and those who have no access at all. In this frame, roughly 30% of Indonesians, 65% of Filipinos and 75% of Malaysians have regular access to the internet. Much of this access occurs via mobile phones (an estimated 70% of internet users in the region) because of the massive expansion of the smartphone market through cheap, Chinese-made Android handsets. But these headline statistics do not tell the full story, as these social media 'communities' in Southeast Asia are not all the same.

When answering professional surveys, many citizens answer 'yes' to having Facebook but 'no' to having internet access.3 As Elizabeth Pisani wrote: 'Millions of Indonesians are on $2 a day and are on Facebook'.4 It is this group of people 'on the digital divide', with minimal access to the internet, that is generally understudied in the literature on disinformation.

We need to think more about how the type of social media platform citizens use results in differing forms of disinformation. For example, the Philippines has been described as the 'Facebook nation' because of the prominence of Facebook. Because Filipinos use Facebook Messenger regularly, alternative messaging apps like Whatsapp are not growing as rapidly as they are in neighbouring Malaysia and Indonesia. In Indonesia, Facebook is becoming less interesting for younger people precisely because of its ubiquity: they see their parents, extended family and people they have never met all on the home page, and want a more 'exclusive' site where they can post material mostly to their friends. This explains the rise of the social media platform Path in the early 2010s and, subsequently, the rapid rise of Instagram amongst Indonesian urban youths. Cutting-edge scholarship in Indonesia and Malaysia examines the role of celebrity Instagram Muslim preachers and how they are increasingly crucial in shaping political discourse.5

Contrast this with scholarship that focuses predominantly on Twitter because its data is much more easily available for 'big data' analysis. Indeed, much of the work on disinformation in the Philippines (including by the Oxford Internet Institute)6 shows a rising trend of 'bots' and paid, fake accounts on Twitter. Yet in Southeast Asia, Twitter use is declining and is generally confined to urban elites.

3 Jurriens, Edwin and Tapsell, Ross. 2017. 'Challenges and Opportunities of the Digital "Revolution" in Indonesia', in Digital Indonesia: Connectivity and Divergence, edited by Edwin Jurriens and Ross Tapsell, pp. 1–17, ISEAS, Singapore, p. 5.
4 Pisani, Elizabeth. 2014. Indonesia Etc: Exploring the Improbable Nation, Granta, London, p. 3.
5 See Asiascape: Digital Asia, Volume 5, Issue 1–2, 2018, 'Online Publics in Muslim Southeast Asia: In Between Religious Politics and Popular Pious Practices'.
6 Bradshaw, Samantha and Howard, Philip. 2017. 'Troops, Trolls and Troublemakers: A Global Inventory of Organised Social Media Manipulation', The Computational Propaganda Project, Oxford Internet Institute Working Paper 12. Accessed 2 October 2017: http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-Trolls-and-Troublemakers.pdf.

In Malaysia, for example, 64% of Malaysians said they gathered news from Facebook and 54% from Whatsapp, far higher than Twitter, at 25%. This is not to say Twitter is unimportant. Rather, scholars give significant weight to studying Twitter because its data is easier to obtain, and the body of literature that has emerged therefore generally examines urban, elite usage of disinformation. Scholars need to think more about the millions of Southeast Asians who are really only using 'the internet' for two things: Facebook and Whatsapp. Gathering big data is more difficult on these platforms than on Twitter, particularly on Whatsapp and other encrypted messaging services like WeChat, LINE and Telegram, but they are no less important.

Internet speeds are also vital to understanding disinformation practices in the region. In the Philippines, for example, fixed internet speed stood at 15.2 megabits per second, far below the global average of 40.7 Mbps, while mobile internet speed was 13.5 Mbps, below the global average of 21.3 Mbps. In Malaysia it is not much better, at 15.96 Mbps via a mobile device, and only 56% of Malaysian internet users said they watched an online video every day.7 This matters because it means Southeast Asians with internet access are more likely to use platforms that function effectively at slower speeds, such as Facebook's 'Free Basics' and simple messaging apps like Whatsapp. Many of these citizens are therefore not loading full websites, not watching lengthy videos, and not spending time on the fact-checking sites that governments and civil society organisations increasingly urge them to use. Disinformation tactics are thus not uniform: they depend greatly on the device citizens use to access the internet, the social media platform they use regularly and therefore potentially trust more, and the internet speeds they endure in order to receive and share political and other disinformation material. Only by studying these communities more closely can we come to broader conclusions about which disinformation practices work for which groups of people, and why. In this regard, scholarship perhaps needs to shift its focus away from the allure of big data analytics and towards more traditional forms of media anthropology, which requires living amongst rural communities, observing and engaging with their digital media practices.

7 We Are Social. 2017. 'Time Spent on the Internet, Global Webindex Based on a Survey of Internet Users Aged 16–64'. Accessed 21 September 2017: https://wearesocial-net.s3.amazonaws.com/uk/wp-content/uploads/sites/2/2017/01/Slide034.png.

Media and 'Trust' in Southeast Asia

A large body of scholarly literature is concerned with the way in which trust in traditional sources is being manipulated in the digital era. Scholars, particularly political scientists, generally like to use national surveys to understand recent trends. For example, in a 2017 survey on trust in major institutions in Indonesia, the lowest ranked were political parties (45%), the parliament (55%), the courts (65%) and the mass media (67%). Even the notoriously corrupt Indonesian police (70%) ranked higher than the press.8 A 2017 survey found that Filipinos with internet access trust social media more than mainstream media, with 87% of respondents purporting to 'trust' information on social media.9 These surveys are good starting points, but we need more depth to understand why people are moving away from mainstream media and towards alternative sources of information. Below I propose a few brief answers.

In Indonesia, a feature of the mainstream media is that politically motivated owners encourage highly partisan reporting.10 On the night of the closely fought 2014 presidential election, for example, TVOne treated the losing candidate (Prabowo Subianto, whom the network's owner had supported) as if he had won, and produced fake polls to back its claims. Skewed coverage like this has led to a general belief that all mainstream media (and polling institutes) are partisan, when in fact credible ones do exist. Thus, as trust in mainstream media declines, citizens turn to alternative online sources of information, news and views, a shift reinforced by increasingly ubiquitous social media usage.

In the Philippines, a new media ecosystem has arrived with the prominence of numerous pro-Duterte bloggers. Mocha Uson, a popular singer and model, campaigned strongly for Duterte through her blog, which attracted four million likes and records around 1.2 million engagements each week.

8 https://www.iseas.edu.sg/images/pdf/TRS10_17%20(002).pdf.
9 Ballaran, Johanna. 2017. 'Online Filipinos Trust Social Media More Than Traditional Media—Poll', Inquirer.net, August 29, 2017. Accessed October 1, 2017: http://technology.inquirer.net/66402/filipinos-online-trust-social-media-traditional-media-poll?utm_campaign=Echobox&utm_medium=Social&utm_source=Twitter#link_time=1503991006.
10 Tapsell, Ross. 2017. Media Power in Indonesia, Rowman and Littlefield, London and New York, p. 95.

R. J. Nieto, with his blog Thinking Pinoy and the high engagement of his 470,000 Facebook followers, claims to have been 'kicking mainstream media's arse' in terms of engagement with readers. 'The tables have turned,' he wrote on his blog in 2017, '[and] social media has taken over the task of shaping public discourse'. Martin Andanar's Martin's Man Cave podcast also grew in popularity; Duterte was a guest on the show as early as June 2015, when Andanar introduced him as a 'rockstar' and 'one of the best local executives we have in the country'. Sass Rogando Sasot (@senyora, with 3.91 million Twitter followers) became a prominent interlocutor during the 2016 presidential election campaign. Many Duterte supporters online attacked professional journalists and continued to decry the mainstream media as 'biased', allowing alternative sources of information to become more popular.11

In the Southeast Asian context, it is clear that citizens are not particularly trusting of official sources. There are a number of conclusions we could draw as to why this is so. I have argued elsewhere that the legacy of authoritarianism and state-sponsored propaganda has a lasting impact on trust in Southeast Asia.12 Put simply, when citizens grow up on a diet of slanted pro-government official information, they are more likely to turn to unofficial sources. This perhaps explains why young people in Southeast Asia often complain that their parents are sharing hoax news in Whatsapp groups and on Facebook. The parents do not always believe it, but they do think that sharing all forms of information, even when they are unsure whether it is accurate, is better than not sharing any information at all. There is a cultural argument here too: Southeast Asian society is shaped by a more open attitude towards the sharing of information. Villagers in Southeast Asia will regularly ask each other 'where are you going?' or 'who are you meeting?' without it being considered rude. In some ways this culture of open sharing and extensive chatter has simply moved online, explaining in part why Southeast Asians adopt social media so vigorously and at such high rates of engagement, while countries like Japan, where society is generally considered more 'private', have comparatively lower levels of social media engagement.

11 Cabanes, Jason Vincent and Cornelio, Jayeel. 2017. 'The Rise of Trolls in the Philippines (And What We Can Do About It)', in A Duterte Reader: Critical Essays on Rodrigo Duterte's Early Presidency, edited by Nicole Curato, pp. 231–250.
12 Tapsell, Ross. 2018. 'Disinformation and Democracy', New Mandala, 12 January 2018. Accessed: https://www.newmandala.org/disinformation-democracy-indonesia/.

Thus, Southeast Asia's culture of sharing disinformation did not originate with digital technologies. In a 1993 article on the role of rumour during Suharto's military dictatorship (1965–1998) in Indonesia, the anthropologist James Siegel wrote that 'rumor is subversive in the New Order not when its content is directed against the government, but when the source is believed not to be the government'.13 Under authoritarian rule, the practice of passing on information, rumours and gossip became a heightened aspect of being an Indonesian citizen: a way of getting at the real story, the extra information. A non-government source, particularly someone you trust, became more believable. As noted above, in many ways this practice continues in Southeast Asia; it has simply moved online. The digital era is changing the avenues through which society produces, disseminates and receives news and information, but it does not necessarily change the cultural practices of information-gathering and the sharing of news and views, especially in rural contexts. We are only just beginning to understand the complexities of the way in which new technologies re-shape political discourse.

The Political Context in Which Disinformation Spreads

A third question scholars need to consider more deeply with regard to disinformation is why some material is so widely believed by ordinary citizens while other material is ignored. If political parties and candidates are increasingly hiring professional online campaigners to produce 'hoax news' about their opponents, which mudslinging sticks, and why? The answer for most political scientists is that 'identity politics' now prevails in election campaigns. That is, campaigns which target voters along ethnic and religious lines are more likely to engage voters' emotions, and thus more likely to have a lasting effect on the decision whether or not to vote for a candidate. Indeed, there is much evidence to suggest that a 'post-truth' society, fuelled by disinformation tactics, is increasingly part of elections in Southeast Asia. The Philippine election of 2016 fits into current debates surrounding post-truth politics, in which campaigns of emotion prevail over policy detail.

13 Siegel, James. 1993. 'I Was Not There, But…', Archipel, pp. 59–65. http://www.persee.fr/doc/arch_0044-8613_1993_num_46_1_2937.

As Curato has observed, Duterte 'put on a spectacle that pushed the boundaries of traditional political practice'.14 She further cites the sociologist Randy David (2016), who described Duterte's campaign as 'pure theatre - a sensual experience rather than the rational application of ideas to society's problems'. Central to Duterte's election campaign was its depiction of the Philippines as a 'narco state'. At the start of the campaign, Duterte stated that the country would become a narco state due to illegal drugs, which he said were 'a clear national security threat'. The key to the 'narco state' message was the Duterte team's ability to link the issue of drugs to broader anxieties about crime, corruption and a lack of faith in law enforcement in the Philippines.15 In this way, the 'narco state' became something emotively bigger and cast Duterte as the outsider vigilante, a 'Dirty Harry' authoritarian strongman who could overcome the 'narco state' cronies.

In Indonesia, the 2017 Jakarta election saw a highly febrile, sectarian campaign in which voters were encouraged to ally themselves with candidates who shared their religious affiliation. There were effectively two election campaigns: the official campaign, in which candidates talked of policies (such as education and entrepreneurship), and the more emotive 'unofficial campaign' online (in which material urged voters not to vote for the incumbent because he had insulted the Koran or, more covertly, because he had Chinese heritage). For their part, the victorious candidate Anies Baswedan's team was also concerned about a 'black campaign' against him, such as memes or messages stating that he was a Shia Muslim (90% of Indonesians are Sunni Muslim) or that he wanted to introduce Sharia law. In the cases of the Philippines and Indonesia, it seems social media did not support the advancement of nuanced policy debate during election periods, and democratic discourse suffered.

Malaysia, however, provides an interesting contrast to this argument. In the 2018 elections, the smartphone, Facebook and Whatsapp were used as a 'weapon of the weak' to question the legitimacy of the increasingly authoritarian rule of the corrupt Prime Minister Najib and to help undermine the semi-authoritarian Barisan Nasional government. When Malaysians discussed the election, they invariably began to talk about Najib and the 1MDB wealth fund controversy.

14 Curato, Nicole. 2017. 'Flirting with Authoritarian Fantasies? Rodrigo Duterte and the New Terms of Philippine Populism', Journal of Contemporary Asia, 47 (1), 104.
15 Curato, Nicole. 2016. 'Politics of Anxiety, Politics of Hope: Penal Populism and Duterte's Rise to Power', Journal of Current Southeast Asian Affairs, 35 (3), 91–109.

When subsequently asked where they got their information (given that Malaysia's mainstream media mostly avoided reporting the issue), they would almost always say 'Facebook' or 'Whatsapp'. Details of Najib's 1MDB corruption scandal, suppressed in the mainstream media, spread widely on social media and messaging applications. While there was a general belief in Malaysia that the details of 1MDB were too complex to resonate in rural towns and villages, the message of government corruption was clearly widespread, in large part due to social media.16

While Facebook and Whatsapp triumphantly undermined government-controlled mainstream media in semi-authoritarian Malaysia, these same platforms are contributing to declining trust in professional journalism in democratic countries like the Philippines and Indonesia. How do we reconcile these different cases? Clearly, winning the 'social media war' is not simply a matter of the amount of official campaign material disseminated by political parties and their social media professionals, but of producing the type of content that citizens share 'organically' with their friends and family. We need more studies examining why some disinformation tactics work and others do not, and in what political context, in order to fully understand the environment in which society may or may not be 'threatened' by these new practices. Furthermore, while Malaysia's 2018 'fake news law' would have been a disaster for press freedom and freedom of expression, police crackdowns elsewhere in Southeast Asia on disinformation companies that use racism and bigotry to gain clicks (and thus money) may be welcome. Much depends on the political context and the level of media freedom already present in each society, the type and intention of the government constructing the laws, and the varying power and will of the security forces who implement them.

Conclusion and Solutions

The digital era is changing the avenues through which we receive information. Given the prominence of the smartphone for news and information in Southeast Asia, we should look to this region to see what these new 'communities' look like, and how society is changing.

16 Tapsell, Ross. 2018. 'The Smartphone as "Weapon of the Weak": Assessing the Role of Communication Technologies in Malaysia's Regime Change', Journal of Current Southeast Asian Affairs, December.

To do so, we need a wide range of scholarly fields to engage more deeply with 'big data' scholarship on disinformation. Anthropologists, historians, gender and cultural studies scholars, sociologists and other scholars of the humanities have much to contribute to the field, even if they know little about big data algorithms. The fields of political science and media studies, and social media studies in particular, have become excited by the possibilities big data analytics can bring, and quantitative studies of politics and social media are plentiful in Western universities. The ethical questions for scholarship raised by the recent Facebook and Cambridge Analytica scandal have led many to pause and think more about the privacy issues around collecting such information. Meanwhile, encrypted messaging platforms pose new challenges for scholars of disinformation. The challenges are many, but given the region's increasingly ubiquitous use of social media platforms there is no reason Southeast Asia cannot lead the world in finding solutions to these complex problems, rather than waiting for instructions from Silicon Valley.

Hate Speech in Myanmar: The Perfect Storm

Alan Davis

Abstract Alan Davis evaluates key factors contributing to the inter-ethnic conflict in the Rakhine State in Myanmar. He argues that during the country's transition from a totalitarian state to a more open society, the international community sought out easy indicators in order to 'declare success quickly and then move on'. The dramatic increase in the number of media outlets in Myanmar was recognized as proof of increased levels of freedom of expression in the country. However, Davis states that with the rise of social media—mainly Facebook—traditional media is becoming less relevant. Davis highlights the increase in hate speech against 'Muslims in general and the Rohingya in particular'. He argues that the international community should have also been more aware of the 'confluence of factors, crowned by Facebook' in Myanmar, contributing to the risk of inter-communal violence and the ethnic cleansing of the Rohingya Muslims.

Keywords International community · Rohingya Muslims · Totalitarian · Facebook · Hate speech

A. Davis (B) Institute for War and Peace Reporting, London, UK e-mail: [email protected]

© The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_8

Once more, the rallying cry 'never again'1 coined in the wake of the Holocaust has proved hollow, as the UN found that genocide and ethnic cleansing had been committed against the Muslim Rohingya by elements of the Tatmadaw, the Buddhist Bamar-dominated Myanmar army.2 Such atrocities are, however, never spontaneous,3 and the events of August and September 2017 did not simply happen out of nowhere. On top of structural realities and dynamics clearly apparent to dispassionate observers of Myanmar, there were additional key factors whose contributory impact and significance are only now being fully understood, and whose presence probably ensured that the darkening clouds of inter-ethnic conflict ultimately developed into a perfect storm, one that has all but wiped the Rohingya off the map of Myanmar for now and possibly for a generation or more to come. These factors include the country's 2012–2013 telecommunications revolution, a direct by-product of the transition process; Facebook's decision to piggyback on this through its Free Basics4 service, which saw the social media application preloaded onto virtually every smartphone sold in the country from 2013 onwards and provided totally free but very limited access to the Internet (Facebook effectively was the internet in Myanmar); and, finally and not insignificantly, the impact of Islamic State's terror attacks in shaping attitudes and feeding propaganda inside the country. All these factors, and how they combined and fed off each other, will be examined in further detail below; we were all too slow to appreciate and understand their likely effect. There were other factors, historical and structural, that were far more obvious and should have been picked up, factored in and mitigated against well before the events of August 2017. After all, it is not as if Myanmar has been off the international radar. Rather, it has been the focus of a huge amount of attention, and the country and its pro-democracy leaders were very much the darlings of the international diplomatic and development community.

1 https://www.nytimes.com/1990/11/19/opinion/l-but-meir-kahane-s-message-refuses-to-die-source-of-never-again-080490.html.
2 https://www.jpost.com/Diaspora/Never-Again-From-a-Holocaust-phrase-to-a-universal-phrase-544666.
3 Ban Ki-moon, former UN Secretary-General, Framework of Analysis for Atrocity Crimes: A Tool for Prevention, UN 2014, Foreword, p. iii.
4 https://www.mmtimes.com/business/technology/20685-facebook-free-basics-landsin-myanmar.html.

Nobody, then, should really have been surprised by what happened in Rakhine State. Yet we were. Again. Why?

To begin with, it is probably fair to say that the key drivers of the international response to Myanmar from 2011 onwards lacked emotional distance from the country's political leadership and thus did not undertake sufficient due diligence or test their assumptions about what was most likely to happen. Secondly, a related question needs to be asked: what was Myanmar really transitioning to? Were we ever really sure the objectives of the international community and those of Daw Aung San Suu Kyi and the National League for Democracy (NLD) were fully aligned? Future historians may conclude there was a mismatch between the level of democracy and freedom that Suu Kyi believed in and the level the donor community expected. In due course, it may be determined that this transition was really only ever about a move from military Bamar rule to civilian Bamar rule. A subsequent transition to a fully inclusive democracy may well happen eventually, but it was surely very naïve to imagine such a transformation could be delivered in a handful of years. History shows us how long it takes for real democracies to take root, and their measure is not elections or the number of media outlets. Much more accurate indicators are surely how inclusive a country is and how well it protects the rights and interests of the weakest in society. Transitioning countries and developing democracies will always have serious problems, but they should not have genocides, and certainly not when they are under such close support and guidance from the West. Essentially, then, I would suggest there was a fundamental mismatch between the interests and objectives of Suu Kyi and those of the international community, alongside the latter's incredible naïveté about where Myanmar was at the start of the transition process and where it was ever likely to be within a few short years of international support. These mistakes, I believe, helped create the perfect storm that led to the genocide and ethnic cleansing of the Rohingya.

So the clouds were already darkening. But on top of this, we need to see how problematic other pre-existing factors would turn out to be. These include Myanmar's recent history, the key dynamics shaping its society, and the structural challenges that must always be factored in when a totalitarian country finally opens up and its people are suddenly freed.

Even the most rudimentary understanding of these issues would have exposed the extent to which, without appropriate intervention, there was always going to be a significant risk of inter-communal violence in Myanmar as part of any transition from a closed to an opening society. As we all know, the country is a historically contrived and unstable Union of 135 different ethnicities, or 136 if you include the Muslim Rohingya, which the majority Bamar population refuse to do. The country was artificially created by force by outsiders, the British, and the Bamar narrative has largely been that its Buddhist identity and culture were deliberately and significantly weakened by the British, who considered and ruled it as part of 'Greater India'. A great deal of suspicion of outsiders, if not outright xenophobia, stems from this experience and period. From 1962 until 2011, the country was only marginally less successful than North Korea in cutting itself off entirely from the rest of the world. Like North Korea's leadership, the Tatmadaw generals enforced their own version of Juche5 or self-reliance on the rest of the population, and the fear and psychological pressure people lived under when I first visited the country in 1991 as an undercover journalist appeared immense.

Now let us imagine the challenges North Korea would face if Pyongyang suddenly started to transition to a democracy and relaxed censorship. North Korea, however, is a clearly delineated country with a population that is 99.8% ethnically the same.6 Myanmar's demography is as far away from this as you can possibly imagine. What might we expect if North Korea had an ethnic mix similar to Myanmar's, complete with many unresolved internal conflicts and disputes over names, borders and land? The country was effectively a highly pressurized steam cooker, brutalized by the military for almost half a century: it was always going to boil over when the lid was suddenly whipped off. Once again, given these factors, the international community should have been far more prepared to act and to alleviate the effects of what was always highly likely to happen. In another context, former Secretary of State Colin Powell famously claimed that if the United States and its allies invaded Iraq and 'broke it' to depose Saddam Hussein, they would end up 'owning it'.7

5 http://www2.law.columbia.edu/course_00S_L9436_001/North%20Korea%20materials/3.html.
6 https://study.com/academy/lesson/north-korea-ethnic-groups.html.
7 https://en.wikiquote.org/wiki/Colin_Powell.

And yet the international donors who took it upon themselves to be the handmaidens of the democratic transition seemed very much to underplay or ignore the combative forces and dynamics that had long been swirling under the surface in Myanmar. It should have been obvious that a long-subjugated people suddenly given the right of free expression will not necessarily use that right responsibly to begin with: given that people had lived for more than half a century without much in the way of free will, what did anybody really know about responsibility? How could they? When a very poorly educated people who have been brutalized, threatened and indoctrinated for decades are suddenly granted freedoms, they will inevitably use these to promote and defend their own interests against all others. This is a simple fact, and this is exactly what happened. It is like breaching a dam: the water will be torrential at the point of release, and only much further downstream will it finally become placid. Lest we forget, ultra-nationalists, including the monk and anti-Muslim activist Wirathu, who had been arrested and imprisoned under the military junta for inciting hatred against Muslims, were released as part of the overall transition towards freedom of expression and 'democracy'.

There are also clear historical lessons outside Myanmar that should have been much more obvious to the key international supporters of the process. My own organization, the Institute for War & Peace Reporting (IWPR), was established in Europe in 1992, in the midst of the inter-ethnic conflict in Yugoslavia. As a journalist reporting on the initial war between Serbia and Croatia the year before, I had been a direct observer of the rapid rise of hate speech and of how former neighbours and classmates turned into deadly foes in the space of just a few weeks, as the physical and figurative shutters came down and the barricades went up.8 Reporting from inside the besieged town of Vukovar in September 1991, it was almost impossible to fathom how rapidly the situation had deteriorated, to the extent that people who had lived peacefully side by side and even intermarried for decades under the military rule of Marshal Tito were now shooting at each other across the sunny cornfields of southern Europe.

8 Introduction, p. 5. https://iwpr.net/sites/default/files/download/publication/regionalmedia_0.pdf.


in ignoring the very clear parallels and warnings. Post-World War II Yugoslavia was composed of a handful of different ethnicities, religions and states: It was held together under the grip of the military, which was dominated by the country’s largest group and region, Serbia. The death of Marshal Tito in 1980 ushered in dynamic changes across Yugoslavia—the most dominant ones being the end of military rule, the rise of free expression—and a rising sense of nationalism. As in the former Yugoslavia, the very attributes of free expression and association granted in the wake of the transition from military rule in Myanmar were used by the majority Bamar to target others—particularly those like the Rohingya who were at the very bottom of the ethnic pile and least able to argue or push back. We had a first clear indication of this back in 2012 in Rakhine State when, less than four months after censorship was relaxed and a year after the military junta was officially dissolved, anti-Rohingya riots in Maungdaw Township left 200 dead and thousands displaced. The inherent structural challenge and clearly rising threat to minorities should have been addressed by the powers that be: The problem, though, was that there was a political vacuum and absent leadership, from the time of retired general Thein Sein’s rule through to the leadership of the NLD, when it came to addressing issues not of primary concern to the majority Bamar. For its part, the international donor community was unable to respond quickly enough to fill that vacuum, given it had been too deferential and hesitant from the start and—as noted above—had already misjudged the dynamics and processes that were actually happening. For well over 20 years before the transition began, the simplified narrative peddled by key players, including diplomats, and adopted by international public opinion and donors was of a peaceful Buddhist people held hostage by a vindictive military hell-bent on retaining power at any cost. The army persecuted students and monks—and an Oxford-educated symbol of resistance who, ironically, just so happened to be the highly articulate and photogenic daughter of the country’s assassinated independence leader, popularly known as Bogyoke—the General. Her own personal story mirrored that of ‘her’ people: And yet ‘her’ people were the Buddhist Bamar—the very same people who filled all the positions of power in the Tatmadaw and who were responsible for the brutality of the past 50 years. That there were many dozens of minority ethnicities inside the country who thought very little of Suu Kyi or her father’s legacy was very largely left unreported to the world outside.


When the transition from military to civilian rule finally started happening in 2011, the international community did what it typically does in transitional countries: It sought the path of least resistance and best optics in order to declare success quickly and then move on. To be fair, there are always huge cost implications in transitioning states and nobody has endless pockets—and yet even so, in the case of Myanmar, the most basic of indicators were chosen in an attempt to prove ‘mission accomplished’—namely the victory of Suu Kyi’s NLD in national elections held in November 2015. Given that some of the famous Generation 889 activists who had suffered at the hands of the military for years inside Yangon’s notorious Insein Jail were now elected MPs and government officials—surely that was proof enough of a successful transition? Not only this, but the country’s economic transformation was plain to see: After decades of being South East Asia’s economic basket case,10 the country was finally joining the modern world: Banknotes were no longer issued in denominations that added up to multiples of nine because it just happened to be the lucky number of former leader General Ne Win, and billboards lining the road into Yangon from the airport no longer warned visitors in English that ‘unruly elements would be crushed’. Instead they advertised Myanmar Beer and offered the latest Audi and five-star condominium opportunities on Inya Lake. To be fair, donors have invested heavily in improving basic services and infrastructure: Again, these improvements were very tangible. The ‘success’ of the transformation could then be easily shown off and measured in the number of new hospitals and classrooms being built and new businesses opened. All this was, however, done at the expense of investing in the much more challenging area of democratization. That the NLD was unable to field a single Muslim candidate in the hugely anticipated and celebrated 2015 elections should have raised a red flag for the West and given all embassies and donors pause for thought: If it did, they all certainly kept extremely quiet about it. Again, many people looking for easy indicators of success made a simple mistake in believing that the dramatic increase in the number of media outlets and civil society organizations since the transition was proof

9 https://en.wikipedia.org/wiki/88_Generation_Students_Group.
10 Economic History of Myanmar. http://factsanddetails.com/southeast-asia/Myanmar/sub5_5g/entry-3126.html.


of the increase in the levels of freedom of expression and association overall. And it was—so far as the majority Buddhist Bamar community was concerned. Beyond the Bamar majority, however, the degree to which there has been any kind of transition to democracy in terms of human rights and protections is a real question: Most certainly there has never been any benefit for the Muslim Rohingya, who have never been recognized or accepted by the majority. Even the term Rohingya itself is flatly rejected and deemed unacceptable, and to use it inside the country in public is to risk abuse and expose yourself to potential physical danger. In May 2016—more than a year before the ethnic cleansing and reported genocide started targeting them—Nobel Peace Prize Laureate Suu Kyi even went so far as to publicly ask the United States not to use the term ‘Rohingya’.11 Inside Myanmar they have long been referred to as Bengalis—that is, people from Bengal who do not belong and thus deserve no rights or protections. In March 2015, more than a year before Suu Kyi was finally condemned in the West for denying the universal right of the Rohingya to self-identify, philanthropist George Soros raised a red warning flag when he wrote a piece in Newsweek entitled ‘As a Jew in Budapest, I too was a Rohingya’.12 A long-standing supporter of the pro-democracy movement and a very frequent visitor to the country once it started opening up, he wrote back in January 2015 that ‘the parallels to the Nazi genocide are alarming’. ‘Fortunately’, he added, ‘we are not there yet.’ Around this time, IWPR was finalizing our very modest proposal to track, publicly monitor and report on hate speech in Myanmar, which we had been discussing with a particular donor over the previous 12 months. Our project, which was as much about public outreach, debate and education as it was about media monitoring, was finally approved in the summer of 2015 and launched in the final weeks of the country’s first fully free general election campaign in more than half a century. In the time between conceiving the project—in the wake of the first anti-Rohingya riots and the pitiful media coverage of events in 2012—and funding for our work finally being secured and the work launched, a

11 https://www.nytimes.com/2016/05/07/world/asia/myanmar-rohingya-aung-san-suu-kyi.html.
12 https://www.newsweek.com/soros-jew-budapest-i-too-was-rohingya-337443.


good deal had changed. Despite the celebratory mood and sense of expectation—even euphoria—among most observers and commentators in the run-up to the elections of November 2015, the trends, by contrast, were taking the country in a new and ultimately negative direction in terms of inter-communal relations. The key issues we found were that while the behaviour of local media had dramatically improved since 2012—in large part because of the level and quality of international media training given to local journalists—traditional media was becoming less and less relevant. Audiences were declining just as its quality was improving. Moreover, as mentioned above, a social revolution was accompanying the political transition in Myanmar society—and it all came courtesy of telecommunications—and Facebook. A very tangible element of the transition was the revolution in telecommunications inside the country that started in 2013/14 and saw 3G mobile networks introduced across the country. The price of a SIM card collapsed from more than $1500 to less than $1 in a matter of months, and smartphones costing just a few thousand kyats (less than $10) suddenly arrived. As a very frequent visitor to Myanmar around this time, I found that suddenly everybody in Myanmar seemed to be connected to—and speaking and texting with—each other: According to Ericsson,13 in 2015 Myanmar was the fourth fastest-growing cellphone market in the world—and today it seems that every second shop in Yangon is selling cellphones or their covers. Lest we forget, this is a society that for more than half a century had been brutalized and whose people had long been taught and warned to keep all opinions to themselves. Suddenly all that changed—and not only could people and communities talk among themselves, but the deliberate introduction of Facebook’s Free Basics programme meant they could talk to anybody and everybody. Facebook did deals with local mobile phone providers, which meant users did not need to pay for internet access or run up charges with their cellphone provider. Instead, the Facebook Free Basics programme, which was preinstalled on virtually all smartphones on sale in Myanmar, allowed users to freely access a very limited version of the internet just so long as they had a telecom signal. This was why so many commentators

13 https://www.mmtimes.com/business/technology/17727-myanmar-named-fourth-fastest-growing-mobile-market-in-the-world-by-ericsson.html.


have subsequently stated that in Myanmar the internet is Facebook and Facebook the internet. So in Myanmar, as the country was undergoing a huge transition from a totally closed society to an open one; moving from no expression to free expression; moving from a country and society totally closed off from the rest of the world to one that was suddenly wired up and totally inter-connected; given the reality of Myanmar’s abysmal education sector and teaching methods that have never known, let alone encouraged, any kind of critical thinking; given the fact that the country has long been an unhappy and forced Union of so many different ethnicities and religions; given a brutal past and the lack of any tradition of civics or civility—and given, finally, the lack of ethical leadership or direction from above—is it any wonder that when the $10 smartphone and Facebook suddenly appeared, the most vulnerable and least liked section of society would be targeted and demonized in the way they were? Just as in Yugoslavia in the early 1990s, sudden free expression—and the means to exploit it as a form of power—was used by skilled and strategic planners from the majority group in Myanmar to target and attack the Muslim Rohingya population. But equally, during our monitoring and reporting project14 IWPR saw hate speech effectively used by many different groups against ethnicities and communities that were simply different to their own. It is not then the case that the Muslim Rohingya were the only ones targeted: All communities—including the Bamar Buddhists themselves—were targeted by hate speech according to our own monitoring.15 The sad fact is that from 2014 to 2017 in Myanmar, IWPR found hate speech being used by many ethnic groups in their communication with ‘others’. The Rohingya just happened to be the ethnic group that was most ostracized and flatly rejected by the majority, given

14 https://www.facebook.com/NoHateSpeechProject/ (known in Burmese as Ah Mone Mae Sagar).
15 IWPR’s project established a network of 24 monitors (all Bamar Buddhists) across the country who worked over 18 months to collect, monitor and analyse media content as follows: (a) the most dominant themes or categories in identified hate speech narratives, in both traditional and online media sources; (b) the most dominant mediums or platforms—as well as the least used ones—for spreading hate speech; (c) the demographics of the community of netizens engaged around hate speech issues through our Facebook pages—and their direct links to hate speech-related events in Myanmar; (d) the actual ways and language used by citizens to engage hate speech purveyors; and (e) the local and international developments which impacted hate speech in Myanmar.


that nobody—including so many in the NLD—recognized their rights or accepted any responsibility to protect them. And yet to be fair, and as an illustration of how complicated the situation has been in Myanmar, the NLD and Suu Kyi herself have repeatedly been accused of being ‘Muslim Lovers’ by the extreme ultra-nationalist Buddhist movement, according to our own monitoring.16 During the first few weeks of IWPR’s monitoring, which formally started in January 2016, we saw how outlandish, creative and inflammatory some of the hate speech we found on Facebook was against Muslims in general and the Rohingya in particular.17 There were also repeated attempts to suggest ISIS was actively seeking to incite violence in Myanmar through them, and we recorded how popular such slurs and propaganda were. The hate speech—at least initially, and even though some of it suggested that Muslims and the Rohingya were the ‘enemy within’ plotting attacks on Buddhist cultural icons—did not appear to be coordinated. That seemed to happen over the course of 2016—and it certainly paralleled ISIS terrorist attacks in Europe and elsewhere. Certainly, after every international incident, such as the ISIS-inspired attack in Nice, we noted a flurry of hate speech in Myanmar. Over time, it seemed to become more coordinated and even militaristic:18 Hate speech against the Rohingya in particular (and Muslims in general) was becoming weaponized. We also noted and reported that there was a danger that the

16 NLD Under Attack by Nationalists on Social Media (FB post as monitored and analysed by IWPR’s Project). See post of 27 June 2017: https://www.facebook.com/NoHateSpeechProject/posts/812069418969272.
17 See Driving Man in IWPR’s project launch bulletin: https://www.facebook.com/NoHateSpeechProject/app/190322544333196/ and 17 February 2017, Mabatha Journal Preaches Racial Hatred Against Muslim Minority in Rakhine State.
18 Anti-Muslim Account Claims Muslims Have a Secret Plan to Get Rid of Myanmar.

Ibid. See post of 19 June 2017, Ma Ba Tha Journal Says Bengalis Will Swallow Up Buddhists Like ‘Water Hyacinth’. See additional posts of 19 June 2017 and 15 June 2017, Nationalist Account Seeks to Scare and Inflame by Sharing Videos of Muslim Girls Performing Martial Arts, and of 24 April 2017, Facebook Users Allege Muslims Use Arabic Schools to Recruit Mujahedeen.


peddling of falsehoods and myths and the claims made across Facebook of insurgent groups planning attacks could well end up creating their very own reality. In the end, the signs and the pathway towards the genocide and ethnic cleansing of the Rohingya were very clearly set and laid during 2016 and 2017.19 Moreover, this happened in full view of the international community and in a political and moral vacuum. We now know very well how social media and Facebook can be used to demonize and destabilize even the most stable of Western democracies—we should have been far more prepared in Myanmar for the confluence of factors, crowned by Facebook, and the storm that engulfed an entire race of people in the summer of 2017.

19 The assassination of the NLD’s legal adviser U Ko Ni, a Muslim, on 29 January 2017 at Yangon Airport—and the lack of an effective political response to it by Suu Kyi and other leaders—was, in IWPR’s opinion, a key tipping point in the building momentum towards the ethnic cleansing of the Rohingya. IWPR monitored many popular posts on Facebook that called for his assassins to be treated as national heroes.

Countering Disinformation

NATO Amidst Hybrid Warfare Threats: Effective Strategic Communications as a Tool Against Disinformation and Propaganda

Barbora Maronkova

Abstract Barbora Maronkova evaluates NATO’s strategy and implementation plans for countering hybrid warfare. This strategy comprises four main pillars: defence and deterrence with high-readiness forces in place, cyber defence, enhancing resilience through national civil preparedness and the protection of critical infrastructure, as well as strategic communications to fight disinformation and propaganda. Maronkova underlines the importance of proactive communication between NATO members in order to counter the “systemic use of disinformation, propaganda and fake news”. This is particularly the case with regard to Russia, which has deployed a number of propaganda attacks targeting its immediate neighbours and NATO. Maronkova explains how these attacks have led NATO to develop new initiatives to counter Russian disinformation, including countering propaganda with facts and information to debunk recurrent falsehoods.

Disclaimer: The views and opinions expressed in this article are the author’s own and do not represent NATO’s official position. B. Maronkova (B) NATO Information and Documentation Centre, Kyiv, Ukraine e-mail: [email protected] © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_9


She concludes by emphasizing the importance of proactive and transparent communication between NATO members in order to deter and defend against any threat which may arise.

Keywords Hybrid warfare · Russia · Globalization · Disinformation · NATO

All War is Based on Deception. Sun Tzu

Many military historians argue that the emergence of hybrid threats is nothing new, as deceptive methods in warfare have been used for as long as humankind has fought wars. Deception was historically employed by the disadvantaged side in order to achieve both strategic and tactical advantages on the battlefield. The twenty-first-century emergence of asymmetric, or hybrid, threats can be traced back to the first covert terrorist attacks conducted by non-state actors such as Hezbollah, followed by the Taliban in Afghanistan and the so-called Islamic State in Iraq and Syria (Hoffman 2009). A new stage in the hybrid warfare era began when a state actor, Russia, engaged in a well-planned and well-executed series of attacks in the winter and spring of 2014 against its neighbour, Ukraine. This included the illegal annexation of Crimea by the “little green men” (i.e. Russian troops without insignia) and a bogus referendum on the annexation, together with widespread propaganda and disinformation about attacks by Ukrainian nationalists on Russian-speaking citizens both in Crimea and Donbas, as well as a crude distortion of modern history and cyber-attacks combined with energy blackmail due to Ukraine’s dependency on Russian gas supplies (Maronkova 2018a). The new characteristics of the modern version of hybrid warfare are the technological advances of societies, globalization and the interconnection of key supply chains between countries. All of these have greatly enhanced the intensity of threats, requiring a complex set of answers to ensure efficient defence and deterrence. NATO acknowledged this need immediately after the events in Ukraine in early 2014. At the Wales Summit on 5 September 2014, NATO Allies


set out a number of areas for NATO to develop relevant policies as an effective response to hybrid warfare. NATO describes hybrid warfare as a “wide range of overt and covert military, paramilitary, and civilian measures employed in a highly integrated design” (NATO Wales Summit Declaration 2014). NATO Secretary General Jens Stoltenberg declared in March 2015 that NATO must be ready to deal with every aspect of this new reality from wherever it stems, “[and] that means we must look closely at how we prepare for; deter; and if necessary defend against hybrid warfare”. In order to be prepared, NATO must be able to observe and analyse what is happening; to see the patterns behind events, which appear isolated and random; and to quickly identify the actor or actors behind these events. Just as hybrid warfare tactics are a complex web of interlinked actions, so must the counter-strategy of defence and deterrence be. Various areas such as cybersecurity, situational awareness and countering disinformation must be addressed. At the Warsaw Summit in July 2016, NATO adopted a strategy and actionable implementation plans for its role in countering hybrid warfare. The primary responsibility to respond to hybrid threats or attacks rests with the targeted nation. NATO is prepared to assist an ally at any stage of a hybrid campaign. The North Atlantic Alliance and its allies are prepared to counter hybrid warfare as part of a collective defence. The North Atlantic Council could decide to invoke Article 5 of the Washington Treaty1 (Warsaw Summit Declaration). At its Brussels Summit on 11 July 2018, NATO made further advances in its hybrid warfare strategy by creating counter-hybrid support teams, which will provide tailored, targeted assistance to allies upon their request. This follows an earlier decision to set up similar cyber response teams. The exact details of how these teams will operate are still under deliberation. Hybrid warfare is by default complex and goes beyond national borders. This is why a comprehensive approach is pursued by NATO in working together with the European Union and its partner countries such as Finland, Sweden, Ukraine and others. NATO’s response to hybrid warfare rests on the following four pillars:

1 Article 5 of the Washington Treaty describes an attack on one Ally as an attack on all.


Fig. 1 Model of a counter-hybrid threats strategy (Source Maronkova 2018b, NATO Amidst Hybrid Warfare—Case Study of Effective Strategic Communications. Presentation at the workshop Disinformation, Online Falsehoods and Fake News, Singapore)

1. Defence and deterrence, in order to have high-readiness forces in place and credible deterrence on land, air and sea.
2. Cyber defence, to protect NATO and individual allies from cyber-attacks.
3. Resilience, to enhance national civil preparedness and ensure protection of critical infrastructure.
4. Strategic communications, to fight disinformation and propaganda.

These pillars are linked together by regular exercises and increased situational awareness (Maronkova 2018b) (Fig. 1).


Areas such as the economy and trade, including energy, can be important elements of a national strategy for individual nations. For NATO, a political-military organization, these areas remain outside its scope.2 According to the NATO Secretary General, hybrid warfare is a test of a country’s resolve to resist and to defend itself. It can also be a prelude to a more serious attack; behind every hybrid strategy, there are conventional forces, increasing the pressure and ready to exploit any opening. NATO and its partners need to demonstrate that they can and will act promptly whenever and wherever necessary (Stoltenberg 2015).

New Approach to Strategic Communications

How NATO Counters Russian Disinformation and Propaganda

The new security environment requires new approaches to communication; this is of particular relevance for an organization such as NATO, which recognizes the importance of proactive communication with its members. A significant element of the hybrid warfare conducted by the Russian Federation against its neighbours—Ukraine, Georgia, Moldova and the three NATO members in the Baltics—is the systemic use of disinformation, propaganda and fake news. From 2004 to 2014, Russia focused its propaganda attacks on its immediate neighbours; however, after the Revolution of Dignity in Ukraine in the winter of 2014 and the illegal and illegitimate annexation of Crimea, NATO and its allies also became targets. Individual experts, civil networks and non-governmental organizations have done plenty of documented research on Russia’s spread of propaganda across most NATO member states and partners. The purpose of this article is to highlight the campaigns that were specifically targeted at NATO and, in turn, NATO’s response to them. The most widespread narratives deployed by Russian official, semi-official and unofficial channels regarding NATO are:

• NATO deploys close to Russia’s borders, threatening strategic stability.
• NATO broke its promise not to expand eastward.

2 Energy security is considered by NATO an important element of national security primarily in the area of protection of critical infrastructure.


• Missile defence is aimed at Russia and undermines the strategic balance.

In a similar pattern, the NATO enlargement process and NATO’s potential member states were targeted by Russian propaganda and disinformation. Amongst the most vivid examples are the following narratives:

• “Montenegro is being dragged into NATO against the people’s will”—this narrative was often used during the 2016–2017 period of Montenegro’s accession talks with NATO. Russia expressed its opposition to Montenegro’s membership of NATO and threatened retaliatory measures against the tiny Balkan country. These included official statements by Russian officials before and after the accession talks, Russian media reports about NATO forcing Montenegro into membership, support by Russian spy agencies for an unsuccessful coup d’état attempt against the then Prime Minister of Montenegro in October 2016, financial support to pro-Russian and anti-NATO political parties, NGOs and media, and other actions aimed at discouraging Montenegro from NATO membership (Bajrovic, Garcevic and Kraemer 2018; Brunnstorm 2017).
• A similar narrative is now being used against the 2018 deal between Skopje and Athens, which would see Macedonia3 become known as the Republic of North Macedonia and become the 30th member of the Alliance. Russian officials have publicly stated their opposition to further NATO enlargement in the Balkans. Domestic and foreign leaders feared possible Russian meddling in the Macedonian referendum on the name issue that took place on 30 September 2018. Russian media reports quoted Russian officials saying that the results of the referendum were not valid and that NATO had dragged Skopje into its orbit (Marusic 2018; Metodieva 2018).

The NATO Press Office has in the past four years documented an increased international media interest in NATO (by 300% from the pre-2014 level). There has also been an increase in the number of false stories and distorted headlines from both Russian and Western media. Key NATO social media accounts have also seen cyber and troll attacks,

3 Turkey recognizes the Republic of Macedonia by its constitutional name.


with as many as 10,000 bot accounts following the Twitter account of the NATO spokesperson. This has led NATO to refocus its communication activities and objectives, sharpen its strategic communication system and develop new initiatives in order to counter Russian disinformation and propaganda. NATO’s approach is to counter propaganda not with its own propaganda but with facts and information. To debunk Russia’s most recurrent myths, NATO created a dedicated webpage titled “Setting the Record Straight”. This site contains factual information, videos, infographics and maps that help to expose the misleading information often spread about NATO. As an example, to address the above-mentioned narrative of NATO encircling Russia, the team prepared a geographical map showing that, of Russia’s roughly 20,000 km of borders with 14 countries, only five NATO members share a border with Russia, representing only about 1/16 of Russia’s overall border length (Fig. 2). Countering hybrid threats cannot be done alone; it must involve the cooperation of other partners. That is why NATO has undertaken additional initiatives with other international organizations and actors to improve its situational awareness and share knowledge and best practices.

Fig. 2 Map of Russian Federation and its borders (Source NATO Website. NATO Setting the Record Straight, www.nato.int/cps/ra/natohq/115204.htm. Accessed on 5 October 2018)


Besides the set of measures stemming from the joint NATO-EU declaration, NATO is also actively engaged with the NATO Centre of Excellence on Strategic Communications in Riga, Latvia, and the NATO Centre of Excellence on Cyber Defence in Tallinn, Estonia, and is a member of the European Centre of Excellence for Countering Hybrid Threats in Helsinki, Finland. NATO provides assistance to, and carries out consultations with, a number of partner countries that are particularly affected by Russian hybrid warfare and disinformation, such as Ukraine, Georgia and Moldova, in addition to partners who have had experience with building strong resilience, such as Finland and Sweden. It provides platforms enabling practical exchanges of information and best practices in countering Russian propaganda, such as the Hybrid Warfare Platform established between NATO and Ukraine.

Sharpening Strategic Communications

Both NATO’s policymakers and communicators understand the importance of well-coordinated communication that accompanies all strategic decision-making. NATO has developed its own robust Strategic Communications system—implemented within NATO and used in the daily advancement of its political and operational priorities. In addition to NATO’s Strategic Communications, individual allies have established their own national systems and processes that reflect their national realities and priorities. Robust and well-functioning strategic communication is an important element in the fight against propaganda and disinformation. NATO defines strategic communications as “the coordinated and appropriate use of NATO communications activities and capabilities in support of Alliance policies, operations and activities, and in order to advance NATO’s aims” (NATO Centre of Excellence on Strategic Communication 2010). Taking into account the complexity of NATO’s structure, a functioning system of coordinated messaging stemming from agreed and approved political decisions is one of the most important objectives of its strategic communication efforts. In order to use its resources in the most impactful ways, the Public Diplomacy Division at NATO Headquarters in Brussels has adopted new approaches to its communication by linking communication campaigns to concrete policy objectives. NATO also introduced programme evaluation and advanced its information environment assessment (IEA). These efforts are coordinated with the strategic communication activities of


NATO’s military command structures and the wider NATO family, and are linked together through a variety of both formal and informal networks, operational-level working groups and strategic policy boards. IEA is an important new element of strategic communications that has emerged as a tool in response to disinformation and propaganda. Its capabilities are now being developed across many of NATO’s allies and partners as well as at NATO itself. IEA helps to assess the information environment in order to contribute to indications and early warnings of hybrid activity and to improve the Alliance’s own communications planning and execution. IEA is able to develop an understanding of the information space, which is overcrowded and in constant flux due to recent advances in technology. In a dense information space, competing narratives and voices fight for the attention of recipients of information. In the era of disinformation, this capability is more important than ever before. By monitoring, reporting and analysing both friendly and adversarial activities and intentions in real time, it can provide a useful indication of the information environment in which an organization operates and thus help it to adjust its communication posture. Similar to a classical communication campaign structure, IEA focuses on setting SMART4 objectives, identifying key narratives/themes/messages, choosing its target audiences and then monitoring and identifying main channels of communication (Lithuanian Armed Forces 2017). These channels can be as diverse as an organization wishes or is able to cover, from mainstream media to obscure websites, various social media channels, chat rooms and other online platforms. By collecting information and data over a set period of time, important trends can be identified. One well-researched NATO example offers insights into both friendly and hostile information environments in connection with NATO’s decision at the Warsaw Summit in 2016 to deploy four multinational battalion-size battlegroups to Poland and the Baltic states as part of NATO’s strengthened defence and deterrence posture. On 15 October 2017, the Digital Forensic Research Lab (DFRLab), a project of the US-based Atlantic Council specializing in tracking and monitoring disinformation, warned about a massive disinformation

4 SMART objectives—Specific, Measurable, Actionable, Relevant and Time bound.


campaign regarding the upcoming US deployment to Poland as part of NATO’s Enhanced Forward Presence. RIA Novosti reported, based on a statement by the Russian Ministry of Defence’s spokesperson: “Amid the hysteria over Russia’s planned military incursion right from the Zapad-2017 drills, the 2nd Armored Division of the US arrived quietly in Poland and was deployed there [Boleslawiec, Drawsko Pomorskie, Torun, Skwierzyna, Zagan] with its armored vehicles… Contrary to the NATO and the US statements about the ‘insignificance’ of the troops being pulled towards the Russian border, there is now a de facto US Armed Forces division, not a brigade.” (Barojan 2017) In reality, the United States deployed a brigade combat team, which usually consists of 1500–3500 soldiers, whilst a division consists of at least 10,000 troops. The U.S. Army Europe Command has published a publicly available fact sheet providing exact information about the planned deployment. The DFRLab also conducted detailed research relating to Russian narratives about the deployment of NATO forces to Poland and the Baltic states within the Enhanced Forward Presence. From February to March 2017, the DFRLab monitored media coverage in Estonian, Latvian, Lithuanian, Polish and Russian (Fig. 3). Several major negative narratives were detected, such as:

Fig. 3 Overview of most frequent Russian narratives about NATO’s deployment in the Baltics (Source Nimmo, Ben. 2017. Russian Narratives on NATO’s Deployment. www.stopfake.org/en/russian-narratives-on-nato-s-deploy ment/. Accessed on 14 October 2018)


• NATO is unwelcome.
• NATO is supporting terrorism.
• NATO cannot protect the Baltic States.

The analysis concluded that, despite a relatively low overall impact, some of the narratives could have had a negative influence on local populations. Such analysis is extremely useful to NATO and to the individual nations that are both hosting NATO troops and deploying their own. It allows them to better prepare their own communication efforts, re-adjust narratives and key messages where necessary and monitor their information sphere. The IEA capability at NATO and in individual member states continues to be adapted and improved as more and more analyses, reports and trends are provided to senior leadership on both the political and military sides and fed into the decision-making process.

Conclusion

Proactive and transparent communication is key to successfully countering disinformation and propaganda. A better understanding of the information environment in which international organizations and national governments operate can contribute to more effective strategic communication and decision-making. NATO has been an essential source of stability in an increasingly unpredictable world for the past 70 years. The greatest responsibility of the Alliance is to protect and defend its territory and its populations against any attack, as set out in Article 5 of the North Atlantic Treaty. Faced with a highly diverse, complex and demanding international security environment, NATO maintains the full range of capabilities necessary to deter and defend against any threat to the safety and security of our populations, whatever form such threats might take—including hybrid.

References

Barojan, Donara. 2017. Disinformation Deployed Against ‘Atlantic Resolve’. https://medium.com/dfrlab/disinformation-deployed-against-nato-enhanced-forward-presence-c4223f6d7466. Accessed on 16 October 2018.
Bajrovic, Reuf, Garcevic, Vesko, and Kraemer, Richard. 2018. Hanging by a Thread: Russia’s Strategy of Destabilization in Montenegro. Foreign Policy Research Institute. https://www.fpri.org/wp-content/uploads/2018/07/kraemer-rfp5.pdf. Accessed on 12 October 2018.
Brunnstorm, David. 2017. Russia Threatens Retaliation as Montenegro Becomes 29th Member of NATO. Reuters. https://www.reuters.com/article/us-usa-nato-montenegro/russia-threatens-retaliation-as-montenegro-becomes-29th-nato-member-idUSKBN18W2WS. Accessed on 4 November 2018.
Hoffman, F. G. 2009. Hybrid Warfare and Challenges. JFQ, Issue 52, first quarter.
Lithuanian Armed Forces. 2017. Training to Ukrainian Officials on Development of IEA Capabilities, Kyiv, Ukraine.
Maronkova, Barbora. 2018a. NATO in the New Hybrid Warfare Environment. UA: Ukraine Analytica, Issue 1(11).
Maronkova, Barbora. 2018b. NATO Amidst Hybrid Warfare—Case Study of Effective Strategic Communications. Presentation at the workshop Disinformation, Online Falsehoods and Fake News, Singapore.
Marusic, Sinisa Jakov. 2018. Macedonia Dismisses Russian ‘Threats’ to Name Deal. Balkan Insight. http://www.balkaninsight.com/en/article/macedonia-dismisses-russian-threat-on-name-agreement-10-03-2018. Accessed on 4 November 2018.
Metodieva, Asya. 2018. How Disinformation Harmed the Referendum in Macedonia. Blog Post, GMFUS. http://www.gmfus.org/blog/2018/10/02/how-disinformation-harmed-referendum-macedonia. Accessed on 4 November 2018.
NATO Website. Setting the Record Straight. www.nato.int/cps/ra/natohq/115204.htm. Accessed on 5 October 2018.
NATO Website. Wales Summit Declaration. www.nato.int/cps/ic/natohq/official_texts_112964.htm. Accessed on 15 October 2018.
NATO Website. Warsaw Summit Declaration. www.nato.int/cps/en/natohq/official_texts_133169.htm?selectedLocale=en. Accessed on 15 October 2018.
NATO Website. Brussels Summit Declaration. https://www.nato.int/cps/ic/natohq/official_texts_156624.htm. Accessed on 15 October 2018.
NATO Stratcom Centre of Excellence. Definition of Strategic Communications. https://www.stratcomcoe.org/about-strategic-communications. Accessed on 8 October 2018.
Nimmo, Ben. 2017. Russian Narratives on NATO’s Deployment. www.stopfake.org/en/russian-narratives-on-nato-s-deployment/. Accessed on 14 October 2018.
Stoltenberg, Jens. 2015. Keynote Speech at Allied Command Transformation Seminar. www.nato.int/cps/ic/natohq/opinions_118435.htm. Accessed on 11 October 2018.
Washington Post. 2018. Macedonia Is a Tiny Country with a Giant Russia Problem. https://www.washingtonpost.com/opinions/global-opinions/macedonia-is-a-tiny-country-with-a-giant-russia-problem/2018/09/20/47a674d2-bb6b-11e8-a8aa-860695e7f3fc_story.html. Accessed on 4 November 2018.

Elves vs Trolls

Giedrius Sakalauskas

Abstract Giedrius Sakalauskas describes his experience participating in a group of Lithuanian activists who refer to themselves as “elves” in the fight against various falsehoods spread on social media by Russian internet trolls (hostile social media accounts). Sakalauskas details the four different types of elves which emerged to counter these online attacks, and their respective roles in finding hostile propaganda and spreading positive news about Lithuania, the European Union, and NATO. Using examples from various campaigns, he explains the key principle in the elves’ fight against Russian trolls: never counter lies with more lies, but instead expose and debunk them.

Keywords Trolls · Campaign · Activist · Online attack · Social media

In 2014, Russia attacked Ukraine and occupied parts of its territory. Meanwhile, a constant stream of online Russian propaganda was being circulated in Lithuania. This prompted a small group of volunteers to act, and they began their fight against so-called Russian internet trolls, countering Kremlin propaganda and disinformation on the internet.

G. Sakalauskas (B) Res Publica - Civic Resilience Center, Vilnius, Lithuania © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_10


Most online activists were already participating in some online groups fighting the trolls on Facebook and on the main Lithuanian news portals, addressing various lies being spread by Russia. We needed to call our group of activists something, but what to name it? Well, we were fighting trolls. So somebody suggested, “Let’s be elves.”

Elves’ Rules

• Elves do not exist.
• Do not love yourself in the elves but love the elf in yourself.
• Elves are never alone on the battlefield.

There were about 40 elves at first, when the trolls began a targeted campaign of leaving nasty comments about the Lithuanian government and society, usually pegged to a hatred of NATO, the European Union and, of course, the United States. It is not easy to calculate how many elves we have today, as there are different tribes of elves; for example, the following main types of elf groups fight on Facebook.

– Debunkers. Finding and debunking hostile propaganda, fake news, and fake information.
– Blame and shame. Fighting trolls and their information using humor, irony, and sarcasm as weapons.
– Troll killers. They look at the key words used by trolls in their posts and report them to Facebook when they violate Facebook rules, in order to stop them from poisoning people’s minds.
– Motivators. They spread positive news about Lithuania, the EU, and NATO in order to make people proud of their own country and of the international organizations in which Lithuania participates.

Some of these groups are open, while others are closed or labeled as “secret”. The elves do not know each other’s real names or meet up in person; more often than not, they remain anonymous, in order to avoid being identified by the other side. But when we have a target, we are together. At the beginning of our activities, the main battlefield was in the comment sections of the five main Lithuanian news portals (delfi.lt, ru.delfi.lt,


15min.lt, alfa.lt and lrytas.lt). Here, trolls released a constant flow of falsehoods and complaints, each comment helping to construct an alternate-reality version of life in Lithuania. Some of the main narratives about Lithuania propagated online by trolls include claims that the country is a domestic failure as a result of Western integration, that pensions are low and prices high because of the euro, that it is not an independent state and that it is Russophobic. Negative comments by trolls were quickly diluted by the massive response of elves posting positive comments. The main principle was to never use lies against lies. Of course Lithuania has its problems; however, our only focus was to expose the lies. Fortunately, society caught on to the trend, making the daily workflow for the elves much lighter. There was no longer a pressing need to rebut each and every negative comment; ordinary people joined the online battlefield, taking on the trolls themselves. Currently the main battles are happening on the most popular social network in Lithuania—Facebook. No one knows how many trolls are polluting the Lithuanian Facebook, or how many are single persons posing under different handles as multiple people. Online armies can only be met with opposing armies. The elves’ activities, launched by a small group of enthusiasts, have now grown into a massive online network of over 4000 activists who participate as needed. The elves’ army is made up of different Facebook groups and individuals who form blockchain-like structures that are not centrally managed and where each member decides whether to participate in each individual action. Individual people connect the groups, and information is disseminated when needed for the good of Lithuanian society, moving on the principle of the network. Of course there are a few active members on the network who initiate action by themselves, but the majority are activated on a message-by-message basis. Today, the aggregate force of individual elves is a considerable resource for responding to any action against Lithuania, be it Russian troll attacks on social media and comment sections or the need to support the dissemination of positive messages on the internet. Additionally, they act as the eyes and ears of the network to detect and respond to all kinds of anti-Lithuanian activities. In 2015, Stratcom of the Lithuanian Armed Forces eventually noticed the elves and was impressed. They shared this news with colleagues at the NATO summit in Riga, explaining the elves phenomenon as a new


breed of partisan resistance fighters for the twenty-first century. Eventually reports of this discussion leaked to the Baltic media. It was “a big shock” that the world’s largest military alliance had discovered the elves. The not-so-pleasant surprise was losing the elves’ most prized possession—anonymity. However, the coverage gave the elves a chance to promote themselves in the public sphere and spread the message about the elves movement internationally. They decided to elect a number of public speakers to cooperate with media outlets and participate in international events such as forums, conferences, and training exercises. The best defence is offence. In 2017 a group of elves mounted a campaign against Sputnik. We visited Sputnik’s international Facebook group and spread memes, posting the hashtag #MomentOfTruth and rating their page. The result was immediately picked up by Ukrainian and Lithuanian news portals. In six days Sputnik’s rating went from 4.3 to 2.6, and on the last day of the month they closed down the page’s rating system. Just before the FIFA World Cup, a group of elves mounted a successful international campaign against Adidas.1 The campaign was designed to stop the production of t-shirts with Soviet insignia, some of which were being sold by Adidas online as well as on the website of the popular US chain department store Walmart.2 We created a campaign called #StopAdidas using an image from Lithuanian Stratcom, distributing the hashtag #StopAdidas and relevant memes to all of the different elf tribes.

The #StopAdidas campaign timeline:

5 May, 13:07. The Lithuanian Stratcom of the Ministry of Foreign Affairs shared on Twitter the news that Adidas had placed products with the Soviet Union’s symbols on its international e-shop.
6 May, 09:21. Lithuanian elves created the hashtag #StopAdidas on Facebook, calling their tribes to action. The elves began visiting Adidas’ Facebook page and posting their hashtag.
6 May, 22:09. An invitation was posted on Facebook calling on foreign activists to join the campaign.
7 May. Groups of like-minded Ukrainian activists joined in on the action.

1 https://www.boredpanda.com/adidas-ussr-themed-sports-collection/collection.
2 https://www.nytimes.com/2018/09/19/world/europe/walmart-soviet-shirts-lithuania.html.


7 May, evening. The German sports news agency SID announced that Adidas was removing the product with the Soviet Union’s symbols from its international e-shop—a small but tangible victory.

The #StopAdidas campaign statistics: This campaign resulted in more than 6000 #StopAdidas comments on the Adidas Facebook page. The vast majority of these comments originated from Lithuania and Ukraine, with a small number from other countries. Twitter had about 5900 tweets with #StopAdidas. A Google search revealed 116 articles from 19 countries about the campaign: Ukraine—55, Russia—17, Latvia—9, Lithuania—6; USA, Mexico, Estonia, Poland, Georgia and Germany—3 each; Austria and Spain—2 each; United Kingdom, Czech Republic, Argentina, France, Venezuela, Romania and Finland—1 each.


The Lithuanian elves movement has now become popular, and many individuals want to connect with this virtual community and contribute to the network’s efforts. Today it is no longer possible to measure the size of the group, but the need to do so has disappeared because it has become something much bigger and more powerful than initially planned. It has morphed into an independent and self-styled force on the internet in Lithuania, which can no longer be stopped by the disappearance of individual members. The question as to whether this type of movement could be hijacked by Russian trolls or local extremist groups will always exist. Such risks are limited, however, by the fact that no single member can commit the network to an action: each member decides for themselves whether or not to participate. To support a good initiative, you do not necessarily have to consider yourself an elf, a dwarf, or some other type of superhero; having your heart in the right place is enough.

References Giedrius Sakalauskas: “Training Program on September 20 in Prague Modern Mythology 2018”. Michael Weiss: “The Baltic Elves Taking on Pro-Russian Trolls”, The Daily Beast, 20 March 2016. Priit Talv: “The Lithuanian Elves Movement as a Patriotic Block Chain”, Propastop, 19 October 2019.

Fake News and Disinformation: Singapore Perspectives

Shashi Jayakumar, Benjamin Ang, and Nur Diyanah Anwar

Abstract Shashi Jayakumar, Benjamin Ang and Nur Diyanah Anwar analyse Singapore’s efforts to combat online falsehoods and disinformation. They highlight the recently passed Protection from Online Falsehoods and Manipulation Act (POFMA), which makes it “an offence to intentionally communicate a false statement of fact, with the knowledge that it would cause the harms listed”, and which uniquely enables the authorities to compel social media platforms to publish rebuttals of specific falsehoods, called “Targeted Corrections Directions”. Additionally, they highlight the non-legal initiatives the Singapore government has taken to counter disinformation and safeguard social cohesion, via programmes such as the National Framework on Information, Media and Cyber Literacy and S.U.R.E. These create guidelines for public organizations to spot fake news, and fact-checking resources teaching individuals to better judge the reliability of news sources. Their chapter concludes by highlighting the need to review the strategies currently used, and how laws, programmes and policies must be future-proof.

S. Jayakumar (B) · B. Ang · N. D. Anwar S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore, Singapore e-mail: [email protected] B. Ang e-mail: [email protected] © The Author(s) 2021 S. Jayakumar et al. (eds.), Disinformation and Fake News, https://doi.org/10.1007/978-981-15-5876-4_11


Keywords Online falsehoods · Disinformation · POFMA · Initiative · Legislation

Introduction

This chapter seeks to present and analyse key aspects of the “Singapore Way” of combatting online falsehoods and disinformation. It tracks recent developments locally, including the passage of the Protection from Online Falsehoods and Manipulation Act (POFMA). While not offering silver bullet prescriptions, the chapter lays out how and why measures (both legal and non-legal) taken in Singapore to combat online falsehoods and disinformation must be seen in the context of a wider international ecosystem, with various governments and international bodies attempting to grapple with the issue. Deliberate untruths, distortions and rumours have always been present in societies. These can be malicious falsehoods and rumours knowingly distributed to undermine a nation (by actors working either from within or without); falsehoods and rumours propagated without a broad political aim, with or without malicious intent, that achieve viral status; falsehoods used in parody, satire, or seemingly humorous pieces; and finally, falsehoods distributed for financial gain.1 It is technology, and the advent of social media in particular, that has given propagandists and experts in mass persuasion an attack surface that their counterparts of an earlier age could only have dreamed of. States can leverage social media to mount sophisticated disinformation campaigns and influence operations in other states. For example, bots can influence public opinion and discourse by manipulating social media algorithms and causing specific content to trend, or by inundating social media hashtags with repeated and automated messages. Botnets were used, for example, by Russian-linked entities in the lead-up to and during the 2016 US Presidential campaign,

1 For an unpacking and exploration of these, see Norman Vasu, Benjamin Ang, Terri-Anne Teo, Shashi Jayakumar, Muhammad Faizal, and Juhi Ahuja, Fake News: National Security in the Post-Truth Era, RSIS Policy Report, January 2018. https://www.rsis.edu.sg/wp-content/uploads/2018/01/PR180313_Fake-News_WEB.pdf.


which saw the amplification of partisan and polarizing content which could have influenced the outcome of the elections.2

The Southeast Asian and Singapore Contexts

Besides Russian interference in the 2016 US elections and fresh attempts to meddle with the November 2018 mid-term elections,3 alleged attempts by Russia to interfere in the internal affairs of European countries have been widely documented.4 But there is a growing realization that fake news and disinformation are not issues that only the West has had to contend with. In Southeast Asia, it has become increasingly recognized that fake news poses potential national security risks too—besides rumours and supposition, fraught electoral campaigns may create conducive conditions for online political manipulation, either by domestic actors attempting to secure a particular outcome, or by entities further afield. Online hoax campaigns have plagued Indonesia’s high-profile national and regional elections for most of the present decade. In the lead-up to the 2012 gubernatorial election in Jakarta, Joko Widodo (“Jokowi”) and his running mate Basuki Tjahaja Purnama (known as “Ahok”) were the subjects of such campaigns. Ahok’s ethnicity (Chinese) and his religion (Christianity) made him a subject of various online smears—both he and Jokowi (a Javanese Muslim) were accused of being communists, with Jokowi’s Javanese Muslim identity questioned and he and his family

2 See for example Scott Shane, ‘The Fake Americans Russia Created to Influence the Election’, The New York Times, 7 September 2017. https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html; Gabe O’Connor, ‘How Russian Twitter Bots Pumped Out Fake News During the 2016 Election’, NPR, 3 April 2017. https://www.npr.org/sections/alltechconsidered/2017/04/03/522503844/how-russian-twitter-bots-pumped-out-fake-news-during-the-2016-election.
3 In July 2018, Facebook announced it had identified a political influence campaign that was potentially built to disrupt the midterm elections, removing dozens of profiles and accounts attempting to foment social divisions. This campaign bore the hallmarks of the Kremlin-linked Internet Research Agency, allegedly also responsible for attempts to influence the 2016 US presidential election.
4 There is a vast, and still growing, literature on this topic. See generally Keir Giles, ‘Russia’s “New” Tools for Confronting the West: Continuity and Innovation in Moscow’s Exercise of Power’, Chatham House, March 2016. See also the reports regularly published by the European Values Think Tank. https://www.kremlinwatch.eu/#briefing.


labelled as “Chinese” and “Christians”.5 Five years later, online campaigns featuring viral rumours and untruths polarized public opinion in the 2017 Jakarta gubernatorial elections and are believed to have played a part in the defeat of Ahok, who was standing for the governorship.6 Organized attempts to smear candidates were also present in the 2019 Indonesian presidential campaign, with some of the messaging threatening to widen existing cleavages in society. Some of the activity, carried out by “buzzers” (influencers and informal campaigners who can spread hashtags and viral messages quickly), appeared to be linked to the circles of the candidates themselves (including Jokowi, the eventual winner), or to their proxies.7 There are occasional suggestions that organized entities, including states, may coordinate disinformation campaigns with the aim of influencing societies, and election results, elsewhere.8 Consider the baffling and still-unexplained rise of a Twitter bot army across several parts of Southeast Asia in early 2018.9 Its appearance may or may not have been tied to the general elections taking place in several Southeast Asian nations at around that time. The Malaysian Twittersphere, for example, was flooded by bot activity spewing out pro-government and anti-Opposition

5 ‘Saracen Hanya Satu dari Ribuan Kelompok Penyebar Hate Speech’ (Saracen is only one of thousands of groups spreading hate speech), Kompasiana, 28 August 2017. https://www.kompasiana.com/opajappy/saracen-hanyasatu-dari-ribuan-kelompok-penyebar-hate-speach_59a41c18d59a26574176a002; see also ‘Video: Jokowi Cina-Kristen? Ini Buktinya’ (Video: Jokowi is Chinese-Christian? Here’s proof), Tribun News, 12 June 2014. http://www.tribunnews.com/nasional/2014/06/12/video-jokowi-cina-kristen-ini-buktinya. 6 Merlyna Lim, ‘Beyond Fake News: Social Media and Market-Driven Political Campaigns’, The Conversation, 5 September 2017. https://theconversation.com/beyondfake-news-social-media-and-market-driven-politicalcampaigns-78346. 7 Ross Tapsell, ‘When They Go Low, We Go Lower’, The New York Times, 16 April 2019. https://www.nytimes.com/2019/04/16/opinion/indonesia-election-fakenews.html; ‘Backstory: Hunting for Fake News and Trolls in Indonesia’s Elections’, Reuters, 29 April 2019. https://www.reuters.com/article/us-indonesia-election-backstory/backstory-hunting-for-fake-news-and-trolls-in-indonesias-elections-idUSKCN1S50KA. 8 G. Haciyakupoglu, ‘Southeast Asia’s Battle Against Disinformation’, The Diplomat, 12 February 2019. https://thediplomat.com/2019/02/southeast-asias-battle-against-disinformation/. 9 Jon Russell, ‘Twitter Doesn’t Care That Someone Is Building a Bot Army in Southeast Asia’, TechCrunch, 20 April 2018. https://techcrunch.com/2018/04/20/twitter-doesntcare-that-someone-is-building-a-bot-army-in-southeast-asia/.


messages.10 While these do not seem to have had a pivotal effect on the eventual vote tallies, the scale and speed at which a more calculated (and better disguised) influence campaign could spread disinformation and fake news online should give us pause for thought.

Singapore

The potential fault lines in Singapore are many (ethnicity and religion, to name two), but compounding this is how technologically dependent, connected and “wired” Singapore has become in the course of its Smart Nation push. As of January 2018, about 84% of Singaporeans, or 4.83 million people, used the Internet.11 Singapore has in the past seen websites intentionally purveying fake news that could seed dissension and division. One example was the website The Real Singapore (TRS). The couple behind the website were charged and jailed in 2016 for deliberately sowing discord between Singaporeans and foreigners through spurious (but eyeball-catching) content on the website.12 The viewership in turn proved extremely lucrative for the couple in terms of the ad revenue generated.13

Concerns over the potential of fake news to exacerbate and widen existing social fault lines in Singapore’s multicultural society became especially pronounced from early 2017.14 As Home Affairs and Law Minister K. Shanmugam observed on 19 June 2017, “it [fake news and misinformation] undermines the very fundamentals of a democratic

10 ‘#BotSpot: Bots Target Malaysian Elections’, DFRLab, 21 April 2018. https://medium.com/dfrlab/botspot-bots-target-malaysian-elections-785a3c25645b. 11 ‘4.83 Million Singaporeans Are Now Online’, Singapore Business Review, 30 January 2018. https://sbr.com.sg/information-technology/news/483-million-singaporeans-are-now-online. Internet user penetration rates are expected to increase to 97% in 2023. https://www.statista.com/statistics/975069/internet-penetration-rate-in-singapore/.


society. It undermines the media. It undermines trust in government. It undermines what the truth is. It spreads fear and panic.”15

In January 2018, the Ministry of Law and the Ministry of Communications and Information published a Green Paper, “Deliberate Online Falsehoods: Challenges and Implications”, detailing the threat of fake news in various states, the responses of those states, and past examples of fake news in Singapore.16 The Green Paper made it clear that deliberate online falsehoods represented a serious threat to the body politic, noting further that the government would ask Parliament to set up a Select Committee to look into the issue and make recommendations. The very formation of the Select Committee should be taken as an indication of the gravity with which the issue was viewed; one has to look back to 1994 for the last occasion on which the government set up a Select Committee to give policy recommendations. The 2018 Select Committee, whose remit was to “study the problem of deliberate online falsehoods, and to recommend how Singapore should respond”,17 boasted several high-ranking and influential individuals—besides K. Shanmugam, the chair was a longstanding and respected backbencher from the ruling party, Charles Chong. Also among the members were office holders Desmond Lee, Janil Puthucheary, and Edwin Tong. It should also be observed that an Opposition member from the Workers’ Party, Pritam Singh, was appointed, perhaps a sign that the issue was of a magnitude requiring broad cross-parliamentary support and consensus.

The Select Committee held public hearings over eight days in March 2018, hearing testimony from 65 individuals and organizations (170 written submissions were also received). The oral testimony had several highlights, chiefly the long and testy exchange between Minister

15 ‘Battle Against Fake News: Rise of the Truth-Seekers’, The Straits Times, 25 June 2017. https://www.straitstimes.com/opinion/battle-against-fake-news-rise-ofthe-truth-seekers. 16 Deliberate Online Falsehoods: Challenges and Implications, Green Paper by the Ministry of Communications and Information and the Ministry of Law, Misc. 10 of 2018, Presented to Parliament by the Minister for Law, 5 January 2018. https://www.mlaw.gov.sg/files/news/press-releases/2018/01/Annexe%20A%20-%20Green%20Paper%20on%20Deliberate%20Online%20Falsehoods.pdf. 17 https://www.mlaw.gov.sg/news/press-releases/select-committee-deliberate-online-falsehoods.


Shanmugam and Simon Milner, vice-president of public policy for Facebook’s Asia-Pacific operations, on 22 March.18 The general feeling was not just that Facebook had come off second best; the somewhat truculent attitude evidenced by Milner on that occasion appears to have reinforced the impression on the part of government that it could not look to the big platforms to regulate themselves when it came to fake news and controlling disinformation.19

In its report, published on 20 September 2018, the Select Committee observed that falsehoods spread by foreign state (or non-state) actors are used to undermine particular domestic or foreign policies, discredit or delegitimize public institutions and individuals, influence election outcomes, sow discord among communities and groups by polarizing political discourse, and fracture society’s shared sense of reality so as to weaken the country’s resilience to foreign influence or aggression.20 Noting that “there is no one silver bullet to combat this complex problem, and a multi-pronged approach is necessary”, the Committee made several recommendations. Besides the need for legal approaches, the recommendations included nurturing an informed public, reinforcing social cohesion and trust, and promoting fact-checking. Two further noteworthy recommendations were the need for the government to “come up with a coordinated approach to tackle disinformation from state-sponsored operations”,21 and the need for social media and

18 For a flavour, see ‘“We Made a Wrong Call”: Facebook Says It Should Have Informed Users Earlier on Cambridge Analytica Breach’, ChannelNewsAsia, 22 March 2018. https://www.channelnewsasia.com/news/singapore/we-made-a-wrong-callfacebook-says-it-should-have-informed-users-10067144. Milner had earlier testified in February before the UK Digital, Culture, Media and Sport Committee examining these issues. See https://www.c-span.org/video/?440521-7/british-committee-hearingfake-news-facebook-panel. 19 Mark Cenite, ‘Commentary: Someone Needs to Do Something About Facebook—But What?’, ChannelNewsAsia, 23 November 2018. https://www.channelnewsasia.com/news/commentary/deliberate-online-falsehoods-fake-news-facebook-select-committee-10957450. 20 Report of the Select Committee on Deliberate Online Falsehoods—Causes, Consequences and Countermeasures. Presented to Parliament on 19 September 2018. https://sprs.parl.gov.sg/selectcommittee/selectcommittee/download?id=1&type=subReport. 21 ‘Select Committee on Fake News: Summary of Panel’s 22 Suggestions’, The Straits Times, 20 September 2018. https://www.straitstimes.com/politics/select-committee-onfake-news-summary-of-panels-22-suggestions.


technology companies to prioritize credible content, delegitimize and close accounts spreading falsehoods, and work together with the government to develop solutions and guidelines.

Singapore’s Protection from Online Falsehoods and Manipulation Act

While the scope of online falsehoods can be immense, it is clear from parliamentary debates that Singapore’s lawmakers chose to focus on the most egregious strain: false statements of fact spread with malicious intent to cause unrest or undermine public confidence in public institutions. Ministers have reiterated several times that the law is not intended to stifle academic research, opinions, satire, hypotheses or theories.22 The lawmakers also wanted tools to compel social media platforms to increase the visibility of corrections, disrupt fake accounts, discredit the online sources of falsehoods, and cut off the financial incentives of those sources.23 Singapore’s Parliament passed the Protection from Online Falsehoods and Manipulation Act (POFMA) on 8 May 2019.24 Overall, the approach has been a calibrated one. This can be seen in the sections of POFMA that make it an offence to intentionally communicate a false statement of fact with the knowledge that it would cause the harms listed,25 that make it an offence to make or provide tools (bots) or services (trolls) for the same, and, most distinctively, that provide for various Directions.

22 ‘Why Academics Should Not Fear Online Falsehood Law’, The Straits Times, 9 May 2019. https://www.straitstimes.com/opinion/why-academics-should-not-fear-onlinefalsehood-law. 23 ‘Measures Targeting Online Falsehoods Aim to “Remedy” Impact, “Not Punish Wrongdoers”: Edwin Tong’, ChannelNewsAsia, 7 May 2019. https://www.channelnewsasia.com/news/singapore/online-falsehoods-bill-aim-to-remedy-impact-edwin-tong-11511674. 24 Protection from Online Falsehoods and Manipulation Act 2019 (Act 18 of 2019). https://sso.agc.gov.sg/Acts-Supp/18-2019/Published/20190625?DocDate=20190625. 25 Harms listed include communications likely to: (i) be prejudicial to the security of Singapore or any part of Singapore; (ii) be prejudicial to public health, public safety, public tranquillity or public finances; (iii) be prejudicial to the friendly relations of Singapore with other countries; (iv) influence the outcome of an election to the office of President, a general election of Members of Parliament, a by-election of a Member of Parliament, or a referendum; (v) incite feelings of enmity, hatred or ill-will between different groups of persons; or (vi) diminish public confidence in the performance of any duty or function of, or in the exercise of any power by, the Government, an Organ of the State, a statutory board, or a part of the Government, an Organ of the State or a statutory board.


Of these Directions, the Correction Directions and Targeted Correction Directions deserve particular comment, given that they appear to be unique in the world at the time of writing—in some ways an experiment that could influence the development of similar legislation in the region or beyond. The relevant provisions require people or Internet platforms to carry corrections alongside content deemed false.26 This means the original statement is not taken down, but remains available to readers along with the rebuttal by the authorities (the “Correction”). The Minister for Law has characterized this as action that “will not affect the right to free speech but could even encourage it by exposing people to more viewpoints”, because the facts will be put up along with the false content, and people can decide for themselves what they want to believe.27

The major social media platforms have, understandably, said they have reservations with regard to POFMA but will abide by it.28 Other objections can be grouped into two broad categories. One broad group of objections came from those who see POFMA as threatening civil liberties generally, and freedom of speech in particular. Prime Minister Lee Hsien Loong specifically addressed the criticism from Reporters Without Borders, stating that “what we have done has worked for Singapore and it’s our objective to continue to do things which will work for Singapore”.29 In fact, in his Second Reading Speech in Parliament, the Minister for Law highlighted that POFMA would be “better calibrated” than existing laws because those laws (the Broadcasting Act and the Telecommunications Act) are wider than POFMA in their takedown powers, whereas POFMA gives greater judicial oversight.

26 Section 21, Protection from Online Falsehoods and Manipulation Act 2019 (Act 18 of 2019). 27 ‘Parliament: Law Against Online Falsehoods Will Not Stifle Free Speech, Say Ministers’, The Straits Times, 1 April 2019. https://www.straitstimes.com/politics/parliamentlaw-against-online-falsehoods-will-not-stifle-speech-ministers. 28 Facebook, ‘Rights Groups Hit Out at Singapore’s Fake News Bill’, Reuters, 1 April 2019. https://www.reuters.com/article/us-singapore-politics-fakenews/facebook-rights-groups-hit-out-at-singapores-fake-news-bill-idUSKCN1RD279. 29 ‘Proposed Anti-fake News Law “Works for Singapore” Despite Criticism: PM Lee’, ChannelNewsAsia, 9 April 2019. https://www.channelnewsasia.com/news/singapore/proposed-anti-fake-news-law-works-for-singapore-pm-lee-11425686.


POFMA also deals specifically with “falsehoods that can be spread online with incredible speed, in a targeted manner, and to address such falsehoods, with speed, with proportionality” and Court oversight.30 While he was technically correct that “lawyers will know, when you have a narrower Bill, and the facts come within the narrower Bill, as opposed to the broader law in general, the narrower Act will apply”,31 many non-lawyer observers, journalists and academics have not shared this perception. In response to an open letter from 124 academics to the Minister for Education at the time (Ong Ye Kung), the Minister explained in Parliament that it was impossible for academic research to be caught by POFMA unless it was intentionally based on false observations or data, and that the law would not stifle political discourse.

The second (and, to some degree, overlapping) group of objections came from those who interpret the law as saying that it is the Minister who decides what is false. Some individuals and groups felt that the Minister should not be given this power, or that the power should rest with some independent authority or ombudsman. The Minister for Law explained in Parliament that the Courts “ultimately decide what is true and false” because Directions can be challenged.32 Public assurances have extended to a promise by the Minister for Law that the process for appealing against orders or directions under POFMA will be made “quick … as inexpensive as possible”, and simple enough to be made even without a lawyer.33

In the last few months of 2019, the Government invoked POFMA four times: against two opposition politicians, Brad Bowyer and Lim Tean (from different parties), one opposition party (the Singapore Democratic

30 Second Reading Speech by Minister for Law, K. Shanmugam, on the Protection from Online Falsehoods and Manipulation Bill, 7 May 2019. https://www.mlaw.gov.sg/news/parliamentary-speeches/second-reading-speech-by-minister-for-law-k-shanmugamon-the-protection-from-online-falsehoods-and-manipulation-bill. 31 Second Reading Speech by Minister for Law, K. Shanmugam, on the Protection from Online Falsehoods and Manipulation Bill, 7 May 2019. 32 ‘Government Makes Initial Decision on Falsehood but Courts Are Final Arbiter of Truth: Shanmugam’, The Straits Times, 2 April 2019. https://www.straitstimes.com/politics/govt-makes-initial-decision-on-falsehood-but-courts-are-final-arbiter-of-truth-k-shanmugam. 33 ‘Proposed Anti-fake News Law “Works for Singapore” Despite Criticism: PM Lee’, ChannelNewsAsia, 9 April 2019. https://www.channelnewsasia.com/news/singapore/proposed-anti-fake-news-law-works-for-singapore-pm-lee-11425686.


Party or the SDP), and one blogger based in Australia. Bowyer, Lim and the SDP complied with the Correction Directions, then subsequently published follow-up comments disagreeing with the use of POFMA against them and disputing that they had published falsehoods in the first place.34 The first four POFMA directions were issued against the makers of statements, although the then Senior Minister of State for Law, Edwin Tong, had said previously that POFMA was “designed for platforms, not individual publishers”.35 It was only when the Australia-based blogger refused to comply that the Government directed Facebook to post a correction notice on his post. Facebook complied, stating that it hoped “the Singapore government’s assurances that it will not impact free expression will lead to a measured and transparent approach to implementation”.36

The recipients of the POFMA directions have continued the debate publicly: Bowyer has stated that he believes the government read his post wrongly and was rebutting his opinion, not a “nefarious falsehood”, while the SDP has argued that its statement was “more of an interpretation of statistics” even though the “original rationale for the law was to counter deliberate falsehoods”.37 These arguments appeal to the earlier statements by Ministers that the law is not intended to stifle (inter alia) opinions, hypotheses or theories.38 The Singapore Government has in turn issued written rebuttals to articles by Bloomberg and the South China Morning Post (SCMP) which criticized these uses of POFMA, highlighting that the Correction Direction “allows the public

34 ‘Singapore’s Fake News Law: Protecting the Truth, or Restricting Free Debate?’, South China Morning Post, 21 December 2019. https://www.scmp.com/week-asia/politics/article/3043034/singapores-fake-news-law-protecting-truth-or-restricting-free. 35 ‘Measures Targeting Online Falsehoods Aim to “Remedy” Impact, “Not Punish Wrongdoers”: Edwin Tong’, Channel News Asia, 7 May 2019. https://www.channelnewsasia.com/news/singapore/online-falsehoods-bill-aim-to-remedy-impact-edwin-tong11511674. 36 ‘Facebook Urges Singapore Government to Respect “Free Expression” as It Complies with Fake News Law’, South China Morning Post, 30 November 2019. https://www.scmp.com/week-asia/politics/article/3040045/facebook-urges-singaporegovernment-respect-free-expression-it. 37 ‘Singapore’s Fake News Law: Protecting the Truth, or Restricting Free Debate?’, South China Morning Post, 21 December 2019. https://www.scmp.com/week-asia/politics/article/3043034/singapores-fake-news-law-protecting-truth-or-restricting-free. 38 ‘Why Academics Should Not Fear Online Falsehood Law’, The Straits Times, 9 May 2019. https://www.straitstimes.com/opinion/why-academics-should-not-fear-onlinefalsehood-law.


to access both the original posts and corrections, and decide for themselves which is true. … No information or view has been suppressed … We have never shied from answering our foreign critics on any issue. They can say what they please. All we insist upon is the right of reply”.39 To some extent, the public nature of the continuing debate, and the public availability of the original statements, is consistent with the Minister for Law’s earlier statement that POFMA “will not affect the right to free speech but could even encourage it by exposing people to more viewpoints”.40

When the COVID-19 pandemic struck Singapore in early 2020, POFMA proved to be a valuable tool in the fight against false information circulating online, which could otherwise have caused panic, hysteria, or discord. The Minister for Health, through the POFMA Office, issued orders against false claims that additional COVID-19 cases in Singapore were being covered up,41 against posts that (inter alia) alleged there were numerous infections in schools,42 and against claims that Singapore had run out of face masks.43 Separately, government websites and official government channels on WhatsApp and Telegram appeared useful in conveying facts and rebutting falsehoods relating to COVID-19, but it is unclear to what extent (if any) these helped counter misinformation and disinformation on private messaging platforms such as WhatsApp.44

39 ‘Singapore Government Officials Rebut Bloomberg, South China Morning Post Articles on Pofma’, The Straits Times, 31 December 2019. https://www.straitstimes.com/singapore/government-officials-rebut-bloomberg-south-china-morning-post-articleson-pofma. 40 ‘Parliament: Law Against Online Falsehoods Will Not Stifle Free Speech, Say Ministers’, The Straits Times, 1 April 2019. https://www.straitstimes.com/politics/parliamentlaw-against-online-falsehoods-will-not-stifle-speech-ministers. 41 Ng, M., ‘Pofma Invoked Against Website That Claimed Cover-Up on Coronavirus Case Numbers’, The Straits Times. https://www.straitstimes.com/singapore/pofma-invoked-against-websitethat-claimed-cover-up-on-case-numbers. 42 CNA, ‘Singapore States Times Issued Correction Direction Over Post Alleging COVID-19 Transmission in Schools’. https://www.channelnewsasia.com/news/singapore/singapore-states-times-correction-direction-pofma-covid19-school-12705166. 43 Armstrong, V., ‘COVID-19 in Singapore: A Fake News Frenzy’. https://www.taylorvinters.com/article/covid-19-in-singapore-a-fake-news-frenzy. 44 Jie, P., ‘In the Face of Covid-19, POFMA Has Proven as Effective as a “Wet Noodle”’. https://www.ricemedia.co/current-affairs-opinion-covid-19-pofma-wet-noodle/.


While most commentators and members of the general public had no issue with the use of POFMA against COVID-19 misinformation and disinformation, they were much more critical of its use during Singapore’s General Election 2020, held from end-June to 10 July 2020, when it was invoked several times during the campaign. In one notable example, POFMA was invoked against opposition candidate Dr. Paul Tambyah, Chairman of the Singapore Democratic Party (SDP) and a global authority on infectious diseases, who had claimed the government might have discouraged COVID-19 testing among migrant workers in February without consulting medical professionals. POFMA directions were also issued against posts discussing government spending on foreign students, and against the SDP’s claim that the government had planned to increase the population to 10 million.45 Opposition supporters characterized the use of POFMA as part of “hard-line tactics” by the government, and by extension the ruling party, the People’s Action Party (PAP).

Pursuant to the POFMA directions, the disputed statements were allowed to remain visible, but the government’s official statements were displayed side by side. It appears that some readers decided that the POFMA directions were politically motivated, and chose not to believe the official position. Some commentators also argued that excessive use of POFMA may have created unhappiness about the PAP’s style of campaigning, particularly in light of the objections (described in the preceding paragraphs) that the government should not have the power to decide what is false. This may have contributed in part to the drop in support for the PAP, whose 2020 vote share of 61.2% represented a drop of 8.7 percentage points compared with 2015 (69.9%).

If the public perceives that a law is over-used, then several risks arise. Despite robust government rebuttals of accusations that the law is directed against the political opposition, it is possible that in time the public may become cynical. In addition, depending on how the various scenarios play out, the public may start to think that if an online statement does not trigger legal action, then it might well be true. A well-resourced and sophisticated attacker (such as a malicious state) may game the system by issuing a series of vague online falsehoods which trigger the law, in order to create a subsequent narrative of repression or conspiracy theory. To

45 Jaipragas, B., ‘Has Singapore’s Fake News Law Passed the Election Test?’ https://www.scmp.com/print/week-asia/politics/article/3092228/has-singapores-fake-news-law-passed-election-test.


avoid these scenarios, laws should be used strategically, for only the most egregious cases, and any follow-up communications (debunking, clarifications) should be composed not only for cold technical accuracy but also with a view to winning public trust. Lessons can be learned from nations such as Latvia, mentioned later in this chapter, which are organically promoting trust within society and between society and the government.

Comparison of Singapore’s Law and Other Legislation

More than a dozen countries have responded to the threat of disinformation by legislating, or attempting to legislate, against it, with varying degrees of success.46 A brief examination of the differences and similarities is appropriate here.

Germany passed the Network Enforcement Act (“Netzwerkdurchsetzungsgesetz”, colloquially referred to as the “Facebook Law” or “NetzDG”) in 2017. The law requires social media platforms to remove “illegal content” or face fines of up to 50 million Euros. Unlawful content is defined as anything that violates Germany’s Criminal Code, which bans incitement to hatred, incitement to crime, the spread of symbols belonging to unconstitutional groups (e.g. Nazis), and more. Two comparative observations can be made. First, NetzDG gives the authorities powers to remove content, whereas Singapore’s law provides the power to order corrections, with removal of content as a secondary focus. Second, Germany’s law covers content which is already illegal under existing laws, whereas Singapore’s law creates new offences for the intentional spread of falsehoods in order to cause public harm.

France has passed laws aimed at curbing the spread of misinformation during the country’s election campaigns, by enforcing greater media transparency and blocking offending sites.47 The laws, passed in 2018, allow political parties or candidates to complain about widely spread assertions deemed to be false or “implausible” during and in the run-up to elections. The general premise has come in for a great deal of criticism from free speech advocates, as have the specific provisions which detail how a judge must, within 48 hours, decide whether the allegedly false

46 A Guide to Anti-misinformation Actions Around the World, Poynter. https://www.poynter.org/ifcn/anti-misinformation-actions. 47 Yasmeen Serhan, ‘Macron’s War on Fake News’, The Atlantic, 6 January 2018. https://www.theatlantic.com/international/archive/2018/01/macrons-war-on-fakenews/549788/.


information could alter the course of an election, and whether it has been spread online on a massive scale. France’s law applies specifically to elections, whereas Singapore’s law applies at all times. France’s process can be invoked by any political party or candidate, whereas Singapore’s law can only be invoked by a Minister. France’s law also does not provide for correction directions, only for blocking.

What about bringing malicious entities—those which seek to foment subversive activity—to account? Many states are concerned about actors based outside their territory. The US has led the way in indicting Russians for interfering in the 2016 Presidential and 2018 mid-term elections. While Singapore may not follow this lead anytime soon, Singapore’s legislation has some bite beyond its borders, as the provisions of POFMA apply to persons and actions “whether in or outside Singapore”, so long as the falsehoods are communicated in Singapore or affect people in Singapore.48 The enforcement of these clauses will depend on mutual legal assistance from other states; the laying of charges in the first place will depend on factors such as the ability of Singapore to accurately identify the actors, and the political implications of naming them.

Besides state actors, there are corporate “hired guns” who offer services that target online political messaging to manipulate people, the most notorious being Cambridge Analytica and its parent, SCL Group (formerly Strategic Communication Laboratories). If the Singapore authorities believe that such services might have a deleterious impact on Singapore, or might influence an electoral outcome in Singapore, then it would be logical for the Singapore government to take action. Under POFMA, the service providers would be committing an offence if they provided “services for communication of false statements of fact in Singapore” (Section 9), or if they made or altered bots for the communication of false statements of fact in Singapore (Section 8). In the murky world of influence operations which do not employ false statements of fact but instead use biased opinions or innuendo, such operators could be liable under Singapore’s Sedition Act if they carried out acts which might, inter alia, create hatred or contempt against the Government, raise discontent or disaffection among Singaporeans, or promote feelings of ill-will and hostility between different races or classes.

48 Protection from Online Falsehoods and Manipulation Act 2019 (Act 18 of 2019) s 7(1). https://sso.agc.gov.sg/Bills-Supp/10-2019/Published/20190401?DocDate=20190401.


Non-legal Aspects: Media and Digital Literacy, and Building Trust

Singapore government officials have repeatedly acknowledged the need to look beyond laws alone to counter online falsehoods. On the same day POFMA was passed, Communications and Information Minister S. Iswaran acknowledged that “legislation is necessary but not sufficient in the fight against online falsehoods”. He added that Singapore’s “first and most important line of defence against online falsehoods is a well-informed and discerning citizenry, equipped with the tools to combat online falsehoods”.49 This would include government-supported initiatives to build fact-checking capabilities and digital literacy. In this context, it is worth noting that while surveys conducted in 2017 and 2018 showed that a high percentage of Singaporeans were genuinely concerned about fake news online, it did not necessarily follow that Singaporeans had the skills needed to differentiate fake news from authentic news.50 A 2018 Ipsos survey showed that 91% of Singaporeans surveyed were unable to identify false news headlines, despite 79% indicating they were “somewhat” or “very confident” in their ability to detect fake news.51 This underlines the fact that strengthening the resilience of the people, instilling critical thinking and providing very swift access to the facts—measures under consideration by almost every nation taking this threat seriously—are vital in Singapore too. In this regard, the Singapore Parliamentary Select Committee endorsed the need for a national public education framework to “coordinate and guide public education initiatives”, for enhanced training of journalists to ensure accurate journalism, and for a transparent and independent fact-checking coalition. At the grassroots level, the Committee called for more honest discussions, conversations and interactions with the public, to strengthen social cohesion, reduce the distance between society and the authorities, and build trust in public institutions.

49 ‘Ministers Issuing Directives, with Scope for Judicial Oversight, Strikes Best Balance in Combating Fake News: Iswaran’, ChannelNewsAsia, 8 May 2019. https://www.channelnewsasia.com/news/singapore/ministers-given-authority-issue-directives-fake-newspofma-bill-11514544. 50 ‘Most S’poreans Concerned About Fake News: BBC Study’, TODAY, 25 May 2017. https://www.todayonline.com/singapore/most-sporeans-concerned-aboutfake-news-bbc-study. 51 ‘Most S’poreans Concerned About Fake News: BBC Study’, TODAY, 25 May 2017.


(A) Critical thinking/Media literacy

Several initiatives have been announced (or strengthened) following the release of the Select Committee’s report. In July 2019, the Digital Media and Information Literacy Framework was established to “provide public organisations with guidelines on how to spot fake news”52 when designing their digital literacy programmes.53 A significant part of the Framework is public education.54 There is recognition, however, that various segments of society have different needs; the relevant authorities will therefore tailor their programmes and tools to suit the respective segments. For example, the National Library Board’s Source.Understand.Research.Evaluate. (S.U.R.E.) campaign has been enhanced to better cater to the general public, adults and children.55 Several fact-checking resources have also been adopted by the Media Literacy Council (MLC) to teach students how to evaluate the reliability of news sources and to tell the difference between fact and opinion.56 An example is the News and Media Literacy Toolkit, targeted at students aged 13–18. It will be produced as lesson plans, with activity sheets made available on the MLC’s website and distributed to

52 ‘Two New Media Literacy Resources to Teach Youth How to Spot Fake News’, TODAY, 11 March 2019. https://www.todayonline.com/singapore/two-new-media-literacy-resources-teach-youth-how-spot-fake-news. 53 ‘National Framework to Build Information and Media Literacy to Be Launched in 2019: S Iswaran’, ChannelNewsAsia, 2 November 2018. https://www.channelnewsasia.com/news/singapore/framework-build-information-media-literacy-launched-2019-iswaran-10890438. 54 ‘National Framework to Build Information and Media Literacy to Be Launched in 2019: S Iswaran’, ChannelNewsAsia, 2 November 2018. 55 National Library Board Singapore, S.U.R.E. Campaign. http://www.nlb.gov.sg/sure/sure-campaign/. SURE for Life focuses on educating the public about “the threats deliberate online falsehoods pose to the peace and stability of society”, SURE for Work teaches techniques to filter for credible sources of information at work, while SURE for School focuses on information literacy and critical thinking skills for students and educators. 56 ‘Two New Media Literacy Resources to Teach Youth How to Spot Fake News’, TODAY, 11 March 2019. https://www.todayonline.com/singapore/two-new-media-literacy-resources-teach-youth-how-spot-fake-news.


secondary schools.57 The MLC’s Better Internet Campaign is also looking at strengthening the role of parents in guiding youth and children to use digital technology responsibly, by organizing digital literacy workshops and producing a 20-page parent resource.58

(B) Strengthening trust

At the same time, a whole-of-society approach to countering disinformation requires a healthy level of trust from all levels of society. Building trust requires: (a) the use of open-source information—which is transparent and easily accessible to the public—to paint a coherent picture of the truth and debunk falsehoods; and (b) encouraging open debate on issues that falsehoods have muddied. Debates can facilitate fact-checking, and may help reduce the confirmation biases resulting from echo chambers. There is a need to ensure this is sustainable in Singapore, and that it can build on the dialogues and exchanges already happening within different segments of society.

In the Select Committee report, recommendations were proposed to encourage trust among people and communities. One recommendation was for organizations to provide necessary clarifications on falsehoods which may affect social cohesion. This can be done via people-to-people communication, “safe spaces” to exchange honest perspectives on sensitive issues, role models to cultivate a core within society who are less susceptible to falsehoods, and reaching into and across “echo chambers”. Examples given in the report include the Inter-Racial and Religious Confidence Circles (IRCC) and the Inter-Religious Organisation (IRO), which provide platforms promoting understanding between faith and ethnic groups.

The Select Committee report also highlighted the need to maintain trust in public institutions when responding to or taking measures against online falsehoods.59 The report proposed that this can be done

57 Similar toolkits will also be distributed at the primary and junior college levels of education, and will include real-life case studies for students to better relate to issues of online falsehoods. 58 ‘A Better Internet for a Better World’, 6 April 2018. https://www.imda.gov.sg/infocomm-and-media-news/buzz-central/2018/3/a-better-internet-for-a-better-world. 59 Report of the Select Committee on Deliberate Online Falsehoods—Causes, Consequences and Countermeasures, p. 86.


by providing timely information to the public in response to online falsehoods, pre-empting vulnerabilities and putting out information in advance, and communicating with the public clearly.60 The report also observed that the government should assure the public of the “integrity of the information … [put] forward concerning public institutions”, and that explanations should be given if there are reasons why public institutions are prevented from releasing full information to the public.61 Examples of government-initiated efforts cited in the report to maintain trust in public institutions include REACH, which practises ground-sensing and engages with different segments of society to understand issues and to formulate policies and management strategies better.62 This two-pronged approach to encouraging trust within Singaporean society is significant: it promotes understanding and confidence horizontally, among different groups of people, and vertically, between society and the state. Cultivating societal trust can then encourage “people to be more discerning and skeptical in the face of divisive disinformation”.63 Healthy levels of trust would therefore encourage individuals to proactively look for reliable sources of information in the face of rumours and online falsehoods.

Regional/International Level: Comparing Notes

Regional and international cooperation between like-minded states facing similar tactics on the part of adversaries (though not necessarily the same adversaries) will also be critical. Some nations have organized themselves in an organic, ground-up fashion to counter disinformation, as well as to promote trust within society and between society and the government. Galvanizing the assistance of the grassroots to counter disinformation can be extremely useful, while encouraging vigilance on the ground. Singapore can take a leaf from these experiences. Lithuania and other Baltic states such as Estonia and

60 Report of the Select Committee on Deliberate Online Falsehoods—Causes, Consequences and Countermeasures, p. 88. 61 Report of the Select Committee on Deliberate Online Falsehoods—Causes, Consequences and Countermeasures, p. 89. 62 REACH is the abbreviation for “Reaching Everyone for Active Citizenry @ Home”. 63 Report of the Select Committee on Deliberate Online Falsehoods, p. 82.


Latvia, for example, have groups of online volunteers working to counter disinformation. The Baltic Elves are volunteers who counter Kremlin propaganda and disinformation online by surveying content on social media and weeding out pro-Russian trolls and fake accounts.64 They also fact-check, and share with other “elves” the sources of fake news they have found. This effort is wholly voluntary, and stems from the commitment of individuals to ensuring their state is not subjected to falsehoods or further Russian interference. On a more official level, the North Atlantic Treaty Organization (NATO)—chiefly through the NATO Strategic Communications Centre of Excellence—has well-developed coordination among many of its member states, which have experienced disinformation and influence campaigns from Russia. In this context, it is worth observing that such sharing across East and West is in its infancy; in Asia, collaborative efforts comparable to what has been seen in Europe have been few and far between. Small steps have, however, been made in Southeast Asia. The Association of Southeast Asian Nations (ASEAN), after the 14th Conference of ASEAN Ministers Responsible for Information (AMRI) and the Fifth Conference of ASEAN Plus Three Ministers Responsible for Information, released an introductory framework to counter disinformation in the region. It detailed the need to focus on education, norms and grassroots participation.65 Singapore’s efforts to learn best practices from other states and organizations should continue apace. Such exchanges encourage learning, understanding and coordination in developing domestic policies and standards to manage fake news within the state.

Conclusion

Three final observations are in order.

First, Singaporeans have become accustomed to drills and various initiatives aimed at preparing individuals and society to face the

64 Beata Stur, ‘Baltic “Elves” Take on Russian “Trolls”’, 9 October 2017. https://www.neweurope.eu/article/baltic-elves-take-russian-trolls/. 65 Ministry of Communications and Information, ‘Joint media statement at the

14th Conference of ASEAN Ministers Responsible for Information (AMRI) and Fifth Conference of ASEAN Plus Three Ministers Responsible for Information’, 10 May 2018. https://www.mci.gov.sg/pressroom/news-and-stories/pressroom/2018/5/jointmedia-statement-at-14th-conference-of-amri-and-5th-conference-of-amri-plus-3-on-10may-2018.


threat and aftermath of “kinetic” threats such as terror attacks. Partly on account of initiatives such as SGSecure, the overall discourse has moved markedly over the years—not only are threats now discussed in an open and forthright fashion, but issues concerning societal resilience in post-incident scenarios (including mutual trust between communities) also receive growing attention.66 This has helped holistic preparation for kinetic threats (and specifically for “day-after” scenarios following a terrorist attack). It may be necessary to update movements such as SGSecure, and to rethink and refine the strategies currently used to educate Singaporeans on national security. Any refresh should include measures and strategies against slow-burn scenarios, and in particular disinformation campaigns that could disrupt the fabric of Singaporean society. A start has already been made in updating Singaporean society’s readiness in the face of global misinformation and disinformation campaigns. In February 2019, Digital Defence was added as a sixth pillar of Singapore’s Total Defence concept, complementing the existing pillars (Military, Civil, Economic, Social and Psychological Defence). The rationale for the addition of Digital Defence was that it was needed as a response to “Cyberattacks, disinformation campaigns, and fake news” which could be used to divide and weaken the population.67

Secondly, aggressors are continually honing their methods. Those who set up fake accounts aimed at influencing the US mid-term elections took far greater pains to hide their identities than the Kremlin-linked Internet Research Agency, which interfered in the 2016 US Presidential election. Technology (Artificial Intelligence, and the use of “deep fakes”, which can synthesize video and audio in a manner indistinguishable from the real thing) will increasingly feature in the arsenal of subversive actors. Those playing defence are kept, for the most part, on the back foot. What this might mean is that any new laws, or amendments to existing ones, aimed at combatting fake news and disinformation in or out

66 SGSecure is a national movement that calls on Singaporeans to be part of the nation’s anti-terrorism efforts, by sensitising, training and mobilising the community against terror attacks; it also equips citizens to be mobilisers who help their communities stay united and resilient, and return to normalcy, in the aftermath of an attack. See https://www.mha.gov.sg/hometeamnews/our-community/ViewArticle/terrorism-at-sgsecure-launch. 67 ‘Digital Defence to Be Sixth Pillar of Total Defence’, The Straits Times, 15 February 2019. This is the first time a new pillar has been added since the inception of Total Defence in 1984.


of an election period will have to be future-proof, taking these evolutions into account. Policies to counter falsehoods and disinformation would work better if designed with evidence such as longitudinal studies on the impact of falsehoods. There is also a need to monitor the actions (and intent) of malicious actors and take pre-emptive measures, instead of relying solely on analysis of the present impact of disinformation campaigns. We would want to avoid situations where new legal provisions focus solely on containing the present; such provisions will quickly become anachronisms.

Thirdly, the recognition of an arms race between defender and aggressor should nevertheless not lead to overreach—legal or otherwise—on the part of nations seeking to curb fake news, subversion and information manipulation in and out of their electoral cycles. The means employed should be calibrated. If sweeping means are used to counter threats, for example by walling off a “national” cyberspace (elsewhere, others are already beginning to talk about the “Splinternet”), this may diminish the diversity of discourse and the plurality that played a part in enabling that country to develop and prosper in the first place. All concerned should be mindful that excessive zeal in attempts to defend national resilience might mean the defender simply becomes an imitative shadow of the adversary—in other words, in seeking to protect itself, the nation might discard the very thing that it is in reality trying to defend.