Mobile Communication in Asia: Local Insights, Global Implications
Carol Soon Editor
Mobile Communication and Online Falsehoods in Asia: Trends, Impact and Practice
Mobile Communication in Asia: Local Insights, Global Implications
Series Editor: Sun Sun Lim, Singapore University of Technology & Design, Singapore, Singapore
This book series thoroughly canvases the research community that studies the social impact of mobile communication in Asia, bringing to the forefront research that has not attained a sufficiently international profile. Currently, this research community can be broadly divided into scholars within and outside of Asia who work on Asia and publish in English, and Asian scholars who work in Asia and publish in their native languages. While the work of the former gains natural exposure and traction given the Anglophone bent of the international academic fraternity, the research output of the latter group of scholars is disseminated and consumed within narrower domestic realms. Yet, the culturally specific and idiosyncratic ways in which mobile communication is appropriated privilege local perspectives with rich insights that can enhance the global conversation on mobile communication and its social impact. This series thus reaches out to the second group, translating their work into English and into formats that reach the highest international standards of academic rigor. Authors can submit book proposals by using the form available on the website. Proposals must be addressed to the series editor, Dr. Sun Sun Lim ([email protected]), and the Springer publishing editor, Alex Westcott Campbell. All book proposals will be reviewed by the editor and the editorial board. If accepted, proposals will be sent to Springer for final approval. Springer will issue a contract to the author. When completed, the manuscript will be double peer-reviewed by the editorial board before it is sent to Springer.
Editor
Carol Soon, Institute of Policy Studies, National University of Singapore, Singapore, Singapore
ISSN 2468-2403 / ISSN 2468-2411 (electronic)
Mobile Communication in Asia: Local Insights, Global Implications
ISBN 978-94-024-2224-5 / ISBN 978-94-024-2225-2 (eBook)
https://doi.org/10.1007/978-94-024-2225-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature B.V. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature B.V. The registered company address is: Van Godewijckstraat 30, 3311 GX Dordrecht, The Netherlands
Acknowledgements
Over homemade carrot cake at the university canteen, Prof. Lim Sun Sun extended an invitation to me to put together a volume on a timely, yet under-researched topic. I am grateful to her for the opportunity to put together a book on the latest research and practices in Asia that are dedicated to unravelling the mystery of online falsehoods on a hard-to-study platform: mobile instant messaging services. My gratitude goes to the team at Springer Nature who helped to bring this volume to print. In the course of doing research on online falsehoods in the past five years, I have had the pleasure of working with academic and practitioner groundbreakers who are unyielding in their efforts to develop and improve interventions to combat the scourge of online falsehoods. This volume brings together great minds and unyielding spirits. I express my heartfelt thanks to the contributing authors of this volume, for working patiently with me to bring the book to print and for all the sweat and tears they have shed untying the Gordian knot of online falsehoods. I deeply appreciate the Institute of Policy Studies and the National University of Singapore for their steadfast support and belief in my work. Special thanks go to Shawn Goh and Nandhini Bala Krishnan, whose partnership made the work fun, even during nerve-wracking moments. Together, we spread the word to policymakers, educators and practitioners who seek data-driven insights. And finally, to my son, Dev, whose sense and sensibility are my source of comfort and strength.
Contents
1. Complexities in Falsehoods Management and Implications for Research and Practice (Carol Soon)

Part I: Trends in the Proliferation of Online Falsehoods and MIMS

2. COVID-19 Falsehoods on WhatsApp: Challenges and Opportunities in Indonesia (Engelbertus Wendratama and Iwan Awaluddin Yusuf)
3. The Unbelieving Minority: Singapore's Anti-Falsehood Law and Vaccine Scepticism (Swati Maheshwari and Ang Peng Hwa)
4. Orders of Discourse and the Ecosystem of Rumour Management on WeChat (Mingyi Hou)
5. Understanding the Flow of Online Information and Misinformation in the Australian Chinese Diaspora (Anne Kruger, First Draft Team, and Stevie Zhang)
6. The Battle between the Thai Government and Thai Netizens Over Mis/Disinformation During COVID-19 (Pijitra Suppasawatgul)

Part II: Impact of Online Falsehoods Transmitted via MIMS

7. Users, Technologies and Regulations: A Sociotechnical Analysis of False Information on MIMS in Asia (Shawn Goh)
8. No "Me" in Misinformation: The Role of Social Groups in the Spread of, and Fight Against, Fake News (Edson C. Tandoc Jr., James Chong Boi Lee, Chei Sian Lee, Joanna Sei Ching Sin, and Seth Kai Seet)
9. Understanding the Nature of Misinformation on Publicly Accessible Messaging Platforms: The Case of Ivermectin in Singapore (Chew Han Ei and Chong Yen Kiat)
10. Did You Hear? Rumour Communication via Instant Messaging Apps and Its Impact on Affective Polarisation (Brenna Davidson and Tetsuro Kobayashi)
11. Fact Checking Chatbot: A Misinformation Intervention for Instant Messaging Apps and an Analysis of Trust in the Fact Checkers (Gionnieve Lim and Simon T. Perrault)

Part III: Practice and Interventions

12. Regulating Online Pandemic Falsehoods: Practices and Interventions in Southeast Asia (Netina Tan and Rebecca Lynn Denyer)
13. Countering Fake News on WhatsApp in Malaysia: Current Practices, Future Initiatives and Challenges Ahead (Bahiyah Omar)
14. Towards an Effective Response Strategy for Information Harms on Mobile Instant Messaging Services (Tarunima Prabhakar, Aditya Mudgal, and Denny George)
15. Misinformation in Open and Closed Online Platforms: Impacts and Countermeasures (Lucy H. Butler and Ullrich K. H. Ecker)
16. Turning MIMS from a Curse into a Blessing: Tripartite Partnership for Tackling Online False Information in Taiwan (Chen-Ling Hung, Shih-Hung Lo, and Yuan-Hui Hu)

Index
Editor and Contributors
About the Editor

Carol Soon is Senior Research Fellow and head of the Society and Culture department at the Institute of Policy Studies, National University of Singapore, Singapore.
Contributors

Lucy H. Butler, University of Western Australia, Perth, Australia
Brenna Davidson, University of Michigan, Ann Arbor, MI, USA
Rebecca Lynn Denyer, Political Science Department, McMaster University, Hamilton, ON, Canada
Ullrich K. H. Ecker, University of Western Australia, Perth, Australia
Chew Han Ei, Institute of Policy Studies, National University of Singapore, Singapore, Singapore
First Draft Team, First Draft, Woolloomooloo, NSW, Australia
Denny George, Tattle Civic Tech, Gurgaon, India
Shawn Goh, Oxford Internet Institute, University of Oxford, Oxford, UK
Mingyi Hou, Department of Culture Studies, School of Humanities and Digital Sciences, Tilburg University, Tilburg, The Netherlands
Yuan-Hui Hu, Public Television Service Foundation, Taipei, Taiwan
Chen-Ling Hung, The Graduate Institute of Journalism, National Taiwan University, Taipei, Taiwan
Ang Peng Hwa, Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
Chong Yen Kiat, Institute of Policy Studies, National University of Singapore, Singapore, Singapore
Tetsuro Kobayashi, City University of Hong Kong, Hong Kong, China
Anne Kruger, First Draft, Woolloomooloo, NSW, Australia
Chei Sian Lee, Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
James Chong Boi Lee, Ministry of Social and Family Development, Singapore, Singapore
Gionnieve Lim, Information Systems Technology and Design, Singapore University of Technology and Design, Singapore, Singapore
Shih-Hung Lo, Department of Communication, National Chung Cheng University, Chiayi, Taiwan
Swati Maheshwari, Independent Researcher, Singapore, Singapore
Aditya Mudgal, National Law School of India University, Bengaluru, India
Bahiyah Omar, Universiti Sains Malaysia, Penang, Malaysia
Simon T. Perrault, Information Systems Technology and Design, Singapore University of Technology and Design, Singapore, Singapore
Tarunima Prabhakar, Tattle Civic Tech, Gurgaon, India
Seth Kai Seet, Centre for Information Integrity and the Internet, Nanyang Technological University, Singapore, Singapore
Joanna Sei Ching Sin, Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
Carol Soon, Institute of Policy Studies, National University of Singapore, Singapore, Singapore
Pijitra Suppasawatgul, Department of Journalism and New Media, Faculty of Communication Arts, Chulalongkorn University, Bangkok, Thailand
Netina Tan, Political Science Department, McMaster University, Hamilton, ON, Canada
Edson C. Tandoc Jr., Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
Engelbertus Wendratama, PR2Media, Yogyakarta, Indonesia
Iwan Awaluddin Yusuf, Department of Communications, Universitas Islam Indonesia, Yogyakarta, Indonesia
Stevie Zhang, First Draft, Woolloomooloo, NSW, Australia
Chapter 1
Complexities in Falsehoods Management and Implications for Research and Practice
Carol Soon
Abstract Online falsehoods, in the form of fake news, misinformation and disinformation disseminated via electronic communications, remain a critical problem confronting governments, social media companies and civil society. In Asia, a wide slate of stakeholders, comprising governments, civil society and social media platforms, have responded in myriad ways to contain the problem (e.g., legislative moves, takedowns by social media companies, and fact checking efforts by civil society and academia). However, these existing measures tend to focus on open social media platforms such as Facebook, Instagram and YouTube. As the reliance on mobile devices for information seeking and sharing continues to grow, the spread of online falsehoods on mobile instant messaging services (MIMS) such as WhatsApp, LINE, Telegram and WeChat has become a growing problem. This chapter identifies the gaps in existing research and interventions in falsehoods management. It highlights what is to come in the rest of the edited volume: analyses of the latest developments and emerging trends pertaining to the spread of falsehoods on MIMS, innovative methodological work that seeks to solve the puzzle of online falsehoods on MIMS, and key lessons that can be learnt from different stakeholders who are doing important work in countering online falsehoods on MIMS in Asia.

Keywords Online falsehoods · Mobile instant messaging services (MIMS) · Mega apps · Audio-based misinformation · Asia
1.1 The State of Play Today

Online falsehoods, in the form of fake news, misinformation, disinformation and information operations disseminated via electronic communications, have received much scrutiny among scholars, practitioners and policymakers in the past few years.
The term "fake news" came on the heels of two political events of international significance in 2016: Brexit, which culminated in the UK leaving the European Union, and the US Presidential Election, which led to the election of Donald Trump as president. The ripples caused by online falsehoods on elections were also detected in Asia, such as during the Jakarta gubernatorial election in 2017 and the Indian general election in 2019. In the case of the former, online falsehoods contributed to the criminal prosecution of a political candidate. Fake news is one type of false information in the information ecology. The use of the term "fake news" is problematic as it fails to adequately describe the complexities of the phenomenon. Politicians have weaponised the term by using it to attribute blame to news organisations whose reporting they disagree with. First Draft's definitional framework provides a useful illustration of the diverse types of problematic information, categorised based on the intent (or lack thereof) to deceive and to harm (Wardle & Derakhshan, 2017). Its information disorder framework has three distinct components: mis-information, when false information is shared but no harm is meant; dis-information, when false information is knowingly shared to cause harm; and mal-information, when genuine information is shared to cause harm, often by moving private information into the public sphere. Examples of each are provided in Fig. 1.1. This framework is cited by several authors in this volume. Depending on their subject matter, the authors in this book will use different terminologies. Given the definitional parameters of existing terminologies, the term "online falsehoods", which covers a wide range of falsehoods that are spread on digital platforms regardless of their form and intent, is used in this chapter.

More than five years on, the problem of online falsehoods clearly remains a critical one confronting and confounding governments, social media companies and civil society. This is evinced by the wide-ranging actions that have been rolled out by various stakeholders from the public, private and people sectors, who have attempted to counter the effects of online falsehoods through different legislative and non-legislative means. Within Southeast Asia, a range of measures has been rolled out to counter online falsehoods; see Fig. 1.2 for some of the measures adopted by various stakeholders in the region. They include the enactment of "fake news laws" in Malaysia, Myanmar, Vietnam and Singapore, and the establishment of new government agencies and taskforces to monitor and respond to online content. Governments are also organising cyber armies to wage wars online. In the Philippines, the Department of Information and Communications Technology launched its first cybersecurity platform to identify those responsible for misinformation and election-related threats on social media, in addition to carrying out counter-terrorism surveillance and intercepting online drug traffickers (Umali, 2019). In addition to governments, social media companies have rolled out a slew of measures to tackle the problem, such as taking down content and removing accounts that publish online falsehoods. In 2021, Twitter launched Birdwatch to nudge a global community response to misinformation. Birdwatch allows people to identify information they believe is misleading and write notes that provide context
Fig. 1.1 First Draft’s framework of information disorder
Fig. 1.2 Measures by different stakeholders to combat online falsehoods
(Coleman, 2021). Other measures involving civil society and academia include fact checking efforts such as Indonesia's Mafindo and Cek Fakta, and local and transnational collaborations to promote digital literacy. The Institute for War and Peace Reporting, an independent not-for-profit organisation that trains professional and citizen journalists to help fight fake news in Southeast Asia, has been active in
competency-building in different Asian countries. Its efforts include an educational COVID-19 fact checking game in Malaysia, a radio series and a fact checking campaign in Cambodia, podcasts and newsroom monitoring in Myanmar, workshops in Thailand and webinars in Vietnam (Pañares, 2021).
1.2 Cause for Action and Retaliation

The measures from various stakeholders tend to focus on open social media platforms such as Facebook, Instagram and YouTube. However, as the reliance on mobile devices for information seeking and sharing continues to grow, the spread of online falsehoods on mobile instant messaging services (MIMS) such as WhatsApp, LINE, Signal, Telegram and WeChat is a growing problem that confounds academics, practitioners and policymakers. As of 2022, WhatsApp is the most popular mobile messenger app worldwide, with more than two billion monthly active users globally, followed by WeChat with 1.2 billion users and Facebook Messenger with 988 million global users (Dixon, 2022). WhatsApp is also increasingly being used as a platform for news-seeking in Asia, Latin America and Africa (Newman, 2022). However, as MIMS become more popular as a source of information and news, the spread of "fake news" via chat groups that attract thousands of followers on the platform has become a problem (Newman, 2022). In Singapore, MIMS are where people encountered false information most frequently (Soon & Goh, 2021).

Academic research that seeks to determine the effects of media and technology is confronted with challenges in determining causation. This task is gargantuan, given that the impact of online falsehoods on society is often compounded by and connected to entrenched institutional and structural problems (e.g., polarisation in the political system, partisan media and a fractious society). However, academic work, as well as observations by the media and civil society organisations, points to the correlation between specific incidents and responses generated among or towards certain communities.

First, falsehoods that exacerbate existing fears and suspicions towards specific communities have contributed to acts of violence and hate crimes targeted at those communities, with MIMS like WhatsApp contributing to an "ecosystem of hate" (Nizaruddin, 2021). In India, one of WhatsApp's largest markets with over 200 million users, false and malicious information against minority communities like Muslims has resulted in deadly attacks. An estimated 20 WhatsApp-related lynchings were reported to have taken place between 2017 and 2020. Investigations into these incidents found that people often forwarded false messages and doctored videos to large groups, some of which had more than 100 participants (The Wire Staff, 2017). Similar intercommunal violence stemming from misinformation has also been observed elsewhere in the Global South (The Sentinel Project Staff, 2021).

Online falsehoods disseminated on MIMS have also been linked to political unrest and turmoil, periods of conflict when people are especially vulnerable as they are
hungry for information. For instance, the political misinformation found in Brazilian WhatsApp groups during the presidential election was characterised by polarising and conspiratorial content (Machado et al., 2019). At the time of writing this chapter, the Ukrainian-Russian war was in its tenth month. While Telegram provided a lifeline to Ukrainians who used the platform to stay in touch with family and friends dispersed across the country and overseas, it was also used to spread disinformation to mislead those within and outside of Ukraine. For instance, two days after Russia invaded the country, a fake video of President Volodymyr Zelensky urging Ukrainian armed forces to surrender appeared on a Telegram account (Simonite, 2022). The fake account reached 20,000 followers on Telegram before it was shut down (Baudoin-Laarman, 2022). Other online falsehoods include misleading videos of unrelated explosions that were posted within hours of Russia's invasion and were seen by thousands of people (Holroyd, 2022).

Besides fuelling confusion, tensions and conflicts within a country, falsehoods on MIMS have also exacerbated cross-border tensions. MIMS like WeChat are used by diaspora communities to connect with their loved ones and keep up with the latest happenings in their home country. However, they are also used as a tool to influence diaspora communities' opinions and mobilise actions in their host country. For instance, in the US, WeChat was used to organise opposition towards affirmative action policies at universities that were framed as discriminatory against Asians (Hsu, 2018). In Australia, advertisements targeting the Labor party and its tax policy were published without any party attribution on WeChat, raising concerns that misinformation could be circulated without oversight and could potentially influence Australian Chinese voters (Davies & Kuang, 2022).

The COVID-19 pandemic further demonstrates the repercussions of falsehoods on public health. Globally, deaths and hospitalisations have been associated with falsehoods in the forms of rumours, conspiracy theories and discriminatory allegations blaming specific groups for the pandemic (Islam et al., 2020). In their three-wave panel study involving 1,023 residents in Singapore, Kim and Tandoc (2022) found that exposure to online misinformation was correlated with self-reported behaviours such as eating more garlic and regular nose rinsing with saline. The pandemic became a double whammy for vulnerable communities such as the Muslim population in India, who were blamed in coronavirus conspiracy theories for the spread of the virus (Apoorvanand, 2020).
1.3 Community, Confidentiality and Content: Affordances or Pitfalls?

By providing a low-cost or free channel for the exchange of messages and calls between users in dyads and groups (free of data charges when tapping on free internet networks), MIMS have transformed the communication landscape. Mega apps like WeChat, with their multiple functions (e.g., text and voice messaging, broadcast
messaging, video calling, conference calling and e-payments), are becoming an indispensable part of people's daily lives. The challenges faced by policymakers and practitioners in tackling the problem of online falsehoods on MIMS, and by researchers in studying the problem, can be attributed to the three Cs of MIMS: Community, Confidentiality and Content.

The group messaging features present in many MIMS create the ideal relational conditions for the propagation of online falsehoods, what The Atlantic writer Alexis Madrigal called the "dark social": "closed systems of relationships invisible to the general public" (Madrigal, 2012). For example, WhatsApp enables users to share texts, files, audio and video clips and voice messages, and to make voice calls with up to 32 people (WhatsApp, 2022a, 2022b, 2022c). Telegram, which allows users to interact through usernames without revealing their phone numbers, lets users create groups of up to 200,000 people and broadcast to unlimited audiences (Telegram, 2022). Besides allowing for the formation of small and large communities that transcend geographical boundaries, end-to-end encryption technology, a key feature of most MIMS, engenders a sense of confidentiality and encourages user perceptions of privacy and safety. While regular chats on Telegram are not end-to-end encrypted (i.e., the messages exchanged are readable on Telegram's servers), its secret chats are (Telegram, 2022). MIMS such as WhatsApp are also constantly updating their safety and privacy policies in line with the evolving online space.

Together with the two aforementioned Cs, the third C, Content, heightens the virality and impact of online falsehoods disseminated on MIMS. I focus here on the format and modality through which the content (or message) is communicated instead of the content of the message. The common modalities for content disseminated via MIMS are texts, images, and audio and video files. Images wield immense communicative power: they attract attention, especially in a cluttered information environment, and provide cues to what is worth paying attention to (Soon et al., 2022). A study by Garimella and Eckles (2020) of over 2,500 misinformation images from more than 5,000 political WhatsApp groups in India uncovered a variety of images that perpetuated misinformation: old images that were taken out of context and reshared, memes (funny but misleading images containing incorrect statistics or quotes), manipulated images (e.g., deepfakes) and other types of fake images (e.g., health scares and fake alerts). Resende et al.'s (2019) study of 1,828 publicly accessible WhatsApp groups, which examined the dissemination of misinformation during two political events in Brazil (a truck drivers' strike and an election), found that images were the most popular media content shared. The images included satire, messages supporting the truck drivers and images conveying the opinions of personalities during the election campaign.

As a media-rich format, the video modality provides verdant grounds for the spread of online falsehoods. In Machado et al.'s (2019) study of Brazilian WhatsApp groups, YouTube videos made up almost 40 per cent of the more than 17,000 links shared in the groups that circulated conspiratorial content. Their findings point to how videos are an important channel for the dissemination of polarising and conspiratorial content.
Videos are particularly effective because they can be easily edited and convey information using loaded and appealing imagery.
They are also used to simulate the production of content from users as eyewitnesses of certain events (Mukherjee, 2020). Recent studies that examined audio-based misinformation (e.g., voice messages) on WhatsApp uncovered specific characteristics that render such messages more intimate, urgent and hence powerful (Kischinhevsky et al., 2020; Maros et al., 2020). Audio messages imbue a sense of urgency via calls for action and create the illusion of intimacy and credibility through the use of colloquial language and vocabulary familiar to the listener. Since their introduction in 2013, WhatsApp voice notes have grown to an average of seven billion voice messages sent every day, all of which are protected by end-to-end encryption to keep them private and secure at all times (WhatsApp, 2022a, 2022b, 2022c). Maros et al. (2020) analysed voice notes during the 2018 Brazilian elections and found that voice notes with negative and sad connotations were more popular than others. Audio messages typically follow this structure: an emotionally charged sender establishes an interpersonal relationship with the receiver and ascertains his or her credibility as an eyewitness, expert or insider, before finally delivering the misleading information and calling for action (e.g., "do me a favour, and forward this") (El-Masri et al., 2022).

Research that compared different modalities found that media-rich formats such as audio messages and videos elicit stronger responses than text. In their comparison of a fake story disseminated in text, audio and video modalities, Sundar et al. (2021) found that most people believed the video, followed by the audio message and the text message. This is because the video was perceived to be the most realistic and most credible. Conversely, media-rich formats can be leveraged to debunk falsehoods. Pasquetto et al. (2022) found that compared to text and image-based messages, audio messages generated more interest and were more effective in correcting beliefs about the falsehood stimulus.
1.4 Challenges for Practice and Research

To mitigate the spread of falsehoods, social media platforms have rolled out a slew of measures. As the most widely adopted MIMS, WhatsApp has come under the most scrutiny for the spread of online falsehoods on its platform. Since 2018, it has been introducing features that seek to flag viral messages to users and slow down the speed at which information is shared on the platform. In response to criticisms of how the platform was used to spread malicious and false information, WhatsApp introduced a "forwarded" label in 2018. The label, which applies to text, images, audio and video messages, indicates when a message has been forwarded to someone from another user (WhatsApp, 2018). Subsequently, in 2020, a feature was introduced whereby users are only able to forward "highly forwarded" messages to one chat (person or group) at a time. These messages are marked by double arrows, indicating that they have been forwarded more than five times and did not originate from a close contact (Chaturvedi, 2020; WhatsApp, 2022a, 2022b, 2022c). Most recently, WhatsApp announced that it will further restrict the forwarding limit for messages that have
already been forwarded once, where "any message that has already been forwarded once will only be allowed to be forwarded to one group at a time instead of five" (WhatsApp, 2022a, 2022b, 2022c). The objective of such measures is to create more friction in message transmission and greater intentionality among users who want to share a message with multiple recipients. On Telegram, where there are public channels, public groups and public bots, the platform regulates content by moderating publicly viewable materials. It suspends the accounts of creators of public groups and channels when they violate the platform's Terms of Service (Badiei, 2022).

However, there are several limitations to these measures. Research has shown that while adding a forwarded label may increase awareness that a message may be less credible, these labels may sometimes backfire because different groups of people interpret them differently (Tandoc et al., 2022). Moreover, the label only slows down the spread of false news; the message remains in the ecosystem and continues to be passed on (de Freitas Melo et al., 2019). In the case of content moderation on Telegram, Badiei (2022) noted that Telegram's brief Terms of Service forbid the promotion of violence on public channels but do not mention anything about promoting violence on private channels or groups; its approach to handling takedown requests is also ambiguous. Telegram will not block "alternative opinions", but there is no clear definition of what constitutes such speech.

As mentioned earlier, governments, practitioners and civil society actors such as fact checkers have been trying to manage the problem through different measures. However, end-to-end encryption makes it difficult for stakeholders to detect falsehoods that are being shared and to determine the scale of dissemination (Rossini et al., 2021). Researchers face additional barriers such as ethical considerations: chat group members may not know that their messages are being monitored, and researchers hence risk violating their privacy (Barbosa & Milan, 2019; Garimella & Eckles, 2020). In addition, the different message formats pose a challenge to data analysis. For instance, annotating misinformation in image form requires expertise in fact checking and contextual knowledge (e.g., an image can be misinformation or not depending on the context in which it was shared and the time when it was shared) (Garimella & Eckles, 2020). Digitally manipulated images add to the problem, given the difficulties in identifying photoshopped images and the proliferation of memes, which are manipulated images but not necessarily falsehoods (Garimella & Eckles, 2020; Nightingale et al., 2017).
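To make the friction effect of the forwarding caps discussed at the start of this section concrete, the toy simulation below compares how far a message spreads across a set of hypothetical chat groups under a cap of five chats versus one chat. This is an illustrative sketch only, loosely in the spirit of the simulation approach used by de Freitas Melo et al. (2019); it is not WhatsApp's actual mechanism, and the group sizes, forwarding probabilities and other parameters are arbitrary assumptions.

```python
# Toy model of message diffusion under a per-user forwarding cap.
# All parameters are illustrative assumptions, not empirical values.
import random


def groups_reached(forward_cap: int, steps: int = 6, group_size: int = 50,
                   n_groups: int = 1000, p_forward: float = 0.02,
                   seed: int = 42) -> int:
    """Count how many of n_groups a message reaches within `steps` rounds."""
    rng = random.Random(seed)
    reached = {0}  # group 0 originates the message
    for _ in range(steps):
        new = set()
        for _ in list(reached):
            # Each member of a reached group forwards with probability
            # p_forward, to at most `forward_cap` other groups.
            forwarders = sum(rng.random() < p_forward for _ in range(group_size))
            for _ in range(forwarders * forward_cap):
                new.add(rng.randrange(n_groups))  # random target group
        reached |= new
    return len(reached)


for cap in (5, 1):  # WhatsApp cut the limit for highly forwarded messages from 5 chats to 1
    print(f"cap={cap}: reached {groups_reached(cap)} of 1000 groups in 6 rounds")
```

Even in this crude model, cutting the cap from five to one sharply slows diffusion, which is the friction such measures aim to create; the message nonetheless keeps circulating, consistent with the observation above that a forwarded message remains in the ecosystem.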
1.5 Structure of the Book

This edited volume seeks to plug existing gaps relating to research and interventions in falsehoods management on MIMS. It does so by bringing together works by both scholars and practitioners focusing on different MIMS (i.e., WhatsApp, Telegram, LINE and WeChat) in a diverse range of Asian countries. The first section of the volume examines the latest developments and emerging trends pertaining to the spread of falsehoods on MIMS. This section seeks to shed light on critical dynamics
that underpin the landscape of online falsehoods: How do people interact with falsehoods? What role do MIMS play in the information ecology, and what makes some people more vulnerable to being ensnared in online falsehoods? What conditions at the social and institutional level exacerbate the problem of online falsehoods? Given that the region, as with the rest of the world, was coping with the COVID-19 pandemic at the time of writing, the pandemic provided the natural context for study. Chapter 2 focuses on the case of Indonesia, where Engelbertus Wendratama and Iwan Awaluddin Yusuf examine Indonesians' news-seeking behaviours during the pandemic and the role of WhatsApp in the information ecology. Their study identifies the authoritative sources (physicians, academics and public officials) who play a critical part in correcting online falsehoods.

In Chapter 3, Swati Maheshwari and Ang Peng Hwa focus on an elusive group: the vaccine-hesitant. Their analysis of Singapore-based Telegram chat groups uncovers the information consumption habits and motivations of the vaccine-hesitant. The information diet of this group comprises social media (e.g., Facebook, Twitter, YouTube, Instagram, WhatsApp and Telegram) and right-wing and partisan media in the US, indicating information-seeking of a political nature. Their survey data explicates the allure of a MIMS like Telegram for people who feel marginalised and misunderstood: it provides the vaccine-hesitant with elusive security as they interact with like-minded others in a private and uncensored environment.

In his analysis of the rumour ecology on WeChat in Chapter 4, Hou Mingyi examines rumour content as well as those involved in debunking the rumours. He argues that while WeChat's active management of rumours leverages collective intelligence, it fails to change the incentive structure of rumour creation and dissemination and ignores the replicability of the rumours. Also looking at WeChat, but focusing on the Chinese diaspora living in Australia, Anne Kruger, Esther Chan and Stevie Zhang from First Draft show how the community is vulnerable to both misinformation and disinformation. First Draft's monitoring of WeChat, documented in Chapter 5, unearths covert and overt operations by both Chinese state actors and organisations that oppose the Chinese Communist Party to influence ethnic Chinese in Australia. Addressing the role of the state in Chapter 6, Pijitra Suppasawatgul details how the government in Thailand combated online falsehoods during the pandemic. However, the trust gap between citizens and the government impedes the government's efforts to tackle both pandemic-related falsehoods and the pandemic itself.

The second section of this volume focuses on methodological work that seeks to solve the puzzle of online falsehoods on MIMS and their impact in different domains. As discussed earlier, the encrypted nature of most MIMS platforms poses significant challenges for research. The contributing authors to this section deploy a range of techniques to collect data and monitor the goings-on in closed groups. In Chapter 7, Shawn Goh adopts a sociotechnical analytical lens and proposes three research directions. He argues that research should focus on "mobile-first, mobile-centric" users, super-apps in Asia and the efficacy of legislative tools. MIMS provide a rich social context for the spread of falsehoods, but how do social ties come into play when debunking falsehoods?
Through an experiment, Edson Tandoc, James Lee and colleagues in Chapter 8 seek to unravel the dynamics in interpersonal and group
chats on MIMS. Guided by social identity theory and social presence theory, they study the impact of source familiarity and mode of delivery on the perceived credibility of a correction message sent on WhatsApp.

Empirical work has provided us with some insights on the perpetrators and recipients of online falsehoods (e.g., their traits and motivations). In Chapter 9, Chew Han Ei and Chong Yen Kiat focus on the content and means of transmission of vaccine misinformation shared in a Singapore-based Telegram group. Through topic modelling and sentiment analysis of 130,000 messages shared among 3,983 unique users over a four-month period, they expose the characteristics of the chat group. Their study informs us whom the vaccine-hesitant see as authority figures and sources, and shows how people's fears and insecurities are exploited. Their study yields findings with implications for public education efforts.

Comparisons are often made between open communication networks on social media and closed networks on MIMS, particularly in terms of how they allow for intervention. However, cross-platform analysis is scarce. In Chapter 10, Brenna Davidson and Tetsuro Kobayashi take on the task of comparing the dissemination of political rumours on WhatsApp and Facebook and the impact of rumour communication on affective polarisation. While they found that engaging in political rumour communication on closed and private networks, such as WhatsApp, exacerbated affective polarisation across partisan lines, the effect was not equal among users. Addressing the potential of chatbots in fact checking, Gionnieve Lim and Simon Perrault in Chapter 11 seek to ascertain the effectiveness of fact checking chatbots and the effect of trust in three different fact checkers (i.e., Government, News Outlets and Artificial Intelligence) on users' trust in the chatbots. Their study points to a silver lining, but they also inject a dose of caution. There is overall support for the chatbot intervention, which bodes well for the design of interventions to tackle online falsehoods on MIMS. However, the mixed results for the different fact checkers hold significant implications for intervention design in different institutional contexts.

The final section of the volume is dedicated to ongoing initiatives in the region to counter online falsehoods on MIMS, the challenges encountered and the lessons learnt. As mentioned at the beginning of this chapter, governments have adopted different measures to tackle the problem of online falsehoods, of which legislation is a common tool. The first two chapters in this section examine the efficacy and pitfalls of legislation. In Chapter 12, Netina Tan and Rebecca Denyer examine the enforcement and effects of falsehood regulations on democratic freedoms in nine countries in Southeast Asia. They compare laws governing online falsehoods on MIMS and other social media, and their enforcement, across the region. They illustrate how abuses can happen and argue that if the expansion of the state's policing power is left unchecked, democratic backsliding in the post-pandemic era will result. Chapter 13 focuses on the Malaysian case. Bahiyah Omar discusses the initiatives implemented by the government to manage online falsehoods on MIMS. Laws place legal liability on individuals who create and disseminate false information, while the fact checking portal Sebenarnya.my encourages Malaysians to verify news before sharing. However, given their limitations, more needs to be done
to improve the standard of Malaysian media and people's resilience through digital literacy programmes.

In Chapter 14, Tarunima Prabhakar from Tattle Civic Tech and her collaborators, Aditya Mudgal and Denny George, call for a paradigm shift in approaching MIMS. They situate MIMS at the intersection of two distinct technical developments, mobile telephony-based text messaging and instant messaging services, and argue that any countermeasures must take into account the unique technical and social affordances of MIMS. Lucy Butler and Ullrich Ecker in Chapter 15 draw insights from cognitive, social and political psychology to dissect the dynamics underpinning misinformation belief and the continued influence effect of corrected misinformation. They show how the unique nature of online information acquisition creates an environment that is not only ideal for misinformation spread but also inhibits accurate knowledge revision, and they recommend specific interventions moving forward. In the final chapter, Chen-Ling Hung, Shih-Hung Lo and Yuan-Hui Hu analyse how a tripartite partnership involving the government, technology companies and civil society can help combat online falsehoods. Using LINE, the most popular MIMS in Taiwan, as a case study, they show how the key players within the local information ecosystem organise themselves to respond to the challenge and suggest lessons such a tripartite partnership might hold for other countries.

Compared to research published on online falsehoods on open communication platforms such as Facebook, Twitter and YouTube, the body of research on online falsehoods on MIMS remains small. Research on the dynamics between MIMS and online falsehoods in Asia is even scarcer, despite thriving efforts by educators and ground-up organisations to promote digital literacy and fact checking. As MIMS play a bigger role in information sharing, news dissemination, collective organisation and political campaigning, this volume stands on the shoulders of giants and seeks to guide ongoing work in research and practice in the Asian context.
References

Apoorvanand. (2020, April 18). How the coronavirus outbreak in India was blamed on Muslims. Al Jazeera. https://www.aljazeera.com/opinions/2020/4/18/how-the-coronavirus-outbreak-in-india-was-blamed-on-muslims

Badiei, F. (2022). The tale of Telegram governance: When the rule of thumb fails. Yale Law School, The Justice Collaboratory. https://law.yale.edu/sites/default/files/area/center/justice/document/telegram-governance-publish.pdf

Barbosa, S., & Milan, S. (2019). Do not harm in private chat apps: Ethical issues for research on and with WhatsApp. Westminster Papers in Communication and Culture, 14(1), 49–65. https://doi.org/10.16997/wpcc.313

Baudoin-Laarman, L. (2022, March 14). Lacking oversight, Telegram thrives in Ukraine disinformation battle. The Citizen. https://www.thecitizen.co.tz/tanzania/oped/lacking-oversight-telegram-thrives-in-ukraine-disinformation-battle-3746840

Chaturvedi, A. (2020, April 7). Covid fallout: WhatsApp changes limit on forwarded messages, users can send only 1 chat at a time. The Economic Times.

Coleman, K. (2021, January 25). Introducing Birdwatch, a community-based approach to misinformation. Twitter Blog. https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation

Davies, A., & Kuang, W. (2022, May 9). Unattributed attack ads targeting Labor on Chinese-language WeChat fuel fears of election misinformation. The Guardian. https://www.theguardian.com/australia-news/2022/may/09/unattributed-attack-ads-targeting-labor-on-chinese-language-wechat-fuel-fears-of-misinformation

de Freitas Melo, P., Vieira, C. C., Garimella, K., de Melo, P. O. V., & Benevenuto, F. (2019). Can WhatsApp counter misinformation by limiting message forwarding? 1–12. https://doi.org/10.48550/arXiv.1909.08740

Dixon, S. (2022, July 27). Most popular messaging apps 2022. Statista. https://www.statista.com/statistics/258749/most-popular-global-mobile-messenger-apps/

El-Masri, A., Riedl, M. J., & Woolley, S. (2022). Audio misinformation on WhatsApp: A case study from Lebanon. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-102

Garimella, K., & Eckles, D. (2020). Images and misinformation in political groups: Evidence from WhatsApp in India. Harvard Kennedy School (HKS) Misinformation Review, 1–12. https://doi.org/10.37016/mr-2020-030

Holroyd, M. (2022, March 15). Debunking the most viral misinformation about Russia's war in Ukraine. Euronews. https://www.euronews.com/my-europe/2022/03/15/debunking-the-viral-misinformation-about-russia-s-war-in-ukraine-that-is-still-being-share

Hsu, H. (2018, October 8). The rise and fall of affirmative action. The New Yorker. https://www.newyorker.com/magazine/2018/10/15/the-rise-and-fall-of-affirmative-action

Islam, M. S., Sarkar, T., Khan, S. H., Mostofa Kamal, A., Hasan, S. M. M., Kabir, A., Yeasmin, D., Islam, M. A., Amin Chowdhury, K. I., Anwar, K. S., Chughtai, A. A., & Seale, H. (2020). COVID-19–related infodemic and its impact on public health: A global social media analysis. The American Journal of Tropical Medicine and Hygiene, 103(4), 1621–1629. https://doi.org/10.4269/ajtmh.20-0812

Kim, H. K., & Tandoc, E. C., Jr. (2022). Consequences of online misinformation on COVID-19: Two potential pathways and disparity by eHealth literacy. Frontiers in Psychology, 13, 783909. https://doi.org/10.3389/fpsyg.2022.783909

Kischinhevsky, M., Vieira, I. M., dos Santos, J. G. B., Chagas, V., Freitas, M. d. A., & Aldé, A. (2020). WhatsApp audios and the remediation of radio: Disinformation in Brazilian 2018 presidential election. Radio Journal, 18(2), 139–158. https://doi.org/10.1386/rjao_00021_1

Machado, C., Kira, B., Narayanan, V., Kollanyi, B., & Howard, P. (2019). A study of misinformation in WhatsApp groups with a focus on the Brazilian presidential elections. In Conference Proceedings of the 2019 World Wide Web Conference (pp. 1013–1019). https://doi.org/10.1145/3308560.3316738

Madrigal, A. C. (2012, October 12). Dark social: We have the whole history of the web wrong. The Atlantic. https://www.theatlantic.com/technology/archive/2012/10/dark-social-we-have-the-whole-history-of-the-web-wrong/263523/

Maros, A., Almeida, J., Benevenuto, F., & Vasconcelos, M. (2020). Analyzing the use of audio messages in WhatsApp groups. In Conference Proceedings of the 2020 World Wide Web Conference (pp. 3005–3011). https://doi.org/10.1145/3366423.3380070

Mukherjee, R. (2020). Mobile witnessing on WhatsApp: Vigilante virality and the anatomy of mob lynching. South Asian Popular Culture, 18(1), 79–101. https://doi.org/10.1080/14746689.2020.1736810

Newman, N. (2022, June 15). Overview and key findings of the 2022 Digital News Report. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022/dnr-executive-summary

Nightingale, S. J., Wade, K. A., & Watson, D. G. (2017). Can people identify original and manipulated photos of real-world scenes? Cognitive Research: Principles and Implications, 2(1), 1–21. https://doi.org/10.1186/s41235-017-0067-2

Nizaruddin, F. (2021). Role of public WhatsApp groups within the Hindutva ecosystem of hate and narratives of "CoronaJihad." International Journal of Communication, 15, 1102–1119.

Pañares, J. (2021, August 12). Fighting fake news in Southeast Asia. Institute for War and Peace Reporting. https://iwpr.net/impact/fighting-fake-news-southeast-asia

Pasquetto, I. V., Jahani, E., Atreja, S., & Baum, M. (2022). Social debunking of misinformation on WhatsApp: The case for strong and in-group ties. Proceedings of the ACM on Human-Computer Interaction, 6(1), 1–35. https://doi.org/10.1145/3512964

Resende, G., Melo, P., Sousa, H., Messias, J., Vasconcelos, M., Almeida, J., & Benevenuto, F. (2019). (Mis)information dissemination in WhatsApp: Gathering, analyzing and countermeasures. In Conference Proceedings of the 2019 World Wide Web Conference (pp. 818–828). https://doi.org/10.1145/3308558.3313688

Rossini, P., Stromer-Galley, J., Baptista, E. A., & Veiga de Oliveira, V. (2021). Dysfunctional information sharing on WhatsApp and Facebook: The role of political talk, cross-cutting exposure and social corrections. New Media & Society, 23(8), 2430–2451. https://doi.org/10.1177/1461444820928059

Simonite, T. (2022, March 17). A Zelensky deepfake was quickly defeated. The next one might not be. Wired. https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/

Soon, C., & Goh, S. (2021). Singaporeans' susceptibility to false information. IPS Exchange Series, No. 19. https://lkyspp.nus.edu.sg/docs/default-source/ips/ips-study-on-singaporeans-and-false-information_phase-1_report.pdf

Soon, C., Goh, S., & Bala Krishnan, N. (2022). Study on Singaporeans and false information, phase 2 and phase 3: Immunity and intervention. https://lkyspp.nus.edu.sg/docs/default-source/ips/ips-study-on-singaporeans-and-false-information-phase-two-and-phase-three.pdf

Sundar, S. S., Molina, M. D., & Cho, E. (2021). Seeing is believing: Is video modality more powerful in spreading fake news via online messaging apps? Journal of Computer-Mediated Communication, 26(6), 301–319. https://doi.org/10.1093/jcmc/zmab010

Tandoc, E., Jr., Rosenthal, S., Yeo, J., Ong, Z., Yang, T., Malik, S., Ou, M., Zhou, Y., Zheng, J., Mohamed, H., Tan, J., Lau, Z., & Lim, J. (2022). Moving forward against misinformation or stepping back? WhatsApp's forwarded tag as an electronically relayed information cue. International Journal of Communication, 16, 1852–1866.

Telegram. (2022). Telegram FAQ. https://telegram.org/faq

The Sentinel Project Staff. (2021, September 30). The Sentinel Project's blog series on misinformation, hate speech, and violence. https://thesentinelproject.org/2021/09/30/the-sentinel-projects-blog-series-on-misinformation-hate-speech-and-violence/

The Wire Staff. (2017, May 22). Two arrests, protests follow after WhatsApp rumours lead to lynching of seven in Jharkhand. The Wire. https://thewire.in/politics/whatsapp-message-turns-tribals-violent-leaves-seven-dead

Umali, T. (2019, February 24). Philippines' cybersecurity platform to be rolled out. OpenGov Asia. https://opengovasia.com/development-of-philippines-cybersecurity-platform-awarded-to-joint-venture/

Wardle, C., & Derakhshan, H. (2017, October 31). One year on, we're still not recognizing the complexity of information disorder online. First Draft. https://firstdraftnews.org/latest/coe_infodisorder/

WhatsApp. (2018, July 10). Labelling forwarded messages. https://blog.whatsapp.com/labeling-forwarded-messages

WhatsApp. (2022a). How to make a group video call. WhatsApp Help Center. https://faq.whatsapp.com/226697115160417/?cms_platform=android&locale=en_US

WhatsApp. (2022b, March 30). We're making voice messages even better. https://blog.whatsapp.com/making-voice-messages-better

WhatsApp. (2022c). About forward limits. WhatsApp Help Center. https://faq.whatsapp.com/759286787919153/?locale=en_US
Part I
Trends in the Proliferation of Online Falsehoods and MIMS
Chapter 2
COVID-19 Falsehoods on WhatsApp: Challenges and Opportunities in Indonesia
Engelbertus Wendratama and Iwan Awaluddin Yusuf
Abstract The propagation of online falsehoods on COVID-19 has been of great concern in Indonesia, particularly those disseminated on WhatsApp, the second most popular social networking site in the country with more than two million monthly active users. This chapter explores how Indonesians engaged with the information ecosystem. This exploratory study found that WhatsApp was among the top three sources Indonesians used to obtain information relating to the pandemic and that corrections from physicians, academics and public officials were more likely to be believed by Indonesians. Keywords Online falsehoods · COVID-19 falsehoods · Indonesia · MIMS · WhatsApp
2.1 Background

The propagation of online falsehoods has been of great concern in Indonesia, mainly those encountered on WhatsApp, the second most popular social networking site in the country after YouTube (Statista, 2021b), with two million monthly active users. This makes WhatsApp more popular than other mobile instant messaging services (MIMS) such as Telegram, LINE and Facebook Messenger in the country. Its popularity stems from being user-friendly, having no advertising and not taking up much storage space on mobile devices (Kurnia et al., 2020).

Online falsehoods on various media, such as blogs, websites that resemble popular news sites and social media platforms, were first detected during the 2014 presidential
election (Bata, 2019). In the following years, studies showed that online falsehoods on the internet spiked during political events such as the 2017 Jakarta gubernatorial election, the 2019 general election and the 2020 regional elections (Utami, 2018; Yusuf, 2020). Utami (2018) found that falsehoods spread during elections attacked political candidates, such as those claiming that a particular candidate was fraudulent or not qualified to be a leader.

As the government and technology companies continue to tackle the spread of online falsehoods on open social media platforms, websites and print media, WhatsApp, with its end-to-end encryption feature, has become a popular conduit for the spread of falsehoods. A survey conducted by Kurnia et al. (2020) of 1,250 Indonesian women from five cities (Jakarta, Yogyakarta, Banda Aceh, Makassar and Jayapura) of different ages and educational backgrounds found that they encountered falsehoods and hate speech most frequently on WhatsApp. The most common types of falsehoods respondents encountered were those relating to politics. For example, President Joko Widodo was accused of being anti-Islam during the 2019 presidential campaign; this falsehood was widely spread in WhatsApp groups (Kurnia et al., 2020).

However, since the COVID-19 pandemic began in Indonesia in February 2020 (Nurita, 2022), falsehoods relating to the pandemic have become the most circulated falsehoods on the internet, with WhatsApp being ranked the most-used medium for circulation (Liputan 6, 2020). COVID-19 falsehoods spread swiftly and widely on WhatsApp and other social networking sites,¹ as people wanted to inform others about the dangers of COVID-19 (Arika, 2020).² However, such falsehoods hampered pandemic management efforts in the country. For example, a young man attacked an ambulance carrying a COVID-19 patient because he believed a falsehood, widely circulated on WhatsApp, claiming that empty ambulances were driven around to create panic among people and get them to obey the large-scale social restrictions implemented by the government (Kompas TV, 2021). COVID-19 falsehoods jeopardised government efforts to control the pandemic, as COVID-19 cases reached their peak on 24 July 2021 with 574,135 active cases (Situmorang, 2021).

The vast majority of falsehoods shared on WhatsApp in Indonesia during the pandemic were disseminated in the form of simple texts and memes (Wendratama, 2020). The falsehoods were also presented as laypeople's testimonies, which circulated more than non-testimonial falsehoods on WhatsApp (Sonjo Jogja, 2021). These misleading personal testimonies (Sonjo Jogja, 2021) added to the common formats of falsehoods, such as edited images with personal comments, screen captures of fake news sites combined with personal comments, and deceptive statements combined with unrelated or misinterpreted photos or videos.
1 The Indonesian Ministry of Communication and Informatics had, by April 2021, recorded 1,556 online falsehoods on COVID-19 and 177 online falsehoods on COVID-19 vaccines (Agustini, 2021).
2 This news story gave examples of how Indonesians, with the good intention of sharing urgent information with others, ran afoul of the police, because it was a crime to publish false information that created public panic.
In view of the above developments, this chapter explores whom people trust most when it comes to correcting falsehoods, their opinions on what the authorities should do to combat falsehoods and their preferred formats for official information. As the spread of COVID-19 falsehoods on MIMS in the Indonesian context remains under-researched, this chapter aims to contribute to knowledge in this area, thereby creating a baseline for further studies on mobile communication and falsehoods during the pandemic.
2.2 Methodology

Online surveys are gaining in popularity because of their lower costs and growing internet usage. However, online surveys may not reach the entire population in a country like Indonesia, where some communities do not have internet access; as a result, the data collected might be skewed towards certain demographic profiles. Nevertheless, as internet access continues to grow in Indonesia, this issue is becoming less of a concern. As of February 2022, internet users in the country numbered 204.7 million, or 73.7 per cent of the population (We Are Social & Kepios, 2022). This exploratory study used an online survey based on non-probabilistic sampling. The survey was developed using Google Forms and the responses were collated in an online spreadsheet accessible only to the researchers. We did not intend to capture all the characteristics and diversity of Indonesia's population, which comprises more than 270 million people dispersed across thousands of islands. The questionnaire was distributed to people residing in three regions—western, central and eastern Indonesia. The survey was conducted between 20 September and 31 October 2022; the questionnaire was posted on social media platforms, on WhatsApp and to a mailing list of Indonesia Endowment Fund for Education scholarship awardees. The final sample comprised 443 respondents. To protect respondents' privacy, the data was anonymised.
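To make the collation and tabulation steps concrete, the following minimal sketch (ours, not the authors') shows how anonymised Google Forms responses exported to a CSV file could be summarised with descriptive statistics. The file name and column names are hypothetical assumptions for illustration only.

```python
import pandas as pd

# Hypothetical export of the Google Forms responses; the file name and
# column names are illustrative, not the authors' actual instrument.
df = pd.read_csv("survey_responses.csv")

# Anonymise: drop direct identifiers (if present) before any analysis.
df = df.drop(columns=["email", "name"], errors="ignore")

# Descriptive statistics for a single-choice item, e.g. the main source
# of COVID-19 information; one answer per respondent, so shares sum to 100%.
counts = df["main_covid_info_source"].value_counts()
shares = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": shares}))
```

Run against the 443 responses described above, such a script would produce the kind of frequency-and-percentage breakdown reported in Table 2.1 below.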
2.3 Findings and Discussion

2.3.1 Respondents' Demographics

Of the 443 respondents, 56 per cent were female and 43.3 per cent were male (the remaining 0.7 per cent did not specify their gender). The largest proportion of respondents were aged 36 to 45 years (34.8 per cent), followed by those aged 17 to 25 (27.1 per cent), 31 to 35 (15.6 per cent), 26 to 30 (12 per cent), 46 to 55 (8.1 per cent) and 56 to 65 (1.4 per cent). Almost half of the respondents (43.6 per cent) had a bachelor's degree, followed by
those with a high school degree (25.7 per cent), a master’s degree (25.5 per cent), an associate degree (2.5 per cent) and a doctoral degree (2.1 per cent).
2.3.2 Media and Information Access Relating to COVID-19

Our survey found that among the respondents, the most popular source of information on COVID-19 was mainstream news sites (28.4 per cent), followed by Instagram (18.5 per cent), WhatsApp (16.5 per cent), television (16 per cent), Twitter (6.8 per cent) and Facebook (5.6 per cent). See Table 2.1. The popularity of mainstream news sites, such as detik.com, kompas.com and cnnindonesia.com, could be attributed to their comprehensive reporting on the pandemic. They provided updates on COVID-19 transmission and preventive measures, and the latest government regulations to combat the pandemic. Based on the Reuters Institute's Digital News Report 2022, detik.com was Indonesia's most popular news website, accessed by 65 per cent of 2,068 Indonesian respondents in January/February 2022, ahead of kompas.com (48 per cent) and cnnindonesia.com (35 per cent) (Reuters Institute, 2022). A study by Rianto and Setiawati (2021) found that mainstream news sites such as tempo.co and tirto.id played an important role in fighting the spread of COVID-19 falsehoods; they conducted fact-checking and collaborated with Mafindo, the largest fact-checking network in the country (https://www.mafindo.or.id/). These two factors could explain why Indonesians turned to news sites for COVID-19 information: compared with other sources on social media platforms, they could easily access statements and data from public officials and physicians (Mellado et al., 2021). The public's reliance on news media coverage for accurate information increases during times of uncertainty and crisis (De Coninck et al., 2020). In Indonesia, in the face of falsehoods circulated on social media platforms, people turn to news sites that provide relatively more trusted information produced via the gatekeeping process.
Table 2.1 Sources used by respondents to access information on COVID-19

Medium                  n     %
Mainstream news sites   126   28.4
Instagram               82    18.5
WhatsApp                73    16.5
Television              71    16.0
Twitter                 30    6.8
Facebook                25    5.6
Others                  23    5.2
Print media             13    2.9
Total                   443   100
The second most popular source of information on the pandemic among the respondents was Instagram, which could be attributed to the visually attractive presentation of health information on the platform (Wong et al., 2019). In a country with 94 million Instagram users (Statista, 2021a), a number of Instagram accounts were offering updated COVID-19 information within a few months of the start of the pandemic, and some amassed a large number of followers. One of them, @pandemictalks, was created on 12 April 2020 and had 403,000 followers as of 17 June 2021. Instagram accounts owned by physicians also gained popularity during the pandemic, for example, @dr.tirta with 2.4 million followers, @adampbrata with 408,000 followers and @drningz with 157,000 followers as of 17 June 2021. Through their accounts, these doctors actively debunked COVID-19 falsehoods and communicated knowledge and tips in everyday language. They used attractive graphics to explain how people could respond to the threats posed by the pandemic. They also conducted live chats with public figures and their followers and answered frequently asked questions on the pandemic. The government's COVID-19 task force also disseminated information to educate the public on the pandemic through its official Instagram account, @satgasCOVID19.id. The survey also asked the respondents how frequently they encountered information from different sources that they thought could be false (i.e., potential falsehoods). Our study found that WhatsApp was the most common source of potential falsehoods (46.5 per cent), followed by news sites (18.7 per cent), Facebook (11.3 per cent), Instagram (6.8 per cent), television (6.8 per cent), others (5.6 per cent), Twitter (3.2 per cent) and print media (0.9 per cent).
2.3.3 Verification of Falsehoods

When asked if they verified the potential falsehoods they encountered, the majority of the respondents said they did (78.6 per cent). For those who performed verification, the most popular method was using search engines such as Google and Yahoo (60.9 per cent), followed by asking a credible person privately, either face-to-face or online (24.1 per cent), asking openly on social media platforms (12.6 per cent) and asking people in WhatsApp groups (2.3 per cent). Among those who did not verify potential falsehoods (21.4 per cent of the respondents), the most commonly cited reason was that they could make their own decision on the veracity of the information (47.4 per cent), followed by not having the time to verify (27.4 per cent), being too lazy to verify (21.1 per cent), feeling it was not important to verify (12.6 per cent) and having internet connectivity problems (11.6 per cent).3
3 The percentages added up to more than 100 per cent because respondents were allowed to choose more than one reason.
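Footnote 3's point, that shares can exceed 100 per cent, follows directly from how multi-select items are tabulated: each option is counted against the full base of respondents rather than the total number of ticks. A minimal sketch (ours, with made-up option labels):

```python
import pandas as pd

# Each respondent may tick several reasons for not verifying, so answers
# are stored as one list per respondent (labels are illustrative).
responses = pd.Series([
    ["own judgement", "no time"],
    ["own judgement"],
    ["lazy", "no time", "connectivity problems"],
    ["own judgement", "not important"],
])

n = len(responses)  # the base is respondents, not individual ticks
shares = (responses.explode().value_counts() / n * 100).round(1)
print(shares)  # these shares sum to 200%, not 100%, in this toy example
```

Because every extra tick adds to some option's count while the base stays fixed at the number of respondents, the column of percentages has no reason to sum to 100.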
The respondents were also asked on which platform they had received COVID-19 information that they later found out to be false. WhatsApp ranked as the number one platform (56.4 per cent), followed by Facebook (15.3 per cent). Our study confirms findings from existing research that WhatsApp groups, such as friends' and school alumni groups, are conduits for the sharing of falsehoods (Kurnia et al., 2020).4 In addition, the study found that most respondents had received at least one verification message on WhatsApp regarding a COVID-19 falsehood (55.8 per cent), while 44.2 per cent of the respondents had not received any such verification on WhatsApp. This finding highlights the need for more verification messages produced by fact-checking organisations and public authorities to be distributed on WhatsApp to counter the spread of falsehoods on the platform. The verification messages produced by fact-checkers and public authorities could be shared by WhatsApp users and serve as a form of social correction (Kligler-Vilenchik, 2021). In other words, WhatsApp, with advantages such as its personal nature and closeness to users' daily lives, has enormous potential to counter falsehoods. The respondents were also asked who their most trusted sources of information were for matters relating to COVID-19. The sources they trusted most were physicians and health experts (72.5 per cent), followed by anyone who could provide them with reliable information (49 per cent), academics (18.2 per cent), public officials (11.7 per cent), religious figures (6.5 per cent) and public figures (e.g., celebrities) prominent in the mainstream news media (3.6 per cent).5 This presents valuable opportunities for stakeholders, especially healthcare organisations and public health authorities, to play a bigger role in the information ecosystem. As mentioned earlier, physician-owned Instagram accounts have gained popularity among Indonesians. Thus, healthcare organisations and public health authorities could leverage social media to reach a wider public.
2.3.4 Role of Government in Combating COVID-19 Falsehoods

While scientists and researchers work on treatments and vaccinations, it is critical that governments and national and international health organisations collaborate to combat the transmission of falsehoods and misleading information about COVID-19. When asked whether they thought the Indonesian government had made an adequate effort to
4 The survey, involving 1,250 Indonesian women, found that they received misinformation most frequently from friends' and school alumni WhatsApp groups, followed by family groups and direct messages on WhatsApp.
5 The percentages added up to more than 100 per cent because respondents were allowed to choose more than one source.
combat COVID-19 falsehoods, most of the respondents (70.9 per cent) thought that it had not. According to them, if the government were to improve its efforts in combating COVID-19 falsehoods, the best medium to do so was television (36.6 per cent), followed by news sites (23.3 per cent), WhatsApp (13.8 per cent), others (8.1 per cent), Instagram (7.2 per cent), print media (3.8 per cent), Facebook (2.7 per cent) and Twitter (2.3 per cent). In general, television is considered an authoritative source of information in Indonesia (Kamil, 2020). Many issues that go viral or trend on social media become legitimised as important public interest issues when spotlighted on television (Christin et al., 2021; Muslikhin, 2017). The respondents' most recommended message format for the government to combat COVID-19 falsehoods was the combination of text, digital poster and video (80.8 per cent), followed by video (18.7 per cent), digital poster (14.4 per cent) and text (14.4 per cent).6 This indicates that the government needs to take a more multimedia approach in its communication with the public, particularly on social media platforms and MIMS, where multimedia messages are easily accessed by Indonesians. Recent responses by governments in a number of countries (e.g., the US, UK, Japan and Australia) have been studied by Freckelton QC (2020), who concluded that health information has to be communicated in a way that inspires calm and responsible conduct without creating irrational fears.
2.4 Conclusion

While the issue of online falsehoods has been widely studied and discussed, the spread of COVID-19 falsehoods on MIMS in the Indonesian context remains under-researched. This chapter sheds light on the problem. The spread of online falsehoods on WhatsApp presents both challenges and opportunities in Indonesia. In countering COVID-19 falsehoods, this study found that corrections and health-related statements were more likely to be believed by Indonesians when they were shared by physicians, academics and public officials. Thus, information campaigns and debunking efforts could leverage these key stakeholders for wider reach and efficacy. This study also found that mainstream media, in particular news sites, served as popular sources of information relating to the pandemic in Indonesia. Although social networking sites and MIMS such as WhatsApp provide a more intimate communication experience, news media outlets still play an important role in the information ecosystem. This points to extensive opportunities for news media organisations to meet people's information needs during a health crisis.
6 The percentages added up to more than 100 per cent because respondents were allowed to choose more than one format.
References

Agustini, P. (2021, May 3). Kominfo catat 1.733 hoaks Covid-19 dan vaksin. Kementerian Komunikasi dan Informatika. https://aptika.kominfo.go.id/2021/05/kominfo-catat-1-733-hoaks-covid-19-dan-vaksin/
Arika, Y. (2020, April 21). Mengapa hoaks cepat menyebar? Kompas. https://www.kompas.id/baca/dikbud/2020/04/21/mengapa-hoaks-cepat-menyebar
Bata, A. (2019, January 6). Hoax di tahun politik. Beritasatu. https://www.beritasatu.com/archive/531211/hoax-di-tahun-politik
Christin, M., Yudhaswara, R. M., & Hidayat, D. (2021). Deskripsi pengalaman perilaku selektif memilih informasi di masa pandemi Covid-19 pada media massa televisi. Jurnal Penelitian Komunikasi dan Opini Publik, 25(1), 61–73.
De Coninck, D., d'Haenens, L., & Matthijs, K. (2020). Forgotten key players in public health: News media as agents of information and persuasion during the COVID-19 pandemic. Public Health, 183, 65–66. https://doi.org/10.1016/j.puhe.2020.05.011
Freckelton QC, I. (2020). COVID-19: Fear, quackery, false representations and the law. International Journal of Law and Psychiatry, 72, 101611. https://doi.org/10.1016/j.ijlp.2020.101611
Kamil, I. (2020, July 22). KPI: 89 persen masyarakat lebih percaya televisi daripada internet. Kompas.com. https://nasional.kompas.com/read/2020/07/22/20263851/kpi-89-persen-masyarakat-lebih-percaya-televisi-dibanding-internet?page=all#page2
Kligler-Vilenchik, N. (2021). Collective social correction: Addressing misinformation through group practices of information verification on WhatsApp. Digital Journalism, 300–318. https://doi.org/10.1080/21670811.2021.1972020
Kompas TV. (2021, July 21). Maraknya hoaks memperparah penyebaran Covid-19, bahkan bisa berujung fatal! [Video]. YouTube. https://www.youtube.com/watch?v=sYRaiMtaqm4
Kurnia, N., Wendratama, E., Rahayu, R., Adiputra, W. M., Syafrizal, S., Monggilo, Z. M. Z., Utomo, W. P., Indarto, E., Aprilia, M. P., & Sari, Y. A. (2020). WhatsApp group and digital literacy among Indonesian women. WhatsApp, Program Studi Magister Ilmu Komunikasi, PR2Media and Jogja Medianet.
Liputan 6. (2020, February 3). Menkominfo sebut penyebaran hoaks virus corona terbanyak lewat WhatsApp. Liputan 6. https://www.liputan6.com/news/read/4170451/menkominfo-sebut-penyebaran-hoaks-virus-corona-terbanyak-lewat-whatsapp
Mellado, C., Hallin, D., Cárcamo, L., Alfaro, R., Jackson, D., Humanes, M. L., Ramirez, M. M., Mick, J., Mothes, C., Lin, C. H., Lee, M., Alfaro, A., Isbej, J., & Ramos, A. (2021). Sourcing pandemic news: A cross-national computational analysis of mainstream media coverage of COVID-19 on Facebook, Twitter, and Instagram. Digital Journalism, 9(9), 1261–1285. https://doi.org/10.1080/21670811.2021.1942114
Muslikhin. (2017, November). Television news in the social media era (study in the newsrooms of Indosiar and SCTV) [Paper presentation]. The 1st International Conference on Social Sciences, Muhammadiyah University of Jakarta, Indonesia.
Nurita, D. (2022, March 3). 2 tahun pandemi Covid-19, ringkasan perjalanan wabah corona di Indonesia. Tempo. https://nasional.tempo.co/read/1566720/2-tahun-pandemi-covid-19-ringkasan-perjalanan-wabah-corona-di-indonesia
Reuters Institute. (2022). Digital news report 2022. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-06/Digital_News-Report_2022.pdf
Rianto, P., & Setiawati, T. (2021). The role of Indonesian mainstream media to fight against Covid-19 hoaxes. Advances in Social Science, Education and Humanities Research, 596. Atlantis Press. https://doi.org/10.2991/assehr.k.211121.053
Situmorang, H. (2021, August 12). Indonesia sudah lewati puncak Covid-19. Berita Satu. https://www.beritasatu.com/kesehatan/813121/indonesia-sudah-lewati-puncak-covid19
Sonjo Jogja. (2021, June 27). Ketika vaksinasi harus melawan hoaks [Video]. YouTube. https://www.youtube.com/watch?v=Dk5jN52lE9g&t=2697s
Statista. (2021a). Leading countries based on Instagram audience size as of October 2021. Statista. https://www.statista.com/statistics/578364/countries-with-most-instagram-users/
Statista. (2021b). Penetration of leading social networks in Indonesia as of Q3 2020. Statista. https://www.statista.com/statistics/284437/indonesia-social-network-penetration/
Utami, P. (2018). Hoax in modern politics: The meaning of hoax in Indonesian politics and democracy. Jurnal Ilmu Sosial dan Politik, 22(2), 85–97. https://doi.org/10.22146/jsp.34614
We Are Social & Kepios. (2022). Digital 2022: Indonesia. DataReportal. https://datareportal.com/reports/digital-2022-indonesia
Wendratama, E. (2020, April 3). Sekadar mengingatkan: Misinformasi pandemi paling banyak ada di WhatsApp. The Conversation Indonesia. https://theconversation.com/sekadar-mengingatkan-misinformasi-pandemi-paling-banyak-ada-di-whatsapp-135430
Wong, X., Liu, R., & Sebaratnam, D. (2019). Evolving role of Instagram in medicine. Internal Medicine Journal, 49(10), 1329–1332. https://doi.org/10.1111/imj.14448
Yusuf. (2020, November 28). Kominfo: Hati-hati hoaks Pilkada 2020 yang menghasut. Kominfo. https://kominfo.go.id/content/detail/31101/kominfo-hati-hati-hoaks-pilkada2020-yang-menghasut/0/berita_satker
Chapter 3
The Unbelieving Minority: Singapore's Anti-Falsehood Law and Vaccine Scepticism

Swati Maheshwari and Ang Peng Hwa
Abstract Singapore hosts at least 17 groups that use Mobile Instant Messaging Services (MIMS) to self-organise around vaccine hesitancy, despite a highly regulated information environment. Although the country has one of the highest vaccination rates in the world, the rates among middle-aged and older adults remained lower than among younger adults until the end of 2021, delaying the opening of Singapore's borders and economy. The country also enacted an anti-fake news law, the Protection from Online Falsehoods and Manipulation Act, in 2019. The law covers MIMS, but in the 96 times it had been applied to other media (as of the time of writing), it had not been invoked against MIMS even once. This raises the question: how effective are anti-fake news laws in regulating MIMS platforms like Telegram, which the Singapore government has described as "ideal platforms" for the spread of falsehoods (Wong, 2019)? We surveyed two prominent vaccine-hesitant groups on Telegram to discover the socio-demographic profile of the groups, their sources of pandemic-related information and their reasons for joining the groups. Although our survey had a low response rate, our study sheds light on what is often a shadowy world rife with misinformation. We found that Telegram's technological characteristics, such as affording anonymity and freedom from censorship, played a significant role, as did the in-group nature of such group chats, in a context where punitive measures were implemented to nudge vaccination rates higher.

Keywords MIMS · Fake news · POFMA · Vaccine hesitancy · Singapore
3.1 Introduction

Much of the media coverage and communication research around false information tends to focus on social media platforms such as Facebook, Twitter and YouTube. The result has been legislation aimed at suppressing the spread of false information online by regulating these platforms. The Singapore government passed the Protection from Online Falsehoods and Manipulation Act (POFMA) with the aim of outlawing misinformation and disinformation, the latter defined as falsehoods spread with the intent to mislead (Ang & Goggin, 2021). The law, which can be invoked only by the government, empowers a government minister to demand the publication of corrections from anyone who posts online content that the minister believes is false and may cause some harm. At the time of writing, more than two years after POFMA came into force in October 2019, at least 96 orders had been issued, most of them demanding the publication of corrections and, in a few cases, the takedown of content deemed false (Teo, 2022). One might expect that POFMA, coupled with existing strict media regulations, would minimise the spread of false information. However, as the rollout of the government's national vaccination programme in Singapore has shown, pockets of vaccine hesitancy persist, propelled by groups on Mobile Instant Messaging Services (MIMS) such as WhatsApp and Telegram (Guest et al., 2021). For example, the Telegram group SG Suspected Vaccine Injuries Channel, which provides a platform for people to discuss suspected side effects of vaccines, had 8,600 members at the end of 2021 (Guest et al., 2021). Assertions made on this chat group included claims that mRNA vaccines led to magnets sticking to some vaccinated individuals (Chong, 2021). This research investigates two such groups in order to better understand the discourse around COVID-19 vaccination in closed chat groups on Telegram run by Singapore-based administrators, as widely documented in the press. This research does not claim to have lofty aims, but it tries to shed light on the profile of such groups, where false information about COVID-19 vaccination proliferates, in the context of Singapore, and on how the circulation of false information on these platforms tests the efficacy of Singapore's anti-fake news law, POFMA.
3.2 Literature Review

3.2.1 An "Infodemic" and Vaccine Hesitancy

The coronavirus pandemic has contributed to an overload of information in both digital and physical environments. The vast amount of information during the outbreak, including falsehoods, half-truths and conspiracy theories about the coronavirus vaccines, led the World Health Organization to coin a new term to describe it: an "infodemic" (World Health Organization, 2019). Misinformation has contributed to
vaccine hesitancy, which is defined as "the delay in acceptance or refusal of vaccination despite availability of vaccination services" (MacDonald, 2015, p. 4161). Vaccine hesitancy is not a new phenomenon. A 2014 World Health Organization (WHO) report on vaccine hesitancy elaborates: "Vaccine attitudes can be seen on a continuum, ranging from total acceptance to complete refusal. Vaccine-hesitant individuals are a heterogeneous group in the middle of this continuum. Vaccine-hesitant (sic) individuals may refuse some vaccines but agree to others; delay vaccines or accept vaccines but are unsure in doing so" (World Health Organization, 2014, p. 8). Both the refusal to get vaccinated and vaccine hesitancy or scepticism can impose social, economic and public health costs on states because refusal can undermine public healthcare efforts (MacDonald, 2015). The anti-vaccination phenomenon has largely been associated with the global North, particularly the US. There is evidence, however, that vaccine hesitancy fuelled by misinformation has slowed down vaccination efforts globally during COVID-19 (Calonzo et al., 2021). For example, the Philippines had one of the worst outbreaks in Southeast Asia, partly due to the lack of availability of the COVID-19 vaccine and partly due to misinformation about it (Su & Thayaalan, 2021). The Indonesian non-profit Anti-Slander Society (Mafindo) documented more than 700 hoaxes about COVID-19 vaccination between January and November 2020 and found that hoaxes played a role in limiting vaccine take-up (Renaldi, 2021). Southeast Asia went from being a poster child for the management of the pandemic to falling far behind the US and Europe in inoculating its citizens and opening up its economies, due to factors such as a shortage of vaccines, a more transmissible coronavirus variant and misleading information about COVID-19 vaccines (Rich et al., 2021). Social media has played a pivotal role in the proliferation and amplification of false or misleading information (Larson et al., 2018; Silverman & Singer-Vine, 2016). Studies showed a positive correlation between exposure to misinformation, belief in it and risk perception of the pandemic (Lee et al., 2020), while low health literacy and misinformation correlated positively with negative attitudes towards COVID-19 vaccines (Kricorian et al., 2021). A study conducted with 4,000 residents across the UK in September 2020 found a 6 per cent reduction in vaccine acceptance among people who had been exposed to online misinformation (Loomba et al., 2021). Although Singapore saw a 92 per cent vaccination rate among its population by the end of March 2022 (Ministry of Health, 2022), the city-state has not been immune to the flood of vaccine-related false information. A study conducted jointly by the National Centre for Infectious Diseases (NCID), the National University of Singapore and Nanyang Technological University found that three in five people had received false information about the pandemic in general, mostly through MIMS such as WhatsApp, Telegram and Facebook Messenger (NCID et al., 2020). This is surprising in a small, technologically and financially advanced country of 5.5 million inhabitants (as of June 2021, according to Singapore's Department of Statistics) with an internet penetration of more than 90 per cent in 2020 and a tightly regulated media environment (Tandoc, 2021). As the vaccination programme was rolled out, initial apprehensions
about the potential side effects and efficacy of vaccines faded and confidence in the vaccines grew (Lin & Aravindan, 2020). However, vaccine hesitancy, notably higher among middle-aged and older adults, delayed Singapore's attempts to relax its public health restrictions and open its borders (Kurohi, 2021). Even though vaccinations started in February 2021 for those above the age of 70 and in March 2021 for those aged 60 and above, about 25 per cent of adults above the age of 60 remained unvaccinated and had not booked vaccination appointments as of July 2021 (Kurohi, 2021). Compare this with the 86 per cent of individuals aged 40 to 59 who had received at least one dose of the vaccine or had booked an appointment to do so at that point in time, despite vaccinations for this group starting later (Zhang, 2021). Noteworthy is the fact that many of the societal factors linked to mis- and disinformation, such as trust in government, healthcare and media, are high in Singapore, and factors such as political polarisation are not salient (Larson et al., 2018; Neely et al., 2021). The media is also tightly regulated and the population is generally law-abiding (Abdullah & Kim, 2020). Furthermore, the government set up "an impressive crisis communication infrastructure to share information and address misinformation" during the pandemic (Seah & Tham, 2021). Besides a sophisticated health communication strategy, the government pursued a strategy of increasing the social costs for those who refused to get vaccinated through vaccine-differentiated measures (e.g., disallowing the unvaccinated from dining out in restaurants or entering shopping malls).
3.3 Anti-Fake News Legislation

Although social media platforms such as Facebook, Twitter, YouTube and other public communication media have borne the brunt of POFMA, the Singapore government recognises the potent role MIMS could play in circulating falsehoods, given their subterranean nature and reach (Ang & Goggin, 2021). The government acknowledges that MIMS have become a "catchment" site for the circulation of falsehoods during the pandemic, although it remains unclear how they can be brought within the reach of the public and law enforcement due to their encrypted nature (Qing & Soh, 2021). The government had issued a correction notice to a local website called "Truth Warriors" for posting misleading information about COVID-19 vaccines on Facebook, but it did not issue such notices to the chat groups run by the same group on Telegram (Wong, 2021). This may at least partially have to do with the technological characteristics of closed messaging platforms like WhatsApp and Telegram, which differ from platforms like Facebook in ways that make it more challenging to track and identify both false content and its producers (Rossini et al., 2020). One big difference lies in the sharing mechanism: the lack of a news feed means that content shared on WhatsApp is not easily traceable to a source (Rossini et al., 2020). Another difference is end-to-end encryption, which means that the companies are unable to fact-check
all the content that is shared, unlike platforms such as Facebook or Twitter. This is why these services have focused on identifying unusual behaviours like mass-texting and the mass-creation of groups, and on limiting virality (Rossini et al., 2020). When it comes to open and public social media platforms, technological solutions such as algorithmic intervention can be used to monitor, flag and correct false information. Thus, the technological characteristics of MIMS might be one of the reasons the government has yet to use POFMA to issue similar correction directions on any social messaging app, although the law includes them in its purview.
3.4 False Information on Social Media

Most of the literature on misinformation is Western-centric and limited to the study of social media such as Facebook and Twitter (Allcott & Gentzkow, 2017; Del Vicario et al., 2016; Vosoughi et al., 2018; Vraga & Bode, 2018). And while the use of Facebook is declining worldwide, the use of messaging apps is on the rise (Reuters Institute for the Study of Journalism, 2019). Further, the bulk of the misinformation about COVID-19 is circulated in private groups on social media and messaging platforms (Simon et al., 2020). There are other reasons why falsehoods on MIMS merit closer scrutiny. First, MIMS are increasingly and inextricably linked with communication in our personal realm. They have been described as "a transformative form with which people connect, communicate with friends in their daily life—they catalyse the formation of social groups, and they bring people stronger sense of community and connection" (Qiu et al., 2016). MIMS have become indispensable channels for people to exchange news and discuss political issues. Furthermore, as MIMS are used primarily for informal interactions among people sharing close ties (i.e., friends, family and co-workers), the high trust that exists within such networks fuels the spread of online falsehoods in a way that is distinct from open social media networks (Boczkowski et al., 2018). A study on the spread of political news in Brazilian WhatsApp groups found that cascades containing false information "tend to be deeper, reach more users, and last longer in political groups than in non-political groups" (Caetano et al., 2019).
3.5 Factors that Influence the Effects of False Information

As we attempt to shed light on the socio-demographic profile of the Telegram groups in our study and people's motivations for being in these groups, we review the literature more broadly on these aspects. Trust in government and society has been shown to play a central role in the proliferation of misinformation (Casiday et al., 2006; Larson et al., 2018). In fact, a study in the US found that the higher the trust individuals have in the government and health authorities, the more likely they will
be inclined towards COVID-19 vaccination (Daly et al., 2021). Trust in the Singapore government has been high and largely continued to be so during the pandemic (Ho, 2020). If the societal factors that drive the proliferation of misinformation are not central in the Singapore context, might demographic factors such as age, class and gender influence the circulation of, and belief in, misinformation? A study conducted in Singapore indicates that class and age may be variables that influence the consumption of misinformation. It found that Singaporeans who were more vulnerable to false information tended to be older, live in public housing, have higher trust in local online-only news sites or blogs, have greater confirmation bias in information-seeking and processing, have lower levels of discernment between real and false information and have lower digital literacy (Soon & Goh, 2021). However, that still leaves a gap in understanding who populates vaccine-hesitancy chat groups in particular and what their socio-demographic characteristics are, a gap this study aims to fill. There is literature showing that the use of certain kinds of information sources, especially social media, may be related to the consumption of misinformation about the COVID-19 vaccine (Jennings et al., 2021; Kumari et al., 2021). It has also been shown that Singaporeans' use of social media as a news source has been increasing while their consumption of news from television or print media has been falling, although this trend saw a slight reversal during the pandemic, with legacy media regaining some of the lost audience (Tandoc, 2021). Significant for our study is the point that the use of Telegram for news rose three percentage points to 14 per cent, while Facebook and WhatsApp lost some of their users for news (Tandoc, 2021). Previous literature has demonstrated that sources of information shape beliefs about health and health-related behaviour (Brown-Johnson et al., 2018; Chen et al., 2018). Also significant is that the trust people place in a source of information determines how valid they consider the information to be (Brewer & Ley, 2013). Additionally, we know that middle-aged and older Singaporeans are less likely to be social media savvy but might place greater trust in informal social networks like friends and family for information about health (Harvey & Alexander, 2012). Given the high trust in institutions in Singapore, the fact that vaccination rates among middle-aged and older adults were lower than among younger adults until the end of 2021, and the spike in Telegram usage during the pandemic, it is pertinent to discover who is attracted to these vaccine-hesitant groups on Telegram and why.
3.6 Research Questions

This study seeks to answer the following questions by tracking the discourse in, and conducting a survey with the members of, two closed Singapore-based chat groups on the instant messaging service Telegram.
1. What is the socio-demographic profile of vaccine-hesitant chat groups on Telegram in Singapore?
2. What are the primary sources of information about COVID-19 and COVID-19 vaccination in vaccine-hesitant chat groups on encrypted platforms like Telegram?
3. Why have the vaccine-hesitant joined chat groups on Telegram?
4. What are the implications of vaccine-hesitant Telegram groups for anti-fake news laws like POFMA?
3.7 Method

As this study aimed to uncover and analyse user attributes and reasons for joining closed chat groups on MIMS, both qualitative and quantitative research methods were used to build a rich picture of the Telegram groups and their users. Several Telegram groups with vaccine-hesitant communities whose membership ran into the thousands were approached. The largest of these were the SG Suspected Vaccine Injuries and SG Covid La Kopi groups on Telegram, with more than 10,000 and 13,500 members respectively (as of 28 February 2022). These two Telegram groups were run by a group called Truth Warriors, who described themselves as a "community repository of evidence-based information on global developments surrounding COVID-19" (Lai & Chong, 2021). We emailed the administrators of these groups several times, but they declined our requests for permission to administer the survey to their groups. The identity of those who run these groups is not publicly known. In contrast, the administrators of the two groups we studied, Healing the Divide Discussion and Healing the Divide Channel on Telegram, set up by Singaporean Iris Koh and her husband Raymond Ng, did not hide their identity; they have talked to the press and been visible on social media. They described themselves and their group members as "intelligent vaxxers", rejecting the label of anti-vaxxers. The groups' 2,000-strong membership roll includes professionals such as a doctor, who has since been prosecuted (Tham, 2022). Ms Koh explained that her decision to set up this community was triggered by the need to fill an information gap in society (Lai & Chong, 2021). She responded positively to our request to survey the two Telegram groups, saying she supported academic research. A survey was used to build a profile of the users in terms of their age, gender, socioeconomic status, social media use, and exposure to, interaction with and responses to content disseminating misleading information about the COVID-19 vaccination programme. It sought to identify their interactions with, and attitudes and motivations relating to, false information on MIMS. The survey contained primarily multiple-choice questions, and respondents took an average of 15 to 20 minutes to complete it. A screening question about age ensured that respondents were aged 21 years and above and thus eligible for the survey. Besides the questions on demographics, there were questions on respondents' sources of COVID-19 and COVID-19 vaccine information, the kinds of false information they encountered and how frequently, their trust in different sources of information and their information verification techniques. A few open-ended questions were included in the survey to
elicit respondents' motivations for using Telegram, compared with other platforms, for information on COVID-19 vaccines. Given the vaccine-differentiated measures taken by the government against those who refused to get vaccinated (e.g., the unvaccinated could not enter shopping malls), most members of these chat groups worried about being stigmatised (Min & Yeoh, 2021). They preferred the anonymity that Telegram affords (Chua, 2022). Thus, our survey had to be designed such that it was not too intrusive and did not alienate the members. The survey was piloted with eight respondents to check the flow and, most importantly, to ensure that members did not consider the questions accusatory or stigmatising in any way. Descriptive statistics were used to analyse quantitative summaries of each variable and to highlight emerging patterns in the data. Our tracking of these chat groups found that the main focus of the discourse in the two Telegram groups was COVID-19 vaccines and their shortcomings. The topics included the inefficacy of mRNA vaccines, their expedited development, how the government and mainstream media concealed their side effects to increase uptake among the population, and the discriminatory measures instituted against those who chose not to be vaccinated (Qing & Soh, 2021). Many posts were about side effects that individuals said they were suffering from the COVID-19 vaccine but for which they could not avail themselves of the vaccine injury medical costs provided by the Singapore government, as the side effects were not recognised as being caused by the vaccine.
3.8 Findings and Discussion

3.8.1 Profile of the Groups and Information-Seeking Behaviour

The survey yielded 132 completed responses over a period of eight days, from 22 February to 1 March 2022. As the groups were open to anyone interested in signing up, there was no attempt to determine their representativeness. Further, the low response rate calls for caution in attempts to generalise the results. In terms of demographics, 48 per cent of the respondents were male, 41 per cent were female and the remainder preferred not to disclose their gender or indicated a third gender; 82 per cent of them were Chinese (compared with 74.3 per cent in the 2020 census), 8 per cent were Malay (compared with 13.5 per cent in the 2020 census), 5 per cent were Indian (compared with 9 per cent in the 2020 census) and 5 per cent were of other ethnicities (compared with 3.2 per cent in the 2020 census). Although earlier studies have shown that those above 70 years old were most resistant to the vaccine (Tan, 2021), Table 3.1 shows that the members of the Telegram groups who were against COVID-19 vaccination skewed younger. This suggests that Telegram may be an important source of COVID-19-related information, including vaccine misinformation, for those between 30 and 60 years old.
Table 3.1 Age profile

20-29 years          9%
30-39 years          29%
40-49 years          27%
50-59 years          19%
60-69 years          11%
70 years and above   5%
The survey found that the group members had a lower vaccination rate than the general Singapore population. As of 6 March 2022, 92 per cent of the total population had received at least one dose, 91 per cent had received two doses and 69 per cent had also received a booster jab (Ministry of Health, 2022). In contrast, 36 per cent of the survey respondents said they did not plan to be vaccinated and only 31 per cent said they had received at least one dose of the vaccine. Those who said they did not plan on getting vaccinated at all seemed staunchly against vaccination, while those who had received one dose seemed to have a slightly less rigid antipathy towards COVID-19 vaccination. Unsurprisingly, the majority of the respondents (87 per cent) said that social media such as Facebook, Twitter, YouTube, WhatsApp, Instagram and Telegram were sources of information on COVID-19. About half of them (51 per cent) said official sources such as Gov.sg (a government website that publishes information relating to the pandemic) were also a source of information on the pandemic. A study by NCID with 700 participants found that more than 90 per cent of those surveyed trusted communications and information from official government sources (Min, 2020). This indicates that official government sources formed a much smaller part of the information diet of members of these groups than of the national average. It also suggests that government-endorsed information relating to COVID-19 was not reaching or being consumed by members, suggestive of an echo chamber of false information in these groups. A key characteristic of the groups is the wide range of their members' information-seeking behaviour—53 per cent said they got their information from websites and other sources that were not mainstream media or were perceived to be not government-approved. This is significant in Singapore's context, as the country has a tightly controlled media environment with a strictly managed political discourse, which makes information seeking through alternative rather than mainstream sources noteworthy (George, 2007). The sources of COVID-19 information for members of these groups included Rumble, BitChute, GETTR, Odysee, podcasts and the search engine DuckDuckGo. Their information sources also included right-wing and partisan media in the US, indicating the seeking of information of a political nature. A recent New York Times report found that the search engine DuckDuckGo surfaced more untrustworthy sites than Google (Thompson, 2022). Significantly, 61 per cent of the respondents said they were very likely or somewhat likely to fact-check the COVID-19 information they received in their Telegram or WhatsApp groups against other sources.
3.9 Why Telegram?

3.9.1 Privacy and No Censorship

Pertaining to the third research question on why the vaccine-hesitant joined the Telegram groups, we found that one of the key reasons was that the platform afforded them the most privacy and the least censorship. Sixty-five per cent of the respondents strongly agreed or agreed that Telegram protected their privacy better than other social media sites such as Facebook and Twitter. This has to do with Telegram allowing members to remain anonymous and to keep their contact numbers private. The premium placed on privacy on Telegram may have stemmed from the publicity concerning WhatsApp's clarification, in January 2021, of a privacy policy stating that it would share certain data, though not the content of messages, with its corporate parent, Facebook (Greenberg, 2021). After the announcement, there was a mass migration of users to platforms such as Telegram and Signal (Nicas et al., 2021). Another reason for respondents' preference for Telegram was that the platform, unlike most other social media platforms, did not take steps to curb COVID-19 vaccine misinformation. One respondent chose the "Healing the Divide Telegram chat group because Facebook and Instagram censor postings that are anti-vaccine. Facebook and Instagram ban accounts of users who write against vaccines". Another noted that they chose Telegram over other social media because "it is not subject to algorithms or censorship the way centralised platforms such as Facebook, Instagram or social media in general are" and "all other platforms have censorship of alternative opinions". Respondents were asked to rank whom they trusted most to tell them the facts about the handling of, and information about, COVID-19. Doctors and scientists were most trusted, with opinion leaders on Telegram second, followed by friends and family, and organisations such as the WHO and Gov.sg. As governments crack down on misinformation on social media sites such as Facebook and Twitter and enact laws to curb misinformation and disinformation, Telegram has become the platform of choice globally for those who do not want to be surveilled and censored (Mallick, 2021). Further, Telegram allows groups of up to 200,000 members, unlike WhatsApp, where the maximum group size is much lower. In response to a question asking why they found Telegram useful during the pandemic, most of the respondents (84 per cent) strongly agreed or agreed that Telegram was useful for information gathering, and 53 per cent strongly agreed or agreed that Telegram was useful for staying in touch with friends and family during the pandemic. Another reason the respondents joined these Telegram groups was that they wanted a reprieve from the stigma they felt they suffered in society for holding vaccine-hesitant beliefs. One respondent said: "Acceptance and respect for each other's vax status in search of and sharing of truthful information, and as such, all questions and concerns are welcome without fear of censorship, coercion, criticism
nor shaming". Another said: "There is also unlikely to be discrimination in such platforms where people are genuinely in the same situation or share the same views". These comments showed how the punitive measures to get people vaccinated could have contributed to a sense of solidarity among those who were vaccine-hesitant. There is evidence of an "ingroup" versus an "outgroup" (i.e., those who were vaccinated). This suggests that public vilification by the state and media runs the risk of hardening the position of the vaccine-hesitant because they feel they are under attack. Most of the respondents (88 per cent) strongly agreed or agreed that they found Telegram useful in providing them with information that was not available in the mainstream media during the pandemic. Telegram was seen as an alternative source of information; as one respondent said, it provided them with "more information from those who are directly affected by the vaccination which is not available in the MSM (mainstream media)" and "lots of links to real news and not MSM nonsense". In fact, the founder of Healing the Divide took on the country's biggest newspaper by circulation, The Straits Times, suing it in 2021 for making "false statements of facts" (Yong, 2021). There was also a perception among the respondents that the government, through its control over the mainstream media, sought to delegitimise their concerns, leaving no room for alternative views or doubts. Most of the respondents (87 per cent) strongly disagreed or disagreed that the government had been responsive to their concerns about COVID-19 vaccination.
3.9.2 Meeting Like-minded Others in Real Time

Another reason the respondents joined the Telegram groups was to meet like-minded people—81 per cent of them strongly agreed or agreed that Telegram enabled them to find like-minded people. The immediacy of interaction, the sharing of sympathy for those affected by vaccination or who chose not to be vaccinated, and having access to information and a space to contest dominant narratives surrounding COVID-19 vaccination were some of the reasons respondents gave when asked why they found Telegram useful during the pandemic:

According to a male respondent in the age group of 30-39, "there are numerous Telegram groups with like-minded individuals having similar concerns. Such platforms will allow better sharing of relevant information. There is also unlikely to be discrimination in such platforms where people are genuinely in the same situation or share the same views". A female respondent in the age group of 60-69 explained why she used this Telegram chat group: "HTD TG chat - Acceptance and respect for each other's vax status in search of and sharing of truthful information, and as such, all questions and concerns are welcome without fear of censorship, coercion, criticism nor shaming". Another male respondent in the age group of 70 and above said he was on the "Healing the Divide chat group - informative, quick responses, for vaxxed n unvaxxed, focus, shared experiences".
A female respondent in the age group of 40-49 stated that "there are real life accounts of people who have suffered the injuries and the injustices that they have faced". A male respondent in the age group of 40-49 said he preferred the Telegram chat group due to its "bite size info, easy to receive and for me to validate further". A female respondent in the age group of 20-29 used it because "it gives most up to date info". A female respondent in the age group of 40-49 said she joined the Telegram chat group because "there are facts when sharing news, there is countless support, no worries about censorship of news". A female respondent in the age group of 50-59 said "there are like-minded people who engage in open and deep discussion, information provided by members required to provide the sources for other to do further research or verification".
Our analysis also found that the Telegram chat groups helped provide their members with "information and opinions" against the backdrop of an uncertain and fast-changing environment and perceived conflicting advice from the government. In an information ecosystem that is highly regulated, some moved to less monitored platforms, particularly those that allow the creation and sustenance of online communities, to plug information gaps and ease anxiety during the pandemic.
3.10 Implications for POFMA and Its Efficacy

Despite the false information shared on Telegram chat groups like the ones studied here, the Singapore government has not issued a single correction or takedown notice to any MIMS chat group. Instead, the government issued a takedown notice to Iris Koh, founder of Healing the Divide, to remove certain content on her YouTube channel in 2021, and subsequently arrested her on charges of perpetrating fraud in early 2022 (Devaraj, 2022). The government also issued a correction notice to a website run by Truth Warriors in 2021 for claiming that ivermectin was safe and effective to use against COVID-19, but not to the multiple Telegram groups they run that make similar claims and have become a repository of COVID-19-related misinformation (Low, 2021). This suggests the limits of legislative instruments such as POFMA in checking the production and circulation of false information. As the experience of Germany has shown, right-wing, anti-vaxxer and anti-Semitic groups shifted to Telegram after the country forced social media giants such as Facebook and Twitter to remove illegal content (AFP, 2022). As several respondents in the survey stated, many used the Telegram chat groups because Telegram was less censored than other social media such as Facebook and Twitter, which have come under pressure to take steps to curb COVID-19 false information. This suggests that online falsehoods can easily migrate to newer and less regulated spaces in the highly mutable online information ecosystem.
POFMA seeks to correct or take down content by delegitimising the content and the people associated with producing falsehoods. A key part of its efficacy rests not on the sanctions imposed but on influencing "public perceptions of an individual's credibility" (Teo, 2021, p. 4802). However, in the case of vaccine-hesitant groups such as Healing the Divide, their identity and credibility might even be enhanced among their supporters the more they are attacked. Their credibility, in fact, often draws on being opposed to the vast majority that complies "blindly" with government rules on vaccination. Therein lies the problem with formulating legal instruments that seek to arbitrate "truth" and "facts": in vaccine-hesitant groups, these can be a matter of beliefs, most often strongly held.
3.11 Conclusion

The main limitation of the study is the representativeness of the views of the groups. The low response rate also limits the generalisability of the findings. The survey was conducted when the founder of the Telegram groups was out on bail. Her YouTube channel attracted the Singapore government's attention in November 2021, and she was asked to remove anti-vaccination content (Tan, 2021). She was arrested on conspiracy charges in January 2022 and released on bail subsequently (Lam, 2022). The arrest could have dampened the response rate. There is also a self-selection bias, as the sample is not random. It would appear that the anti-vaxxer movement is a worldwide phenomenon. Even in Singapore, where much of the population is vaccinated and there is strong regulation of the public dissemination of information, there are the vaccine-hesitant and the vaccine-resistant. In the Singapore case, there is much higher trust in the government, and even among the hesitant and resistant there is no wholesale rejection of official information. Although POFMA includes MIMS within its purview, the fact that it has yet to be applied to these "epicentres" of misinformation about COVID-19 raises questions about the challenge of regulating platforms where the boundaries between public and private blur.
References

Abdullah, W., & Kim, S. (2020). Singapore's responses to the COVID-19 outbreak: A critical assessment. The American Review of Public Administration, 770–776.

AFP. (2022, January 26). Germany mulls banning Telegram as mandatory vaccine debate begins. The Times of Israel. https://www.timesofisrael.com/germany-mulls-banning-telegram-as-mandatory-vaccine-debate-begins/

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31, 211–236. https://doi.org/10.1257/jep.31.2.211
Ang, P. H., & Goggin, G. (2021). The regulation of online disinformation in Singapore. In Perspectives on platform regulation (1st ed., pp. 549–564). https://doi.org/10.5771/9783748929789-549

Boczkowski, P. J., Matassi, M., & Mitchelstein, E. (2018). How young users deal with multiple platforms: The role of meaning-making in social media repertoires. Journal of Computer-Mediated Communication, 23(5), 245–259. https://doi.org/10.1093/jcmc/zmy012

Brewer, P., & Ley, B. (2013). Whose science do you believe? Explaining trust in sources of scientific information about the environment. Science Communication, 115–137.

Brown-Johnson, C. G., Boeckman, L. M., White, A. H., Burbank, A. D., Paulson, S., & Beebe, L. A. (2018). Trust in health information sources: Survey analysis of variation by sociodemographic and tobacco use status in Oklahoma. JMIR Public Health and Surveillance, e6260.

Caetano, J. A., Magno, G., Goncalves, M., Almeida, J., Marques-Neto, H. T., & Almeida, V. (2019). Characterizing attention cascades in WhatsApp groups. WebScience, 19, 27–36. https://doi.org/10.1145/3292522.3326018

Calonzo, A., Wei, K., & Tan, K. (2021, June 30). Anti-vaxxer propaganda spreads in Asia, endangering millions. Bloomberg.com. https://www.bloomberg.com/news/articles/2021-06-30/anti-vaxxer-disinformation-spreads-in-asia-endangering-millions

Casiday, R., Cresswell, T., Wilson, D., & Panter-Brick, C. (2006). A survey of UK parental attitudes to the MMR vaccine and trust in medical authority. Vaccine, 177–184.

Chen, X., Hay, J. L., Waters, E. A., Kiviniemi, M. T., Biddle, C., Schofield, E., & Orom, H. (2018). Health literacy and use and trust in health information. Journal of Health Communication, 724–734.

Chong, C. (2021, June 17). True or not? Increasingly tough to get through the minefield of Covid-19 misinformation. The Straits Times. https://www.straitstimes.com/singapore/true-or-not-increasingly-tough-to-get-through-minefield-of-covid-19-misinformation

Chua, N. (2022, February 8). ST explains: Why is Telegram so popular and what can be done about its problems. The Straits Times. https://www.straitstimes.com/tech/st-explains-telegram-popularity-perils-and-precautions

Daly, M., Jones, A., & Robinson, E. (2021). Public trust and willingness to vaccinate against COVID-19 in the US from October 14, 2020, to March 29, 2021. The Journal of the American Medical Association (JAMA), 2397–2399.

Del Vicario, M., Vivaldo, G., Bessi, A., Zollo, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports, 6(1). https://doi.org/10.1038/srep37825

Devaraj, S. (2022, February 4). Healing the Divide's Iris Koh granted bail, gets new charge for tearing police statement. The Straits Times. https://www.straitstimes.com/singapore/courts-crime/healing-the-divides-iris-koh-granted-bail-gets-new-charge-of-obstructing-police

George, C. (2007). Consolidating authoritarian rule: Calibrated coercion in Singapore. The Pacific Review, 20(2), 127–145. https://doi.org/10.1080/09512740701306782

Greenberg, A. (2021, January 27). Fleeing WhatsApp for privacy? Don't turn to Telegram. WIRED. Retrieved March 8, 2022, from https://www.wired.com/story/telegram-encryption-whatsapp-settings/

Guest, P., Firdaus, F., & Daman, T. (2021, July 28). "Fake news" laws are failing to stem Covid-19 misinformation in Southeast Asia. Rest of World. Retrieved March 10, 2022, from https://restofworld.org/2021/fake-news-laws-are-failing-to-stem-covid-19-misinformation-in-southeast-asia/

Harvey, I. S., & Alexander, K. (2012). Perceived social support and preventive health behavioral outcomes among older women. Journal of Cross-Cultural Gerontology, 275–290.

Ho, T. (2020, March 16). Singaporeans are confident in the government amidst fears of the COVID-19 outbreak. Ipsos. https://www.ipsos.com/en-sg/singaporeans-are-confident-government-amidst-fears-covid-19-outbreak
Jennings, W., Stoker, G., Bunting, H., Valgarðsson, V. O., Gaskell, J., Devine, D., & Mills, M. C. (2021). Lack of trust, conspiracy beliefs, and social media use predict COVID-19 vaccine hesitancy. Vaccine, 593.

Kricorian, K., Civen, R., & Equils, O. (2021). COVID-19 vaccine hesitancy: Misinformation and perceptions of vaccine safety. Human Vaccines & Immunotherapeutics, 30, 1–8. https://doi.org/10.1080/21645515.2021.1950504

Kumari, A., Ranjan, P., Chopra, S., Kaur, D., Kaur, T., Datt, & Vikram, N. (2021). Knowledge, barriers and facilitators regarding COVID-19 vaccine and vaccination programme among the general population: A cross-sectional survey from one thousand two hundred and forty-nine participants. Diabetes & Metabolic Syndrome, 987–992.

Kurohi, R. (2021, June 24). Ministers urge more seniors to get vaccinated against Covid-19. The Straits Times. https://www.straitstimes.com/singapore/ministers-urge-more-seniors-to-get-vaccinated-ahead-of-further-reopening

Lai, L., & Chong, C. (2021, October 20). Half-truths and lies: How Covid-19 misinformation spreads in S'pore. The Straits Times. https://www.straitstimes.com/singapore/pandemic-of-online-misinformation-on-covid-19-takes-its-toll

Lam, L. (2022, February 4). Healing the Divide's Iris Koh gets new charge of obstructing police by tearing up statement; offered bail. CNA. https://www.channelnewsasia.com/singapore/healing-divide-iris-koh-new-charge-tearing-police-statement-offered-bail-2478946

Larson, H. J., Clarke, R. M., Jarrett, C., Eckersberger, E., Levine, Z., Schulz, W. S., & Paterson, P. (2018). Measuring trust in vaccination: A systematic review. Human Vaccines & Immunotherapeutics, 14(7), 1599–1609. https://doi.org/10.1080/21645515.2018.1459252

Lee, J. J., Kang, K.-A., Wang, M. P., Zhao, S. Z., Wong, J., O'Connor, S., Yang, S. C., & Shin, S. (2020). Associations between COVID-19 misinformation exposure and belief with COVID-19 knowledge and preventive behaviors: Cross-sectional online study. Journal of Medical Internet Research, 22(11), e22205. https://doi.org/10.2196/22205

Lin, C., & Aravindan, A. (2020, December 23). COVID-19 vaccine stirs rare hesitation in nearly virus-free Singapore. Reuters. https://www.reuters.com/article/us-health-coronavirus-singapore-vaccine-idUSKBN28X0BP

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5, 337–348. https://doi.org/10.1038/s41562-021-01056-1

Low, D. (2021, October 25). Truth Warriors website puts up correction notices after Pofma direction over Covid-19 claims. The Straits Times. https://www.straitstimes.com/singapore/health/truth-warriors-website-puts-up-correction-notices-after-pofma-direction-over-covid

MacDonald, N. E. (2015). Vaccine hesitancy: Definition, scope and determinants. Vaccine, 33(34), 4161–4164. https://doi.org/10.1016/j.vaccine.2015.04.036

Mallick, A. (2021, April 14). Inside India's anti-vaxx Telegram groups, COVID-19 is a conspiracy. The Quint. https://www.thequint.com/news/webqoof/telegram-anti-vaccine-covid-19-misinformation

Min, A. H., & Yeoh, G. (2021, October 13). Convincing the doubters: Why some younger people are reluctant to get the COVID-19 vaccine. https://www.channelnewsasia.com/singapore/unvaccinated-new-rules-malls-hawker-centres-covid-19-vaccine-2235856

Min, C. H. (2020). 6 in 10 people in Singapore have received fake Covid-19 news, likely on social media: Survey. Channel News Asia. https://www.ntu.edu.sg/docs/default-source/corporate-ntu/hub-news/6-in-10-people-in-singapore-have-received-fake-covid-19-news-likely-on-social-media4401d3c6-e6a5-483b-a049-30f548c57478.pdf?sfvrsn=4a0fbd8b_3

Ministry of Health. (2022). MOH | Vaccination statistics. Ministry of Health. Retrieved March 10, 2022, from https://www.moh.gov.sg/covid-19/vaccination/statistics

National Centre for Infectious Diseases, National University of Singapore, & Nanyang Technological University. (2020). NCID, NUS and NTU studies highlight the role of socio-behavioural factors in managing COVID-19. ncid.sg. Retrieved March 11,
2022, from https://www.ncid.sg/News-Events/News/Pages/NCID,-NUS-and-NTU-Studies-Highlight-the-Role-of-Socio-Behavioural-Factors-in-Managing-COVID-19-.aspx

Neely, S., Eldredge, C., & Sanders, R. (2021). Health information seeking behaviors on social media during the COVID-19 pandemic among American social networking site users: Survey study. Journal of Medical Internet Research, 23(6), e29802. https://doi.org/10.2196/29802

Nicas, J., Isaac, M., & Frenkel, S. (2021, January 13). Millions flock to Telegram and Signal as fears grow over Big Tech. The New York Times. https://www.nytimes.com/2021/01/13/technology/telegram-signal-apps-big-tech.html

Qing, A., & Soh, G. (2021, October 10). Messaging app chat groups a catchment for Covid-19 misinformation on ivermectin, vaccines. The Straits Times. https://www.straitstimes.com/singapore/health/messaging-app-chat-groups-a-catchment-for-covid-19-misinformation

Qiu, J., Li, Y., Tang, J., Lu, Z., Ye, H., Chen, B., Yang, Q., & Hopcroft, J. (2016, February 20). The lifecycle and cascade of WeChat social messaging groups. https://arxiv.org/abs/1512.07831

Renaldi, A. (2021, March 24). 'There's no virus here': An epic vaccine race against all odds in Indonesia. Washington Post. https://www.washingtonpost.com/world/asia_pacific/covid-vaccines-indonesia-rollout/2021/03/24/bee98662-6b84-11eb-a66e-e27046e9e898_story.html

Reuters Institute for the Study of Journalism. (2019). Digital news report. Reuters Institute for the Study of Journalism.

Rich, M., Albeck-Ripka, L., & Inoue, M. (2021, September 3). These countries did well with Covid—So why are they slow on vaccines? The New York Times. https://www.nytimes.com/2021/04/17/world/asia/japan-south-korea-australia-vaccines.html

Rossini, P., Stromer-Galley, J., Baptista, E., & de Oliveira, V. (2020). Dysfunctional information sharing on WhatsApp and Facebook: The role of political talk, cross-cutting exposure and social corrections. New Media and Society, 2430–2451.

Seah, J., & Tham, B. (2021). Ministries of truth: Singapore's experience with misinformation during COVID-19. Medium. Retrieved March 10, 2022, from https://medium.com/digital-asia-ii/ministries-of-truth-singapores-experience-with-misinformation-during-covid-19-2cd60d0c4b91

Silverman, C., & Singer-Vine, J. (2016, December 7). Most Americans who see fake news believe it, new survey says. BuzzFeed News. Retrieved March 11, 2022, from https://www.buzzfeednews.com/article/craigsilverman/fake-news-survey

Simon, F., Howard, P., & Nielsen, R. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute for the Study of Journalism.

Soon, C., & Goh, S. (2021). Singaporeans' susceptibility to false information. IPS.

Su, Y., & Thayaalan, S. (2021, December 21). Misinformation & government inaction fuel vaccine hesitancy in the Philippines. New Mandala. https://www.newmandala.org/misinformation-government-inaction-fuel-vaccine-hesitancy-in-the-philippines/

Tan, A. (2021, November 7). Content from anti-vaccination YouTube channel removed for violating guidelines. The Straits Times. https://www.straitstimes.com/singapore/health/content-from-anti-vaccination-youtube-channel-removed-for-violating-guidelines

Tandoc, E. C. (2021). Singapore | Reuters Institute for the Study of Journalism. Reuters Institute. Retrieved March 10, 2022, from https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2021/singapore

Teo, K. X. (2021). Civil society responses to Singapore's online "fake news" law. International Journal of Communication, 15, 4795–4815.

Teo, K. X. (2022). POFMA'ed dataset. Retrieved March 8, 2022, from https://pofmaed.com/

Tham, D. (2022, July 27). Iris Koh, suspended doctor and assistant linked to Healing the Divide group get more charges. Channel News Asia. https://www.channelnewsasia.com/singapore/iris-koh-jipson-quah-doctor-healing-divide-covid-19-vaccine-moh-2837986

Thompson, S. A. (2022, February 28). Fed up with Google, conspiracy theorists turn to DuckDuckGo. The New York Times. https://www.nytimes.com/2022/02/23/technology/duckduckgo-conspiracy-theories.html

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vraga, E., & Bode, L. (2018). I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 21(10), 1337–1353. https://doi.org/10.1080/1369118X.2017.1313883

Wong, S. (2021, October 15). MOH counters false claims by local website Truth Warriors on use of Ivermectin for Covid-19. The Straits Times. https://www.straitstimes.com/singapore/health/moh-counters-false-claims-by-local-website-truth-warriors-on-use-of-ivermectin-for

Wong, T. (2019). Singapore fake news law polices chats and online platforms. BBC News. https://www.bbc.com/news/world-asia-48196985

World Health Organization. (2014, October 1). Report of the SAGE working group on vaccine hesitancy. World Health Organization. Retrieved March 10, 2022, from https://www.who.int/immunization/sage/meetings/2014/october/1_Report_WORKING_GROUP_vaccine_hesitancy_final.pdf

World Health Organization. (2019). Ten threats to global health in 2019. World Health Organization. https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019

Yong, N. (2021, November 12). 'No basis' for 'intelligent vaxxer' group Healing the Divide's lawsuit: SPH. Yahoo News Singapore. https://sg.news.yahoo.com/no-basis-intelligent-vaxxer-group-healing-the-divide-lawsuit-sph-084258817.html

Zhang, L. M. (2021). Bigger push to vaccinate more seniors: Ong Ye Kung. The Straits Times. https://www.straitstimes.com/singapore/bigger-push-to-vaccinate-more-seniors-ong-ye-kung
Chapter 4
Orders of Discourse and the Ecosystem of Rumour Management on WeChat Mingyi Hou
Abstract This study investigated the symbolic meanings and cultural implications of rumour management practices on WeChat through the lens of rumour management as the exertion of power in maintaining and challenging the order of discourse. Through a digital ethnographic approach, the study identified the actors that debunk rumours on WeChat. Debunking rumours does not remove all the visibility of a rumour text; instead, some parts of the rumour content have to be exposed and then recontextualised into a new discourse in which they are refuted. Rumours on WeChat cover event-based false information, debatable institutionalised knowledge and perceived false political claims from China's ideological opponents in international politics. Controlling the spread of rumours is therefore more than distinguishing truth from falsehood; it involves meta-discursive practices by which different professional groups and the dominant political power in China assert the truth. The study contributes to a critical understanding of the rumour ecology on WeChat.

Keywords Rumour management · WeChat · Digital ethnography · China
4.1 Introduction

This study investigates the symbolic meanings and cultural implications of rumour management practices on WeChat through the lens of rumour management as the exertion of power in maintaining and challenging the order of discourse (Foucault, 1971). Rumours are claims made about a certain person, object or situation without secured epistemological standings (Buckner, 1965; Fine, 2007). According to this definition, rumours can be true. In Chinese public discourse, however, rumour is linked to falsity and viewed as the major embodiment of online falsehoods.
WeChat is China's largest mobile instant messaging application, providing a wide range of functions: (voice) messaging, calling, social networking, publishing and digital payment. It also serves as an infrastructural platform carrying third-party lightweight applications (Plantin & de Seta, 2019). Launched by Tencent in 2011, WeChat had 1.263 billion monthly active users in 2021 (WeChat Open Lecture Pro, 2022, January 7). Due to its voice input affordance, WeChat has become an important digital medium for users who are not familiar with Romanised Chinese input, many of whom are elderly people (Harwit, 2017). The mass adoption of WeChat also coincided with the development bottleneck of the microblogging platform Weibo in 2013, when opinion leaders and public intellectuals were confronted with strict media policies. Many of them migrated to WeChat official accounts (Ng, 2015; Tu, 2016). Nowadays, influential public figures, brands and governmental institutions have taken up WeChat as a platform to communicate with their audiences and disseminate their messages. Given its embedding in everyday life and its diverse range of users, WeChat has become an important digital space in China.

Studies using a normative approach have investigated WeChat rumours' psychological mechanisms, transmission models and intermedia agenda-setting patterns, with the aim of providing solutions for rumour management (Dou et al., 2018; Li & Yu, 2018; Zeng et al., 2019). Critical media studies often examine politically sensitive rumours and contextualise anti-rumour campaigns within the framework of media censorship and political protest (Creemers, 2017; Harwit, 2017; Jiang, 2016; Ng, 2015). While the normative approach reinforces the power relations in online public space by demarcating the line between official and non-official information, the critical approach has not yet provided a comprehensive analysis of the actors, content and cultural logic of WeChat rumour management. In particular, the critical approach does not investigate the everyday scenarios of false rumours, where the falsehood is indeed harmful and is supported by the business model of content producers on the WeChat platform. In this sense, rumour as a term in the Chinese context carries cultural baggage similar to that of "fake news" in the Western context (Wardle, 2018). Fake news has been defined as "deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design" (Gelfert, 2018, p. 108). Scholars may disagree on the intention and effect of deception when defining fake news, but, more importantly, partisan rhetoric can appropriate the term to dismiss or counter news content that diverges from its ideological stances. Similarly, false rumours disrupt the healthy functioning of the public sphere, but the term is also prone to political appropriation to restrict free speech.

This study aims to provide a critical yet comprehensive investigation of the dynamics of WeChat rumours. Wardle (2018) proposes that the term "fake news" cannot describe the nuances of online falsehood, and that researchers should examine the information disorder of contemporary digitalised public spheres. Information disorder encompasses different types of harmful online content, the abovementioned definition of fake news being one of them. However, online false information does not all mimic the news format.
Several genres beyond fake news call for more academic attention, including, for instance, inaccurate posts and information shared on public
social media and closed messaging apps. The false rumours in this study fall into this category. Wardle's framework also captures the global scale, complex motivations, dissemination models and platforms of such content. Economically driven fake news can profit because the online advertisement model subsidises both "real" and "fake" news (Braun & Eklund, 2019). While Wardle's framework enables this study to recognise rumour as a manifestation of information disorder, this study moves to the other side of the coin by conceptualising WeChat rumour management as a practice to restore and maintain certain "information orders" in a disordered information landscape. The study also addresses the multiplicity of orders, because WeChat as an infrastructural platform (van Dijck et al., 2018) organises a multi-sided market by aggregating transactions among end-users and several third parties. WeChat's business model is hence completed by several parties, including individual users, commercially driven content producers and governmental institutions. The platform is also constantly required by the government to stem the spread of rumours.

The maintenance and restoration of information orders are examined through the theoretical lens of the order of discourse. According to Foucault (1971), "in every society the production of discourses is at once controlled, selected, organised and redistributed" (p. 8). It is through discourses that we make meaningful statements about a particular topic and about the subjects being governed, and that normality is established. Rumour, given its unofficial and unverified status, is one way of challenging the established orders of discourse. Debunking rumours then restores and maintains such orders. While information disorder is a problematic scenario in Wardle's framework, Foucault's work on discourse indicates that the orderliness of information is hierarchical and imbued with power dynamics. Investigating rumour management practices can therefore unravel the power relations in producing and governing internet content. An analysis of rumour debunking techniques and texts helps to elucidate what are perceived as trustworthy information sources, how truth is asserted and, importantly, how the challengers (rumour fabricators) are morally evaluated. In this way, this study does not depoliticise the nature of rumour management, but proposes to investigate the political struggle over discourse on a more comprehensive scale.

The study is guided by these research questions: (1) What are the rumour debunking practices on WeChat and which actors are involved in these practices? (2) What types of content are defined as rumours? (3) How are the fabricators of rumours constructed in rumour debunking texts? The study adopts a digital ethnographic approach (Varis & Hou, 2019) to contextualise local rumour-related discourses, practices and technologies against the broader context of internet governance policies, the digital knowledge industry and the sociocultural landscape in China. It contributes to a critical understanding of the multiple constructions of the reasons for and solutions to online false information in the Chinese context.
4.2 Literature Review

4.2.1 Rumour as Information Disorder in Chinese Digital Space

Rumours are defined as "unverified and instrumentally relevant information statements in circulation that arise in contexts of ambiguity, danger or potential threat, that function to help people make sense and manage risk" (DiFonzo & Bordia, 2007, p. 19). Two criteria are crucial in defining what a rumour is. Firstly, a rumour is an unofficial statement; it is unconfirmed at the time of transmission (Buckner, 1965; Fine, 2007). Secondly, a rumour provides instrumentally relevant information about a particular topic. It is different from gossip (evaluative social talk), urban legend (storytelling that conveys values) (DiFonzo & Bordia, 2007) and conspiracy theories (which involve the conspiracies and sinister plots of powerful groups). Early rumour studies date from World War II (WWII) and associated rumours with ambiguous, dangerous and threatening social environments (Allport & Postman, 1946; Knapp, 1944). These foundational rumour studies provide a map for identifying how rumours are defined on WeChat.

To gain a holistic view of the rumour management ecosystem, it is important to see who the actors are. When explaining the psychological motivation for passing on rumours (Allport & Postman, 1946; Knapp, 1944), scholars focused on the emotions and sentiments of social groups. Rumours have been found to express people's hopes, fears, suspicions and hostilities. During WWII, scholars advised that anti-rumour campaigns should reveal rumours as enemy propaganda that sought to destroy morale. More recent studies conceptualised rumour as collective sense-making, instead of individual knowledge. The frequency, diffusion, boundaries, divisiveness and stability of rumours are shaped by the features of the publics and their social environments. To summarise, these studies theorised rumours as a type of information that involves, and is configured by, a collective of people. In contrast, Hu's (2009) study on Chinese internet rumours found that official voices often separated the individual malicious rumour fabricator from the naive and innocent citizens who were deceived by the message. The fabricator was called out for their ill intentions, and the public was warned not to believe in or further disseminate the rumour.

Academic studies on internet rumours in China are divided into two strands. The first strand holds a critical view of China's internet censorship policies, thus conceptualising rumour, a type of unofficial voice, as social protest (Hu, 2009). Anti-rumour campaigns are perceived as a means of suppressing ideological dissidents and silencing the public (Creemers, 2017; Harwit, 2017; Jiang, 2016; Ng, 2015). These studies highlight the need to consider rumour management as a power-invested practice to govern who can say what in public spheres. However, most of the rumours that are censored in everyday scenarios are indeed false. The discourse about rumour falls into a muddled debate similar to that over "fake news" in the West. False rumours misinform the public but are prone to being appropriated by political
propaganda. Therefore, this study strives to provide a more nuanced yet critical understanding of WeChat rumour management.

The second strand of rumour studies in China seeks to provide policy solutions. These studies identify rumour categories, analyse the linguistic features of rumour texts, construct rumour transmission models, explain psychological motivations (Dou et al., 2018; Li & Yu, 2018; Zeng et al., 2019; Zhou, 2019) and explore intermedia agenda-setting patterns (Guo & Zhang, 2020). They also adopt a narrower definition of rumour: unverified information which turns out to be false or inaccurate. However, these studies are uncritical of the power relations involved in rumour management practices, although they provide more up-to-date information on the features of rumours and their evolution on WeChat. The rumour categories established in these studies include healthcare and food safety, celebrity gossip and current affairs. According to Li and Yu (2018), healthcare and food safety rumours form the largest category. The abovementioned studies also specify an important economic incentive for creating rumours. As rumours fill information gaps and play to people's emotions, they attract online traffic and subscribers.

The features of WeChat rumours inspire this study to see rumour as a type of information disorder. Wardle (2018) suggests that "the ecosystem of polluted information extends far beyond [content] that mimics 'news'" (p. 951). Several problematic content genres are identified as disordered information, including inaccurate posts and information shared on public social media and closed messaging apps. The false rumours in this study belong to this category. However, not all rumours can be mapped onto the information disorder framework, as it classifies false information according to falsity and intention to harm. In contrast, a rumour is defined by its unstable epistemological status, the struggle over which lies in the unequal power relationship between officialdom and ordinary citizens in shaping public information. True rumours without malicious intent are hence beyond Wardle's framework. Wardle's holistic view of the digital information landscape is shared by journalism and political economy scholars who investigate the incentive structure of online news publishing (Braun & Eklund, 2019; Couldry & Turow, 2014). Venturini (2019) suggests that problematic news content should be called junk news, as much of it seeks not cognitive adherence but clicks. In other words, whether readers believe the news is not important, as long as they contribute to its click count. Rumours on WeChat share this incentive of garnering online traffic and attention. Paying attention to the incentive structure of WeChat rumours helps shed light on the effectiveness of rumour debunking practices on WeChat.
4.2.2 Foucault's Theory on the Order of Discourse

While rumour illustrates a type of information disorder in Wardle's framework, the epistemological struggle characterising rumours inspires this study to conceptualise rumour debunking as a practice to restore and maintain the "information order". Foucault (1971) said that "in every society the production of discourse is at
once controlled, selected, organised and redistributed by a certain number of procedures…" (p. 9). To a certain degree, we can argue that such procedures allow for the production of officially verified information. In contrast, rumours emerge outside of these formally sanctioned procedures, hence their dubious epistemological status. Several procedures theorised by Foucault shed light on the analysis in this study. The most straightforward type of discourse control is the prohibition of certain subject matters, such as sexuality and politics. Not everyone has the right to say anything, as what and how to speak about politics are grounds of power struggle. Prohibition directs us to different content curation practices on WeChat. According to Ng (2015), WeChat applies an automatic filtering mechanism: certain sensitive political phrases and words cannot be sent or published in the first place and are thus prohibited. Through the analysis of rumour debunking discourses, this study will illustrate that rumour debunking on WeChat is not used as a major discursive practice to prohibit political discourse.

Another relevant control procedure is the division between true and false. In Foucault's account, the means to truthfulness are historically constituted. In ancient Greece, true discourse was, among other requirements, that pronounced by men using required rituals, such as conventions, scripts and set-ups in certain cultural, political and religious contexts. The contemporary scientific paradigm, however, prescribes drastically different objects, means and functions of truthful knowledge. In the case of rumours on WeChat, medical and healthcare topics account for the largest category (Li & Yu, 2018). Continuing Foucault's line of argument, this study examines what content is attributed as scientific medical and healthcare knowledge and what is not.

Medical and healthcare rumours merit some scrutiny, given the long-entrenched controversy over Traditional Chinese Medicine (TCM). TCM in the contemporary Chinese media landscape has received much visibility as it rides on the tide of neoliberal self-care discourse (Zhu, 2021). The ancient healing philosophy yangsheng (meaning nurturing and nourishing life) is promoted to guide people in their food and lifestyle choices. These TCM-related media programmes promote individual responsibility for healthcare by portraying people as free to choose their own lifestyles, without acknowledging the inequalities that constrain their choices. The same neoliberal approach to regulating and managing one's health is also manifested in the newly emerging digital knowledge industry in China. In particular, knowledge media are professional online content outlets that create and monetise knowledge products. For example, Western medical professional groups like Dingxiang Doctor launch their health knowledge media in the form of WeChat official accounts (Wang, 2018). Besides debunking health rumours, such accounts also provide paid services like online consultation and online health courses. In this sense, this study explores how WeChat rumour management is embedded in China's digital knowledge industry by focusing on the discursive struggle between TCM and Western medicine.

Foucault also theorised the internal procedures for controlling discourse in society. Relevant to rumour management are the commentary and the author. The commentary, or secondary text, has an intertextual relationship with the original text.
Rumour debunking is therefore a commentary on the rumour text, as it evaluates the
status of information and the associated actors. The author, or more precisely the authorship of a text, is used as "a principle of grouping of discourses, conceived as the unity and origin of their meanings, as the focus of their coherence" (Foucault, 1971, p. 14). For example, an artwork becomes authentic, hence valuable, once we ascertain that it is an original work by its author. We interpret the meanings of an artwork in the context of its author's artistic style. However, not every text genre in society derives meaning from its author. The examples given by Foucault are decrees and contracts, formal documents requiring signatories, and technical instructions, which circulate anonymously to inform the users of technologies. As mentioned earlier, the morphology of rumours is constituted by collective sense-making, instead of individual knowledge. In this sense, rumours are also widely circulated texts without an easily identifiable author, yet they call upon readers to believe in and/or further disseminate them. This directs us back to Hu's (2009) discussion of the official attribution of rumours in China, where false information is attributed to an author with malicious intentions. The commentary and author dimensions of discourse control allow us to explore rumour-related actors, including the rumour fabricator, disseminator and believers, as they are constructed in rumour management practices.
4.2.3 Online Content Regulation in China

Rumour management practices on WeChat have to be situated within the larger context of how the government regulates digital platforms and online information in China. The Cyberspace Administration of China (CAC) is the agency responsible for directing, coordinating and supervising online content management and handling the administrative approval of online news services. As an office within the State Council, the CAC is under the supervision of the Party's Central Leading Group for Cyberspace Affairs (Creemers, 2017). Importantly, there are also local CAC commissions in different provinces and cities. Various content regulation campaigns have been coordinated by both the central and local CAC. For instance, in 2021, the Qinglang campaign (literally meaning a clean and righteous online environment) was initiated to target several categories of online content deemed harmful (CAC, 2021; Deng & Qu, 2022). Besides internet rumours, the categories included historical nihilism, astroturfing, algorithmic abuse, toxic content for teenagers, the excessive use of popularity metrics for entertainment content and verbal aggression in fandom communities (CAC, 2021). While campaigns are initiated by the central CAC and implemented at local governmental levels, platforms may be invited for "regulatory talk", a type of administrative law enforcement, to carry out tangible technological solutions for content moderation.

Existing research on anti-rumour campaigns typically cites two high-profile cases (Creemers, 2017; Jiang, 2016; Ng, 2015). In 2013, two opinion leaders on Weibo (Qin Huohuo and Lier Chaixi) were sentenced to jail because they had fabricated a series of rumours, including ones related to the Wenzhou train collision. Later, Charles Xue, an entrepreneur and public intellectual on Weibo, was arrested for a
highly disputed reason: group licentiousness, a crime referring to sexual activities involving multiple people. This crime is under heated domestic debate given its interference with private lives and sexual freedom. Both cases received wide mass media coverage, and the accused confessed their wrongdoings on public television. These two cases marked the beginning of China's media policies to regulate the online speech of public actors. They also started the association of gongzhi (the vernacular term for public intellectual) with rumour fabricators who have malicious intent (Hou, 2021). Jiang (2016) observed that the news reports of these anti-rumour campaigns publicly shamed a few opinion leaders. Such negative publicity also intimidated others who were politically expressive online. Thus, debunking rumours does not completely deprive the rumour text of visibility. Instead, a certain level of visibility is kept so that the truthful message has room to restore the order of discourse. During the intense ideological confrontation of WWII, the rumour and media psychology scholar Knapp (1944) recommended, as a rumour control measure, exposing rumours for their inaccuracies and falsehoods or making the rumourmonger a subject of caricature. Such delicate control of information visibility informs the analysis of rumour debunking texts in this study.
4.3 Methodology

This study used digital ethnography to explore the rumour management ecosystem on WeChat in China. An ethnographic approach views rumour management practices as locally situated experiences that involve specific social contexts, platforms and the ways in which social activities are conducted through linguistic and multimodal signs (Varis & Hou, 2019). This approach elicits an in-depth understanding of both the technical solutions for and the ideological significance of rumour management. In particular, an ethnographic approach contextualises local rumour debunking practices against the broader landscape of internet governance, the digital knowledge industry and political discourses in China.

Two platform terms are key to WeChat's functionality. As an infrastructural platform, WeChat enables individual users, corporate users and public institutions to launch their own WeChat official accounts within the platform. These accounts are a publishing channel that enables content creators to post blog-like articles; they are comparable to the verified brand, institutional and public figure accounts on Facebook and Twitter. WeChat official accounts are also programmable, meaning that a third party's services can be provided via the interactive functions of these accounts. Such accounts are termed mini programmes by WeChat. For instance, a restaurant can design a mini programme menu and ask customers to order through the mini programme on their own mobile phones when they dine in at the restaurant. In this study, the publishing-oriented accounts are addressed as official accounts, and the interactive, service-oriented accounts are referred to as mini programmes.
App walkthrough was used as a method for data collection (Light et al., 2018). Digital platforms are designed with expectations of how they will be used in real-life scenarios, and the app walkthrough method helped me to establish the use environment of rumour debunking functions on WeChat. The walkthrough was performed at several locations within WeChat. Initially, I explored how WeChat public posts are flagged as rumours. This was done by experimentally pressing the flagging button of a post, following the directions provided on the interface step by step, and arriving at the suggested categories of problematic content. This procedure shed light on the digital practices one needs to perform to flag rumours and the types of content that are categorised as rumours by WeChat. Secondly, keyword searches for "rumour debunking" were conducted on the WeChat search engine to identify the actors and content of rumour debunking. This brought me to the most visible rumour debunking functions: a myriad of mini programmes and official accounts. I documented and categorised all the actors who launched the mini programmes and official accounts involved in rumour debunking (see Sect. 4.4). Then, I focused on the mini programmes and official accounts from each category that had the most regular updates; in other words, I purposively sampled the most active accounts for further observation. The sample includes five rumour debunking official accounts, three mini programmes and three official accounts which published articles that were later debunked. App walkthrough was then carried out on these sample accounts to observe how their interfaces direct user activity and present the rumour and debunking texts. While browsing all the debunking texts posted on these interfaces since 2021, I selected the recurring, the uncommon and the most controversial debunking cases as data examples. These cases functioned as key incidents to demarcate the patterns of rumour debunking practices. Data collection took place from 15 to 26 October 2021, and a total of 68 screenshots from the keyword search results, the sampled accounts and WeChat's content flagging functions were collected. The screenshots included several types of data: rumour texts, debunking texts, lists of rumour debunking actors and procedures for flagging content as rumour.

To analyse the multimodal data (text, images, videos and platform interface design), I applied digital discourse analysis (Jones et al., 2015). This method considers digital discourse (in the case of this study, rumour-related texts, pictures and app interfaces) as digital practices fulfilling certain social actions. The analysis was performed around particular rumour debunking events and focused on the text, context, action and interaction, and power and ideology dimensions of the discourse. In so doing, the analysis surfaced both the contexts of internet governance and the power struggles among different knowledge authorities and actors. To examine the action and interaction designed by the rumour debunking functions, the analysis focused on the WeChat official account and mini programme interface features and on how the debunking texts address different audiences.
4.4 Results

4.4.1 The Multiplicity of Rumour Management Actors

A wide array of actors engages in rumour debunking practices on WeChat through mini programmes and official accounts. A search using the keyword "rumour debunking" on the platform yielded 24 mini programmes and 141 official accounts run by regulatory institutions, online news media, traditional media, individual content creators, knowledge media, other social media platforms and WeChat itself. Content creators publish rumour debunking posts on official accounts, and users can search for rumours by keyword on mini programmes. See Tables 4.1 and 4.2 for the different actors involved.

Table 4.1 Actors running rumour debunking mini programmes

Rumour debunking actor | Number | Examples
Individual content creators | 9 | "Rumor debunking Q&A"; "Rumor debunking Expert"
Regulatory institutions | 5 | "(Jiangsu) Cyber Police Rumor Debunking Assistant"; "Emergency Management of Henan Province"
Traditional media | 3 | "3·15 Rumor Debunking Platform by China Central Television"; "Jining Omni-media Rumor Debunking"
WeChat platform | 2 | "WeChat Rumor Debunking Assistant"; "Tencent All Citizens Seeking Truths"
Knowledge media | 4 | "China Association for Science and Technology"; "China Family Doctor Journal"
Other platforms and online media | 1 | "Qingdao News"
Total | 24 |

Table 4.2 Rumour debunking actors running WeChat official accounts

Rumour debunking actor | Number | Examples
Individual content creators | 90 | "A Few Things on Rumor Debunking"; "Arthur's Rumor Debunking Classroom"
Regulatory institutions | 34 | "Nanjing Internet Rumor Debunking (by Nanjing CAC)"; "Xilingol Rumor Debunking (by Internet Public Opinion Center of Xilingol League)"
Other platforms and online media | 7 | "Weibo Rumor Debunking (by Sina)"; "Toutiao Rumor Debunking (by ByteDance)"
Knowledge media | 7 | "China Food Safety (by China Food Newspaper)"; "Sichuan Huaxi Hospital"
Traditional media | 2 | "Shanghai Rumor Debunking (by Liberation Daily)"; "Yantai Networked Rumor Debunking Platform (by Yantai radio station)"
WeChat platform | 1 | "Rumor Debunking Filter"
Total | 141 |

Firstly, although individual content creators accounted for the largest category of rumour debunking actors, their official accounts and mini programmes were not active in updating content, and many contained no content at all. This suggests that individual users are not the key players in rumour management. Secondly, cross-platform efforts were observed. Weibo, the Chinese microblogging service, also runs a WeChat official account exposing rumour cases that were debunked on Weibo. Pinduoduo, an e-commerce platform in China, debunked false rumours relating to its services on WeChat. Noticeably, Pinduoduo used its rumour debunking account as an outlet for brand reputation management by clarifying false customer complaints relating to its service. Figure 4.1 illustrates a rumour debunking example from Pinduoduo. A customer alleged that they had bought a fake SONY camera from Pinduoduo, which turned out to be a door lock. Pinduoduo clarified that the customer had misappropriated a photo of a door lock that was sold on Jingdong, a competitor e-commerce platform. The debunking text disclosed that the consumer complaint was a "mischievous rumour" and implied that the post was a prank by the competitor platform. In this way, the debunking text functions to maintain the brand's reputation.

Fig. 4.1 Pinduoduo rumour debunking as brand image management

Thirdly, regulatory institutions had a visible presence in the rumour management landscape on WeChat. Noticeably, the institutions were of different levels, ranging from the central CAC to CAC offices of local administrative divisions in county-level cities and districts. The official accounts of local CAC offices debunked rumours based on the principles and practices promoted in the ongoing Qinglang campaign managed by the central CAC (see Sect. 4.2). They also adhered to a strict discourse
regime from the central CAC by using the same words, phrases and rhetoric as the original campaign text. The rumours mentioned by these local CAC offices were all tied to their local regions. This finding supports Creemers' (2017) observation of China's new internet governance landscape: content regulation does not only take place at the central level but is systematically passed on to local governments. While rumours are perceived as part of the information disorder and a problem that concerns the whole country, the manifestation of rumours is localised.

Local regulatory bodies also used their WeChat official accounts as an outlet for propagating the dominant political ideology; not all posts clarified or debunked rumours. Figure 4.2 shows a post by the Tianjin CAC. It reposted an article from People's Daily, the official newspaper of the Communist Party of China (CPC), in which the top Party leader of Tianjin stated that the city would "forcefully support the authority and centralised leadership of the CPC Central Committee". This phrase illustrates the official political scripts in China: it appears as a fixed expression and the exact wording is used. When local governments repeat it, it is a practice of pledging loyalty. Rumour debunking is a particular format of commentary discourse that corrects a primary text. However, this post did not make any corrections; it reposted the allegiance discourse of the local government, thereby repeating and amplifying official political scripts.

WeChat collaborates with rumour debunking institutions and runs the "WeChat Rumor Debunking Assistant" mini programme and the "Rumor Filter" official account. On the mini programme, users can search for rumours using keywords and view the posts which they had read previously but which were subsequently labelled as rumours. The debunked posts are listed on the main page of the mini programme (see Fig. 4.3) and curated into weekly and monthly charts on the Rumor Filter official account (see Fig. 4.4) that are pushed to subscribers. Users report problematic content, which is then distributed to participating rumour debunkers for review. The mini programme lists all the institutions that debunk a rumour (see Fig. 4.5). According to WeChat, about 780 institutions participate in the initiative, including the Chinese Academy of Sciences, the China Association for Science and Technology, knowledge media like Dingxiang Doctor, traditional media like Xinhua News Agency and People's Daily, and law enforcement institutions (e.g., the cyber police divisions of local governments).
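The crowdsourced flagging and institutional review workflow just described can be summarised schematically. The following is a minimal sketch in Python of how such a pipeline might be modelled; the class names, fields and the report threshold are hypothetical illustrations for exposition, not WeChat's actual implementation or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlaggedPost:
    """A public post that users have reported as problematic (hypothetical model)."""
    post_id: str
    title: str
    reports: int = 0
    debunkers: List[str] = field(default_factory=list)  # institutions that reviewed it
    confirmed_rumour: bool = False

class RumourReviewQueue:
    """Routes flagged posts to participating debunking institutions."""

    REVIEW_THRESHOLD = 3  # invented: route a post for review after N user reports

    def __init__(self, institutions: List[str]):
        self.institutions = institutions  # e.g., the ~780 participating bodies
        self.queue: List[FlaggedPost] = []

    def flag(self, post: FlaggedPost) -> None:
        # Crowdsourced gatekeeping: users, not experts, trigger the review.
        post.reports += 1
        if post.reports >= self.REVIEW_THRESHOLD and post not in self.queue:
            self.queue.append(post)

    def review(self, post: FlaggedPost, institution: str, is_rumour: bool) -> None:
        # An institution (e.g., a knowledge-media account or a local CAC office)
        # records its verdict; several institutions may review the same post,
        # so the debunking page can list all of its debunkers.
        if is_rumour:
            post.confirmed_rumour = True
            post.debunkers.append(institution)

    def debunked_chart(self) -> List[FlaggedPost]:
        # Confirmed rumours are curated into the charts pushed to subscribers.
        return [p for p in self.queue if p.confirmed_rumour]
```

Note that in this model a debunked post is annotated with institutional verdicts rather than simply deleted, mirroring the observation below that a confirmed rumour retains partial visibility in the published charts.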
4.4.2 The Discursive Structure of Rumour Debunking Texts

The discursive structure of rumour debunking on the WeChat Rumor Debunking Assistant mini programme merits more in-depth discussion. Figure 4.6 shows the debunking page that one sees when clicking on a rumour post listed on the main page. The interface is arranged into three sections from top to bottom. The rumour article, titled "Men need to party and drink with friends at least twice per week to maintain physical and psychological health", and the author's official account (named "In-depth research") are shown in the top section. A red stamp with
Fig. 4.2 Tianjin CAC reposting article from People's Daily
the Chinese character for rumour is placed next to the title, symbolising invalidation by authoritative voices. The second section, "Rumor Debunking Opinion", provides the rationale for the invalidation. In this example, the debunker, Dingxiang Doctor, cited medical facts on alcohol consumption, claiming that alcohol is a carcinogen and may lead to other diseases. This claim is used to refute the rumour text's claim that men should gather with their friends and drink to improve their psychological and physical health. The last section, "Rumor Debunking Source", provides the link to the full debunking article on Dingxiang Doctor's WeChat official account. This debunking page illustrates how rumour debunking commentary is used to control discourse. In this case, the secondary text (the debunking opinion) both repeated and refuted the primary rumour article. In doing so, the debunking page repeated only an excerpt from the rumour article. The section presenting the main argument of the rumour did not contain a hyperlink, and the primary text (the rumour) had been deleted by the system after being confirmed as a rumour. The commentary conveyed disapproval by bringing in additional medical information that went beyond the scope of the primary text.
Fig. 4.3 A list of rumours on the WeChat Rumor Debunking Assistant main page
However, such a debunking practice may not be effective, for several reasons. Firstly, potential rumours come from crowdsourced user flagging. The first gatekeeper of credible information is neither experts nor the authorities, but users. For such an approach to work, high information literacy among users is required. Such an approach also ignores a feature of rumour transmission: the plausibility and credibility of information are socially embedded, meaning that different information receivers have different perceptions of what is credible (Fine, 2007). Existing research recommends modelling rumour transmission trajectories to identify rumours based on the shape of transmission instead of rumour content (Li & Yu, 2018), as illustrated in the sketch after this paragraph. Moving away from semantics, this approach may help to bypass the varied perceptions of credibility and plausibility among individual users. Secondly, the high transmissibility of rumours motivates many WeChat official accounts to publish false sensational content to attract more followers. For instance, articles making similar claims to the "men drinking twice per week" article, or even the exact same article, remained on other official accounts. When WeChat debunks a rumour, it regards a specific article as a discrete digital object and removes only that one object from the system. However, the replicability of digital content (boyd, 2010) means that content creators can easily adapt popular articles and publish them on their own accounts. The rumour management strategy on WeChat thus deals only with individual tokens of information disorder, instead of incentivising quality content creation on the platform.
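The transmission-shape idea can be illustrated with a toy example. The sketch below is a simplified, hypothetical illustration of the general approach, not the model proposed by Li and Yu (2018): it derives structural features (size, depth, breadth) from forwarding records and applies an invented classification threshold.

```python
from collections import defaultdict

def cascade_features(edges):
    """Derive structural features from a forwarding cascade.

    `edges` is a list of (parent, child) forwarding records; the original
    post is the root, i.e., the one node that never appears as a child.
    """
    children = defaultdict(list)
    nodes, child_nodes = set(), set()
    for parent, child in edges:
        children[parent].append(child)
        child_nodes.add(child)
        nodes.update((parent, child))
    root = next(iter(nodes - child_nodes))

    depth, breadth = 0, 0
    frontier = [root]
    while frontier:
        breadth = max(breadth, len(frontier))              # widest generation
        frontier = [c for n in frontier for c in children[n]]
        if frontier:
            depth += 1                                     # longest forwarding chain
    return {"size": len(nodes), "depth": depth, "max_breadth": breadth}

def looks_like_rumour(features, depth_cutoff=6):
    # Invented heuristic: deep, chain-like cascades have been associated with
    # false content in diffusion research (cf. Vosoughi et al., 2018).
    return features["depth"] >= depth_cutoff

# Example: a chain of forwards, post -> u1 -> u2 -> u3
print(cascade_features([("post", "u1"), ("u1", "u2"), ("u2", "u3")]))
# {'size': 4, 'depth': 3, 'max_breadth': 1}
```

The design point is that such features are computed without reading the rumour's text at all, which is why the approach can sidestep users' varied perceptions of plausibility.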
Fig. 4.4 Monthly rumour chart on the Rumor Filter official account
Moreover, the commentary structure described in the previous section shows that WeChat rumour debunking does not prohibit discourse. It does not remove a rumour completely, because certain content of the rumour has to be presented and then recontextualised into a new discourse in which it is refuted. Equating this content moderation practice with the silencing of political dissident voices is hence mistargeted criticism. Ng (2015) investigated the censoring mechanism of WeChat and found that the platform blocks chat messages and automatically filters public posts that contain politically sensitive words. Thus, sensitive political topics do not receive any visibility on WeChat in the first place. The contrast between debunking rumours and censoring political dissent is clear (the sketch after this paragraph renders it schematically): rumours are negotiable discourse, and the legitimacy of the dominant political power will not be challenged if the false information can be discredited.
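The two moderation logics contrasted here, prohibition versus debunking, can be expressed as two different operations. In the hypothetical sketch below, a blocklist filter (the kind of mechanism Ng (2015) describes) suppresses a message entirely before publication, while debunking keeps an excerpt visible and wraps it in a refuting commentary. The blocklist contents, excerpt length and page fields are invented for illustration; they are not Tencent's actual mechanism.

```python
from typing import Optional

BLOCKLIST = {"<sensitive phrase>"}  # placeholder; real term lists are not public

def prohibit(message: str) -> Optional[str]:
    """Pre-publication filtering: a matching message gets no visibility at all."""
    if any(term in message for term in BLOCKLIST):
        return None  # silently dropped before it is ever published
    return message

def debunk(rumour_text: str, opinion: str, debunker: str) -> dict:
    """Post-hoc debunking: an excerpt of the rumour stays visible,
    recontextualised inside a commentary that refutes it
    (cf. the three-section page shown in Fig. 4.6)."""
    return {
        "excerpt": rumour_text[:80],   # partial visibility, not the full text
        "verdict": "rumour",           # the red 'rumour' stamp
        "debunking_opinion": opinion,  # rationale for the invalidation
        "source": debunker,            # link to the debunking institution
    }
```

The asymmetry between the two return values captures the argument of this section: prohibition erases discourse, whereas debunking deliberately preserves some of it so that the refutation has something to act upon.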
4.4.3 The Stretched Definition of Rumour

When users report problematic content, WeChat requires them to classify it under 10 pre-defined categories of problematic content, including "false information" (see Fig. 4.7). In line with normative rumour studies, the platform also
Fig. 4.5 Institutions that debunk rumours on the WeChat Rumor Debunking mini programme
adopts a narrow definition of what constitutes a rumour, considering its falsity rather than its unverified status. False information is further divided into four sub-categories (see Fig. 4.8). Among the 100 debunked rumours found on the Rumor Debunking Assistant mini programme published between May and November 2021, only 2 per cent had political content mentioning governmental policies and politicians. Medical and health information posts and posts on social events accounted for 74 and 24 per cent of rumour posts respectively. This finding confirms previous research on the prevalence of medical and health-related rumours on WeChat (Li & Yu, 2018). The distribution also shows that the prohibition of politically sensitive content does not operate mainly through the discursive practices of rumour debunking.

Rumour is different from gossip and urban legend because it informs people's reactions to situations of ambiguity and threat (DiFonzo & Bordia, 2007). Rumour therefore meets people's informational and psychological needs when they feel insecure. This implies that the context of a rumour is often concrete and situated, like an incident, or tied to a particular time and space, like a war. However, for medical and healthcare rumours on WeChat, what is considered a rumour is often debatable medical knowledge. As shown in Fig. 4.6, the claim that drinking alcohol benefits men's psychological and physical health is disproved as false medical knowledge. The definition of rumour in the WeChat context is thus expanded to include debatable institutionalised knowledge in an everyday context.
Fig. 4.6 The discursive structure of the rumour debunking page
A prominent debate is the discursive struggle between TCM and Western medical knowledge. Figure 4.9 shows an example of a debunked rumour claiming that certain fruits (e.g., pear, water chestnut and monk fruit) help clear toxins from the lungs and are thus beneficial to smokers. In TCM discourse, Zangfu (viscera) theory holds that the “Heart”, “Spleen”, “Liver”, “Lung” and “Kidney” are not just parts of the human anatomy but correspond to specific bodily systems in a metaphysical sense (Zhang et al., 2011). For example, the “Heart” corresponds to one’s circulatory system. TCM also applies food therapy, based on the principle that certain foods can cure diseases. The debunking text in Fig. 4.9 starts by using Western medical discourse as its premise, ignoring the ontological status of Zangfu. It explains that “Food is processed in the digestive system and becomes molecules in the blood”; “molecules” and “the digestive system” are words pointing towards the Western medical system, under which “food cannot remove the dust from lungs nor remove particles from blood”. However, the “Lung” addressed by the original rumour text is a metaphysical TCM concept, which the debunking text appropriates as the anatomical category “lung”. This example shows that WeChat rumour debunking incorporates the epistemological struggle between different knowledge institutions. The debunking text strives
Fig. 4.7 The 10 categories of problematic content
Fig. 4.8 The four categories of false information
Fig. 4.9 The discursive struggle between TCM and Western medicine
to control the order of discourse by asserting the truth. In particular, the epistemological status of Western medicine, based on modern science, prescribes the possible, observable, measurable and classifiable objects to be known. Medical knowledge that falls outside this epistemological system is regarded as rumour on WeChat. So far, the analysis has shown that the scope of WeChat rumours ranges from false information about a particular event to debatable institutionalised medical knowledge. The following analysis illustrates how rumours are extended to include debates in discourses on international politics. The central CAC runs an Internet rumour debunking platform, which has a presence on several social media services. On WeChat, it has a rumour debunking mini programme and an official account. Figure 4.10 shows an article disproving CNN’s news report on human rights issues in the Uyghur area in China. By refuting CNN’s report on this rumour debunking platform and suggesting the news source was an “experienced actor”, the central CAC claimed that CNN’s journalistic practices are untrustworthy. Here, the concept of rumour is expanded to include the content format that would be described as “fake news” in the Western context. Figure 4.11 shows a press conference held by the Chinese Ministry of Foreign Affairs, at which the Chinese spokesperson responded to G7 trade ministers’ claims of forced labour in China. The post explicitly stated that G7 “hyped up” “the rumour”. This post is therefore a diplomatic
discourse. However, by publishing it on a rumour debunking platform, the central CAC claimed that G7’s opinions on Chinese domestic and international policies were rumours. These two examples illustrate that rumours on WeChat also include perceived untrustworthy and ill-intended international political discourse from ideological opponents. Managing such rumours therefore entails managing the order of discourse and asserting the right to discourse in geopolitical struggles.
Fig. 4.10 Central CAC criticises CNN
Fig. 4.11 Press conference of the Foreign Ministry refuting G7’s claim
4.5 Discussion and Conclusion

In this study, I have investigated the rumour debunking ecosystem by examining rumour contents and rumour debunkers. As medical, healthcare and food safety rumours account for the largest rumour category on WeChat, WeChat should enact stricter policies that restrict the monetisation of these types of content to accredited professionals. A wide array of actors is involved in debunking rumours. In response to government policies, WeChat has implemented processes to address and remove rumours. However, it outsources the task to individual users and knowledge institutions. Although this approach leverages collective intelligence, it does not change the incentive structure of rumour creation and dissemination. It also removes each rumour as an individual and discrete incident, ignoring the replicability of digital content. Rumour on WeChat has become an umbrella term for online falsehoods, encompassing event-based false information, debatable institutionalised knowledge and perceived false political claims from China’s ideological opponents in international politics. Controlling the spread of rumour is therefore more than distinguishing truth from falsehood; it is a meta-discursive practice through which different professional groups and the dominant political power assert truth. This rumour management ecosystem has cultural significance for maintaining and challenging the orders of discourse (Foucault, 1971) in contemporary China. Firstly, it is inaccurate to equate local-level rumour management practices with the political strategy of silencing dissident voices (Creemers, 2017; Jiang, 2016; Ng, 2015). Rumour debunking is a commentary in which selective visibility of the rumour text is retained so that its falsity can be clarified. In contrast, politically sensitive words on WeChat can be immediately prohibited (Ng, 2015). The analysis in this study does not depoliticise the nature of rumour management but proposes to investigate the political struggle of discourse by distinguishing the different practices that govern discourse. Different levels of CAC offices carry out online content moderation campaigns to amplify dominant political narratives. Expanding the concept of rumour to incorporate negative claims about China made by ideologically opposed countries demonstrates the state’s struggle for discursive power in international politics. Moreover, analysing rumour management on WeChat broadens the scope of observation when discussing the order of discourse. Scholars have criticised a prevailing reductionist approach to studying the Chinese internet, as existing studies focus on the dualism between the authoritarian state and subversive grassroots (Guan, 2019; Leibold, 2011; Schneider, 2018). This study shows that knowledge media and the digital healthcare industry participate in debunking rumours as a type of content production for their brands. China’s online falsehoods are thus connected to globalised neoliberal socioeconomic orders. The long-entrenched debate between TCM and Western medicine manifests in rumour debunking practices as well, suggesting that different professional and intellectual groups are participating in
this process. Future research should examine these professional and intellectual groups to understand how they shape the trustworthiness of different health information.
References

Allport, G. W., & Postman, L. (1946). An analysis of rumor. Public Opinion Quarterly, 10(4), 501–517.
Boyd, D. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self (pp. 47–66). Routledge.
Braun, J. A., & Eklund, J. L. (2019). Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism, 7(1), 1–21.
Buckner, H. T. (1965). A theory of rumor transmission. Public Opinion Quarterly, 29(1), 54–70.
CAC. (2021, October 19). Central CAC launched “Clean and Righteous Summer Adolescent Internet Environment Management” program. http://www.cac.gov.cn/2021-07/21/c_1628455293580107.html
Couldry, N., & Turow, J. (2014). Advertising, big data and the clearance of the public realm. International Journal of Communication, 8(1), 1710–1726.
Creemers, R. (2017). Cyber China: Upgrading propaganda, public opinion work and social management for the twenty-first century. Journal of Contemporary China, 26(103), 85–100.
Deng, I., & Qu, T. (2022, March 12). China’s internet watchdog pushes for deeper engagement with internet platforms in 2022 to clean, control online content. South China Morning Post. https://www.scmp.com/tech/policy/article/3170153/chinas-internet-watchdog-pushes-deeper-engagement-internet-platforms
DiFonzo, N., & Bordia, P. (2007). Rumor, gossip and urban legends. Diogenes, 54(1), 19–35.
Dou, Y., Li, H., Zhang, P., & Wang, B. (2018). Research on ecological governance countermeasures of WeChat political rumors. Journal of Modern Information, 38(11), 33–38 [窦云莲, 李昊青, 张鹏, 王斌 (2018). 微信政治谣言的生态治理对策研究. 现代情报, 38(11), 33–38].
Fine, G. A. (2007). Rumor, trust and civil society: Collective memory and cultures of judgment. Diogenes, 54(1), 5–18.
Foucault, M. (1971). Orders of discourse. Social Science Information, 10(2), 7–30.
Gelfert, A. (2018). Fake news: A definition. Informal Logic, 38(1), 84–117.
Guan, T. (2019). The ‘authoritarian determinism’ and reductionisms in China-focused political communication studies. Media, Culture & Society, 41(5), 738–750.
Guo, L., & Zhang, Y. (2020). Information flow within and across online media platforms: An agenda-setting analysis of rumor diffusion on news websites, Weibo and WeChat in China. Journalism Studies, 21(15), 2176–2195.
Harwit, E. (2017). WeChat: Social and political development of China’s dominant messaging app. Chinese Journal of Communication, 10(3), 312–327.
Hou, M. (2021). The authenticity, cultural authority and credibility of Weibo public intellectuals. In S. Gao & X. Wang (Eds.), Unpacking discourses on Chineseness: The cultural politics of language and identity in globalizing China (pp. 146–166). Multilingual Matters.
Hu, Y. (2009). Rumor as a social protest. Journal of Communication and Society, 2009(9), 67–94 [胡泳 (2009). 謠言作為一種社會抗議. 傳播與社會學刊, 2009(9), 67–94].
Jiang, M. (2016). The coevolution of the Internet, (un)civil society, and authoritarianism in China. In J. deLisle, A. Goldstein, & G. Yang (Eds.), The Internet, social media, and a changing China (pp. 28–48). University of Pennsylvania Press.
Jones, R. H., Chik, A., & Hafner, C. A. (2015). Discourse analysis and digital practices: Doing discourse analysis in the digital age. Routledge.
Knapp, R. H. (1944). A psychology of rumor. Public Opinion Quarterly, 8(1), 22–37.
Leibold, J. (2011). Blogging alone: China, the internet, and the democratic illusion? The Journal of Asian Studies, 70(4), 1023–1041.
Li, B., & Yu, G. (2018). The discourse space and communication field of online rumours in the “post-truth” era: A study of 4160 rumours on WeChat Moments. Journalism Bimonthly, 2018(2), 103–121 [李彪 & 喻国明 (2018). “后真相”时代网络谣言的话语空间与传播场域研究——基于微信朋友圈4160条谣言的分析. 新闻大学, 2018(2), 103–121].
Light, B., Burgess, J., & Duguay, S. (2018). The walkthrough method: An approach to the study of apps. New Media & Society, 20(3), 881–900.
Ng, J. Q. (2015, July 20). Politics, rumors, and ambiguity: Tracing censorship on WeChat’s public accounts platform. Citizen Lab. https://citizenlab.ca/2015/07/tracking-censorship-on-wechat-public-accounts-platform/
Plantin, J. C., & De Seta, G. (2019). WeChat as infrastructure: The techno-nationalist shaping of Chinese digital platforms. Chinese Journal of Communication, 12(3), 257–273.
Schneider, F. (2018). China’s digital nationalism. Oxford University Press.
Tu, F. (2016). WeChat and civil society in China. Communication and the Public, 1(3), 343–350.
Van Dijck, J., Poell, T., & De Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.
Varis, P., & Hou, M. (2019). Digital approaches in linguistic ethnography. In K. Tusting (Ed.), The Routledge handbook of linguistic ethnography (pp. 229–240). Routledge.
Venturini, T. (2019). From fake to junk news: The data politics of online virality. In D. Bigo, E. Isin, & E. Ruppert (Eds.), Data politics (pp. 123–144). Routledge.
Wang, Y. (2018, January 25). The former doctor fighting China’s health rumor epidemic. Sixth Tone. https://www.sixthtone.com/news/1001612/the-former-doctor-fighting-chinas-health-rumor-epidemic
Wardle, C. (2018). The need for smarter definitions and practical, timely empirical research on information disorder. Digital Journalism, 6(8), 951–963.
WeChat Open Lecture Pro. (2022, January 7). Retrieved on 20 May from https://developers.weixin.qq.com/community/business/course/000c4290b60478ebff4da87c35840d
Zeng, M., Tan, K., Luo, J., & Huang, W. (2019). WeChat rumor transmission mechanism against the context of new media. Journal of News Research, 10(15), 25–28 [曾梦雄, 覃可奕, 罗靖仪, 黄婉君 (2019). 新媒体时代下微信谣言的传播机制. 新闻研究导刊, 10(15), 25–28].
Zhang, Q. M., Wang, Y. G., Yu, D. L., Zhang, W., & Zhang, L. (2011). The TCM pattern of the six-zang and six-fu organs can be simplified into the pattern of five-zang and one-fu organs. Journal of Traditional Chinese Medicine, 31(2), 147–151.
Zhou, G. (2019). The transmission motivation and mechanism of WeChat rumor and coping strategy. New Media Research, 2019(13), 8–12 [周高琴 (2019). 微信谣言传播的动力机制及其应对策略. 新媒体研究, 2019(13), 8–12].
Zhu, G. (2021). A neoliberal transformation or the revival of ancient healing? A critical analysis of traditional Chinese medicine discourse on Chinese television. Critical Public Health, 1–11.
Chapter 5
Understanding the Flow of Online Information and Misinformation in the Australian Chinese Diaspora

Anne Kruger, First Draft Team, and Stevie Zhang
Abstract Our study of the patterns of activity in the consumption and dissemination of information on the platforms and closed messaging apps used by the Chinese diaspora of Australia has revealed how these communities are vulnerable to mis- and disinformation. In outlining the complexities, this chapter explains the challenges of monitoring mis- and disinformation in these multilingual spaces, the interplay with mainstream platforms such as Facebook and Twitter, and how China’s “image problem” (He, 2020) in the media affects diaspora communities. Overt and covert influence efforts in Australia by Chinese state agents and others, such as the anti-Chinese Communist Party Himalaya movement (Bogle & Zhao, 2020), are outlined. The findings aim to inform solutions to support, and seek support from, Chinese diaspora communities globally. This chapter updates and expands on data collected for the 2021 report Disinformation, stigma and Chinese diaspora: policy guidance for Australia (Chan et al., 2021) by First Draft’s APAC bureau, which was written to inform policymakers ahead of Australia’s May 2022 federal election.

Keywords Chinese diaspora · Australia · First Draft · Misinformation · Disinformation · Geopolitics
5.1 Introduction

Nearly one-third of Australia’s resident population were born overseas (Australian Bureau of Statistics, 2022). Those born in China form the third-largest group of migrants to Australia, numbering over 596,000 permanent and temporary migrants (Department of Home Affairs, 2022), some with family still in China or elsewhere overseas. This Chinese diaspora community uses both
mainstream social media platforms that global audiences may be more familiar with, such as Facebook and Twitter, as well as Chinese-language social networks, such as the multifunctional messaging app WeChat, the microblogging site Weibo and the newer Instagram-like Xiaohongshu. For non-Mandarin speakers around the world, WeChat may still be something of a mystery. For Chinese diaspora communities, it is the go-to app they rely heavily on to connect with family, news and culture. This is not lost on foreign leaders, who have joined the app in an effort to embrace and reach out to their countries’ growing Chinese communities, as well as potential voters. It is also a tool for China to inform members of diaspora communities globally. A Lowy Institute report (Hsu & Kassam, 2022) noted that “most Chinese-Australians rely on the app for Chinese-language news (86 per cent) and contact with friends and family (66 per cent)” but also that “More Chinese-Australians rate Australian media platforms as being fair and accurate (71 per cent) than news shared on WeChat (49 per cent)”. Simultaneously, Chinese diaspora members use platforms like Facebook for purposes such as acculturation or to “manage their multiple cultural identities, and to maintain their various social networks in different cultures”, particularly given the popularity of Facebook in English-speaking countries (Mao & Qian, 2015). An examination of misinformation and disinformation on Chinese-language social media platforms gave rise to our research questions: How does information flow through platforms used by the diaspora, and what challenges does this virtual communication habit raise for further study? This included, as a case study, a review of relevant design aspects of WeChat. We also asked what fact checking mechanisms are in place on WeChat, and how these compare with those of other social media apps and platforms. The findings set up further research opportunities into the challenges and consequences arising from the way information is exchanged in Chinese-language online spaces. Additionally, through the lens of misinformation and disinformation, we asked whether there is evidence that WeChat, a platform used primarily by those of Chinese ancestry and very rarely by the broader Australian population, might shape the perspectives of the Chinese diaspora about Australia and, further, about China. This study also looked at how Australia’s socio-political image might be portrayed on WeChat, setting up further and ongoing research into the implications this may have for the Chinese diaspora in Australia. As the solutions in the conclusion section note, the detection of inaccurate information, hate speech and/or biases in these online spaces would benefit from engagement with disinformation experts (Zhang & Chan, 2020). These questions are posed against the backdrop of Australia–China bilateral relations, a crucial context for this study with which the next section begins, as well as the geopolitical landscape and further nuances in social media use among the Chinese diaspora. In this chapter, we refer to the “Chinese diaspora” as a collective formed by groups with varying characteristics that often overlap. As noted in the Routledge Handbook of the Chinese Diaspora, there is no singular English term that can accurately cover all those who may be “Chinese” worldwide (Tan, 2013, 3).
In Chinese, terms like huaren or huaqiao are used to describe those with Chinese ancestry or
ethnicity residing outside of mainland China, Hong Kong, Macau and Taiwan, but we note that those belonging to the “Chinese diaspora” may also include émigrés from places like Malaysia or Indonesia. Significant populations of Chinese birth reside in these countries, but they may not necessarily identify with the label “Chinese”. Given this limitation of language and the fluid nature of the group this chapter concerns, we use the term “Chinese diaspora”.
5.2 The Two Landscapes of Geopolitics and Social Media

5.2.1 Geopolitical Backdrop

This section outlines the geopolitical context and nuances relevant to the Chinese diaspora in Australia and their distinctive uses of, and movement between, various social media spaces. Australia is “home to more than 1.2 million people of Chinese ancestry” (Australian Bureau of Statistics, 2018) and hosts the largest Chinese diaspora in Oceania. China is Australia’s largest trading partner (Department of Foreign Affairs and Trade, 2020), and Beijing’s politics and diplomatic approaches directly affect the stability of the Asia-Pacific region. However, Australia–China relations hit their lowest ebb in decades as the pandemic struck in 2020 (Hsu, 2021). Tensions between the two countries had been simmering for some time; in particular, China considered the foreign interference laws Australia passed in 2018 “an insult” (Cave & Williams, 2018). China subsequently accused Australia of mass espionage (Visiontay, 2020) and of “racism” against Asians, including the Chinese diaspora (Global Times, 2021). The fallout captured global headlines as Beijing imposed trade blocks on a range of Australian products (Bagshaw, 2020) as a form of “economic punishment” (Dziedzic, 2021) and warned citizens against travelling (Uribe, 2020) and studying (Birtles, 2020) in Australia. The warnings focused on Australian industry sectors that are particularly reliant on China, such as tourism and education. For example, prior to the pandemic in 2019, China topped the short-term arrival list at around 15 per cent of the total number of short-term arrivals that year (Tourism Australia, 2019) and accounted for about 30 per cent of total overseas student enrolments (Department of Education, Skills and Employment, 2018). This tit-for-tat continued to play out in the media (see Green, 2020), and caught in the middle was the Chinese diaspora in Australia, including ethnic Chinese who were born and raised in the country or migrated there later in life. This political landscape has affected the ability of Chinese Australians to participate in domestic politics. If a member of the Chinese diaspora chooses to enter the political arena, they will often face speculation or accusations about their purported ties to the Chinese Communist Party (CCP) from the public or media outlets, for example, in articles such as “Labor MP mentored by executive tied to Chinese Communist Party” in The Australian newspaper (Burrell, 2018). Time and again, such allegations are based on their involvement in local
Chinese community groups. A 2021 study from the Lowy Institute found that while most participants were aware of the Chinese government’s interest in influencing Australian society and politics, the motivations for Chinese diaspora members to join such groups are often unrelated, such as adjusting to Australian life (McGregor et al., 2021). These accusations can lead to candidates dropping out over fear of harassment (Galloway & Chung, 2020). Further demonstrating this situation, three Chinese Australians spoke to a Senate inquiry into issues facing diaspora communities in October 2020 about the difficulties the Chinese diaspora face in participating in the political arena, leading to a reluctance to participate in public debate (Hurst, 2020). In response, a conservative senator asked the three to “unequivocally condemn” the CCP. The question proved the three inquiry participants’ points: Yun Jiang, a China analyst who has worked closely with the Australian government, called the question “some sort of loyalty test”; Wesa Chau, a board member of the Australian government’s National Foundation for Australia–China Relations, called it “race-baiting McCarthyism”; and Osmond Chiu, who has written extensively about the experience of the Chinese diaspora in Australia, said the question implied “divided allegiances” simply due to his ethnicity (Hurst, 2020). On the international stage, the US’s FBI has labelled the Chinese government’s “counterintelligence and economic espionage efforts”, as well as policies and programmes that seek to “influence lawmakers and public opinion to achieve policies that are more favourable to China”, the “China threat” (Federal Bureau of Investigation, 2019). As tensions heighten between the global superpowers China and the US, so too does the rhetoric around a possible “new Cold War”. This provides important context for how geopolitical dynamics can fuel the spread of mis- and disinformation, and for what, such as identity and ethnicity, may be weaponised.
5.2.2 Platform Policy and Politics

In recent years, social media companies such as Facebook and Instagram have taken steps to address the problem of misinformation circulating on their platforms, such as facilitating fact checking initiatives by engaging third-party fact checking units (Meta, 2021a). This is the result of close scrutiny by researchers, journalists, non-profits and tech regulators of the role social media have played in the spread of harmful rumours, misinformation and conspiracy theories, notably since the 2016 US election (Allcott & Gentzkow, 2017). Discussions on platform accountability, transparency and user safety, which were also part of the agenda of the 2022 World Economic Forum Annual Meeting (see Lalani & Li, 2022), have prompted the fine-tuning of content moderation policies and improvements in response time to crises. For instance, Meta issued a statement on the steps it would take to protect users just two days after the Russian-Ukrainian war broke out in February 2022 (Meta, 2022). These conversations around policy review and calibration mean there is a higher probability that users will encounter corrective information on major platforms than
in closed online spaces, where content moderation mechanisms are generally lacking or insufficient. However, this accessibility can be a double-edged sword and risks being exploited by individuals or groups targeting Chinese-speaking audiences. For example, researchers previously identified alleged China-backed information operations that targeted the 2019 Hong Kong protests (Uren et al., 2019) and amplified coronavirus conspiracies (Gallagher, 2020) on platforms such as Twitter (Kao & Shuang Li, 2020). Overseas Chinese use both Chinese-language platforms and Western social platforms, often bypassing traditional media gatekeepers in the ways they consume, discuss and share information (Zhang & Chan, 2020). It is important to note, though, that labels applied to misinformation in “open” spaces such as Twitter and Facebook do not travel with false or misleading posts when these are shared on other platforms, including WeChat (Chan et al., 2021). As of September 2020, daily active WeChat users in Australia numbered nearly 700,000 (WeChat, 2020), relying on the platform’s messaging function to stay connected to their culture and families while abroad. A June 2021 survey (SBS Chinese, 2021) also found young Chinese Australians use the app frequently to communicate with their parents. The Chinese diaspora’s reliance on Chinese-language social media (Urbani, 2019) and the cross-pollination of multilingual information on Chinese and Western platforms form a unique social media landscape. Compared to Facebook and Instagram, the steps taken by WeChat against the circulation of misinformation are far more opaque. The platform itself has run initiatives such as the “Rumour Filter” (谣言过滤器) since 2014, in cooperation with media outlets such as People’s Daily and China National Radio, after receiving upwards of 20,000 misinformation reports daily (WeChat). The “Rumour Filter” is an official account that addresses a variety of topics, including social, popular science, medical and health topics, and contains the “WeChat Rumour Busting Helper” (微信辟谣助手) function, started in 2017, which allows the user to browse a list of articles that contain debunked rumours. According to the account, it had published over 8,600 “rumour-busting” articles as of August 9, 2022, from 791 “rumour-busting institutions” (see Fig. 5.1). Articles that have been flagged by these institutions are replaced by a relevant debunk, and users who have encountered debunked articles receive a notification. However, critics such as Wei Xing, the founder of the website Chinafactcheck.com, have pointed out that WeChat only engages government departments, medical institutions and universities for fact checking (Fang, 2021), without considering the use of other third-party fact checkers (as noted above). Furthermore, it has been observed that only folk rumours are addressed and never official rumours (Fang, 2021), which feeds into the disproportionate power official sources may hold in shaping the information in circulation. Adding to this complexity, much of the content on WeChat about international news is mistranslated or misinterpreted. As we discuss later, the design of WeChat also poses specific difficulties in combating misinformation. For example, the opaqueness of WeChat’s semi-closed “Friend Circles” makes it virtually impossible to monitor the flow of mis- and disinformation on the platform (Chiu & Huang, 2020). While the “WeChat Rumour
Fig. 5.1 Three menus of the “WeChat Rumour Busting Helper” function, showing from left to right: “rumour-busting” articles, a list of “rumour-busting institutions”, and a list of articles the user has encountered containing debunked rumours (Screenshot by Stevie Zhang)
Busting Helper” may be helpful for publicly published articles, misinformation seems to circulate primarily in closed chat groups on the platform (Xiao et al., 2021). Young Chinese Australians reported a large number of unproven claims about COVID-19, the vaccines and health in general circulating on WeChat (SBS Chinese, 2021). A handful of WeChat-based Chinese-language fact checkers are able to verify or debunk information but are not officially recognised by the platform. Two such groups are the Centre Against Overseas Rumours (反海外谣言中心), which consisted of a team of 21 people as of 2021, and the No Melon Group (反吃瓜联盟).
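The flag-and-replace flow described above (a flagged article is swapped for its debunk, and users who encountered it are notified) can be summarised in a short sketch. All names here (Article, Platform, notify) are invented for illustration; this is a toy model of the described behaviour, not WeChat’s actual interface.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Article:
    article_id: str
    text: str
    debunked_by: Optional[str] = None  # rumour-busting institution, if any

@dataclass
class Platform:
    articles: dict = field(default_factory=dict)   # id -> Article
    read_log: dict = field(default_factory=dict)   # id -> set of readers

    def record_read(self, user: str, article_id: str):
        self.read_log.setdefault(article_id, set()).add(user)

    def debunk(self, article_id: str, institution: str, debunk_text: str):
        """Replace a flagged article with its debunk, then notify
        every user who previously encountered the original."""
        article = self.articles[article_id]
        article.text = debunk_text
        article.debunked_by = institution
        for user in self.read_log.get(article_id, set()):
            notify(user, f"An article you read was debunked by {institution}.")

def notify(user: str, message: str):
    print(f"[to {user}] {message}")
```

Note that, as the chapter argues for the rumour ecosystem generally, this flow operates on one article object at a time: copies of the same claim under other accounts are untouched unless they are flagged separately.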
5.3 The Challenges—By Design

5.3.1 Flying Under the Radar

The previous section set the scene for how Australia–China bilateral relations serve as crucial context to the study and shed light on the nuances in social media use for the Chinese diaspora. To answer the research questions of how information flows through WeChat, and what challenges and consequences potentially arise from these
flows, we first examine the design of WeChat as compared to other social media apps, and how this might affect user behaviour and experience. Chinese-language social media platforms have a significant influence over Chinese diaspora communities. Platforms such as WeChat and Weibo are able to reach a large base of users and have multiple functionalities beyond messaging, and hence are “more powerful than traditional ethnic print media in explaining and promoting positions and opinions expressed by Chinese media consumers on a wide range of issues”, according to a study by Wanning Sun (2019a). Given its numerous functionalities, WeChat is so ubiquitous in mainland China that going about daily life without an account can be challenging, as it combines functions from apps like Facebook, Uber, PayPal and TikTok into just one platform. Among its social functions, WeChat has instant messaging, a “Moments” feature (see Fig. 5.2) and “Friend Circles”, an equivalent of the Facebook or Instagram feed, only for users who have added or “friended” one another. If two people are WeChat contacts, they have access to each other’s posts in their “friend circle” feed, which is otherwise generally unsearchable. The Chinese-language microblogging site Weibo is perhaps less popular among Chinese people in Australia because WeChat offers similar features and more.
5.3.2 Semi-Closed and Censored

As First Draft noted in Disinformation, stigma and Chinese diaspora: policy guidance for Australia (Chan et al., 2021), there are two key challenges for countries with a significant Chinese-language social media base using apps such as WeChat. First, the closed or semi-closed design and censored nature of these apps means misinformation, disinformation and propaganda might fly under the radar while quietly shaping users’ beliefs and attitudes. Second, regulation of the quality and accuracy of information in these spaces is lacking. Extensive censorship from China, however, is the norm. Toronto-based online watchdog Citizen Lab (Ruan et al., 2020) monitored the availability of coronavirus information on WeChat and YY (a Chinese livestreaming platform) at the start of the pandemic. It found that the scope of censorship “may restrict vital communication related to disease information and prevention”. First Draft (Chan et al., 2021) also reported that censorship practices are stringent over matters considered a threat to the authority of the Chinese government and the peace and stability of China; even then-Australian Prime Minister Scott Morrison was not exempt. Morrison used WeChat to connect with Chinese speakers (Morrison, WeChat, 2021), such as during Chinese New Year and the Mid-Autumn Festival. In 2020, he attempted to soothe the country’s Chinese population when the two nations traded barbs (ABC News, Politics, 2020) after grim findings of Australian war crimes in Afghanistan were released (see Knaus, 2020). However, WeChat removed Morrison’s post a day after it was published (Smyth & Shepherd, 2020). The platform said Morrison’s article “violates regulations” and “involves using misleading words, images, and videos to make up popular topics, distort historical events, and deceive
Fig. 5.2 A tutorial on Moments published on WeChat’s Twitter account (Screenshot by First Draft https://twitter.com/WeChatApp/status/1354638134969921537)
the public”. Once removed, the article displayed the message “Unable to view this content because it violates regulations”. WeChat does not appear to have any policies in place specifically addressing mis- or disinformation. However, prohibited activities listed under its acceptable use policy (WeChat, 2021a), such as impersonation and fraudulent, harmful or deceptive activities, can cover some aspects of information disorder. WeChat removes individual posts it deems in violation of its terms of service (WeChat, 2021a) or acceptable use policy (WeChat, 2021b). This can be done post-publication, as the above example with Scott Morrison illustrated. The platform also deploys censorship in real time, using keywords and keyword combinations to pick up posts prior to publication. Citizen Lab’s research (Ruan et al., 2020) shows that this keyword-based censorship removes a post or message before it is published or sent, while violations of platform policy result in a takedown after publication. Another way WeChat monitors and censors its users is by blocking messages containing blacklisted keywords (Kenyon, 2020) without notifying the sender or receiver.
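Citizen Lab’s findings suggest a simple underlying mechanic: a message is withheld only if it contains every keyword of at least one blacklisted combination, with no notice to either party. Below is a minimal sketch of that logic, using invented placeholder keywords (the real blacklists are undisclosed and change over time):

```python
# Placeholder combinations standing in for an undisclosed blacklist.
# Citizen Lab documented combination-based rules: a message triggers
# filtering only when all keywords of one combination co-occur.
BLACKLIST = [
    {"keywordA", "keywordB"},
    {"keywordC", "keywordD", "keywordE"},
]

def is_blocked(message: str) -> bool:
    """True if the message contains every keyword of any combination."""
    return any(all(kw in message for kw in combo) for combo in BLACKLIST)

def deliver(message: str, send):
    # Blocked messages are dropped silently: neither the sender nor
    # the receiver is told that filtering occurred.
    if not is_blocked(message):
        send(message)
```

Combination-based rules explain why individually innocuous words can pass in one message yet vanish in another: it is the co-occurrence, not any single term, that triggers the filter.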
5.3.3 Undermining the Diversity of Political Discussions—Ramifications for Australia

In Australia, these issues have both political and social ramifications, as seen in the last two federal election campaigns and reported at the time by First Draft (Chan et al., 2021). During the federal election in 2019, the Australian Broadcasting Corporation (ABC) reported (Hui & Blumer, 2019) that a Chinese candidate was linked to the promotion of “scare campaigns” about an opposing party on closed messaging apps, including WeChat. Offline methods, such as placards that resembled official Australian Electoral Commission (AEC) signage written in Chinese instructing voters of the “right way to vote” for specific candidates, were subject to scrutiny, although this particular case was ultimately dismissed by the AEC, as reported by Razak (2019). The online methods, however, went unaddressed (according to the AEC, no complaints were raised to the body about such “scare campaign” messages), showing how disinformation and propaganda can blur together in closed spaces and be overlooked as vehicles for political debate and civic participation (Wanning, 2020). Meanwhile, during Australia’s most recent federal election in 2022, several scare campaigns used issues pertaining to the Chinese diaspora in Australia (Bergin, 2022). For example, the conservative lobby group Advance Australia ran mobile billboards that implied the Labor Party was the preferred party of Chinese President Xi Jinping (Blair, 2022), while Drew Pavlou, an outspoken critic of the CCP, sponsored billboards that discouraged voters from voting for the then-Member of Parliament Gladys Liu, making a similar claim that Liu was Xi Jinping’s candidate (Bergin, 2022). On WeChat, campaign material that falsely alleged the Labor and Greens parties were planning to “[fund] school programmes to turn students gay, impose new taxes and destroy Chinese wealth” was shared in private groups, including in a group for volunteers canvassing for Liu (Davies & Kuang, 2022a). The material carried no authorisation, so it was unclear where it originated, and the AEC said that while it would respond to any complaints made against the material, it was not specifically monitoring political activity on WeChat. Also circulating on WeChat was campaign material supporting then-Treasurer Josh Frydenberg, as well as recruitment advertising, despite Frydenberg not having an active WeChat account (Davies & Kuang, 2022c). With the above in mind, and given that Australia–China relations are at an all-time low, a lack of monitoring and research into closed and semi-closed platforms risks undermining the diversity of political discussions and the chance to mitigate security concerns.
5.3.4 Search Constraints and Unlimited Forwarding Pose Issues

As First Draft (Chan et al., 2021) reported, other features unique to WeChat act as roadblocks to researchers and policymakers who want to monitor mis- and disinformation within the app. There are no forwarding limits on WeChat messages, as there are on WhatsApp (WhatsApp, 2021), to curb the spread of misinformation. There is a limit on the number of people to whom users can broadcast messages, a maximum of 200 people at a time (WeChat, 2021c), but no limit on the number of broadcasts one can conduct. Theoretically, a message can be shared an unlimited number of times among the app’s more than 1.2 billion active users (Thomala, 2021). There are no measures in place to stop its travel unless it violates WeChat’s terms of service or acceptable use policy. This no-holds-barred approach means a single WeChat message containing false, harmful information can cause real-world harm. WeChat’s search bar only allows simple keyword searches and lacks an advanced-search capability, such as that on Weibo. Searches often return results about the most-read articles, not relevant users, and certainly not any content within private direct or group messages, or in Moments (WeChat, 2021d). The simplistic search function also presents a hurdle for meaningful analysis of misinformation, because it is difficult to collect a large enough dataset to derive any sort of conclusion on how a misleading claim is broadly shaping the opinions of a user group. Adding to the challenge, unlike Facebook, which, at the time of writing, offers the social media analytics tool CrowdTangle with public content engagement insights (Meta, 2021b), WeChat lacks monitoring tools conducive to the gathering of data from the app, partly because of privacy concerns. For these reasons, conducting research on WeChat is highly labour- and time-intensive. Researchers at Deakin University are in the process of creating a computational research tool called WeCapture (unreleased at the time of writing), which would collect articles from WeChat and make research easier. Through a beta version of the tool, they conducted an analysis of how news about the 2022 Australian federal election was received by WeChat users (Davies & Kuang, 2022b).
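The scale implied by capped broadcasts but uncapped forwarding can be illustrated with a toy branching model. The fan-out values and hop counts below are assumptions for illustration, not measurements of WeChat behaviour:

```python
USERS = 1_200_000_000  # WeChat's reported active user base (Thomala, 2021)

def reach(fanout: int, hops: int, population: int = USERS) -> int:
    """Cumulative users reached if every recipient forwards a message
    to `fanout` new contacts, capped at the population size."""
    total, frontier = 1, 1
    for _ in range(hops):
        frontier *= fanout
        total += frontier
        if total >= population:
            return population
    return total

# With no forwarding limit, even a modest fan-out compounds quickly:
print(reach(fanout=5, hops=12))   # 305,175,781
print(reach(fanout=10, hops=10))  # capped at 1,200,000,000
```

A WhatsApp-style forwarding cap effectively lowers the fan-out per hop, which is why such caps slow cascades even though any positive fan-out still grows exponentially.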
5.4 Overt/Covert Influence—A Closer Look

First Draft’s monitoring research shows that robust and persistent attempts to influence ethnic Chinese in Australia come from both Chinese state actors and organisations opposed to the CCP. Many of these attempts have also used other social media apps to promote narratives that serve their causes, whether to encourage loyalty to the CCP or to promote activism and diplomatic changes against the CCP. As noted above, WeChat is censored by authorities; this section explores how influence can also be directed at the Chinese diaspora on apps that are not subject to direct state censorship. The 2019 Hong Kong protests quickly inspired similar protests and
counter protests internationally, including in Australia. The section below outlines how misinformation and disinformation campaigns about Hong Kong played out on open social media platforms such as Twitter and Instagram, and how tensions boiled over into Australia on Twitter and Facebook and were reinforced through WeChat.
5.4.1 Hong Kong–Australia Reverberations

Researchers have previously identified alleged China-backed information operations that targeted the 2019 Hong Kong protests (Uren et al., 2019). Anti-government protests heated up on 9 June against an extradition bill that would allow criminal suspects to be extradited to mainland China. On 19 August 2019, Twitter disclosed a “significant state-backed information operation” originating from within the People’s Republic of China (PRC) and targeting the pro-democracy movement in Hong Kong (Twitter, 2019). It removed 936 accounts and suspended approximately 200,000 accounts its investigation found were illegitimate. Twitter released an archive of the tweets and accounts, and the company (Twitter, 2019) announced it was banning all advertising from state-controlled news media entities: “Any affected accounts will be free to continue to use Twitter to engage in public conversation, just not our advertising products”, adding that the ban would not apply to entities that are taxpayer-funded but independent. Later that day, Facebook (Gleicher, 2019) reported it had removed seven pages, three groups and five accounts belonging to a small network that originated in China and focused on Hong Kong. By the end of the same week, YouTube announced it had also taken down channels spreading disinformation: “We disabled 210 channels on YouTube when we discovered channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong”, Shane Huntley (2019) of Google’s security threat analysis group said in a company online post. Journalists, despite not being the organisers of the protests in Hong Kong, were targeted online, many receiving death or rape threats against themselves and their families for reporting on what they witnessed. First Draft conducted in-depth interviews1 with two bilingual journalists who had a high profile during the coverage, female photojournalist Laurel Chor and male Hong Kong freelance journalist Eric Cheung, about the online trolling they experienced. Laurel Chor noted the peak of the trolling was on Instagram in September 2019. “A lot of the trolling is gendered, they like to point out stuff about your appearance”, Chor said. “It seemed to irk trolls more that you are a woman. One incident where a real-life troll made a collage of about 64 all-female ethnically Chinese journalists — they took every single one of our profile pictures and tweeted it [with a threatening tone]”. Chor found it easier to ignore comments on Twitter.
1 Ethics approval granted by the University of Technology Sydney, ETH19-4109.
“On Twitter there was a pretty big army of trolls that would go after you, my DMs were open most of the time”, Chor said. Chor noted she felt more affected by what she experienced on Instagram, where she writes long captions under her photos. Some of the DMs were pretty graphic, disgusting stuff about my parents dying, rape my mum — so Instagram was really bad. Most journalists don’t use Insta like I do — I write lengthy captions — mini articles. I didn’t have that many followers, yet I had over a thousand comments — I’d be getting comments every few minutes, and it was all kind of the same thing, like ‘(expletive) your mother’ and emojis calling her a prostitute, and Chinese slurs. I couldn’t help but go down a few rabbit holes but a lot of them were real Chinese people living abroad and they went out of their way to do this.
“I reached out to someone who works at Instagram on more high-profile accounts, and was advised on how to switch it to a creator account where there is this option where you can actually filter out certain words or emojis”. After making this switch, Chor noted a slowdown in the harassment. Meanwhile, Eric Cheung noted that before Twitter’s takedown of the state-backed information operation accounts, he regularly received messages criticising him for being a “Chinese traitor” because he would report on breaking news. “So if you write you saw police chasing protests — they criticise you, but they don’t see that earlier you may have reported something about the protesters”, Cheung said. They try to discredit us — they take a screenshot of one post to accuse us of being biased.
After Twitter banned accounts from the “significant state-backed information operation”, Cheung’s followers dropped by the day, and the messages almost stopped. As reported by First Draft, the “harassment and hate speech” (Kruger, 2019) that erupted on social media platforms during the 2019 Hong Kong protests quickly crossed geographical boundaries as protests and counter protests took place internationally, including in Australia. First Draft interviewed a student reporter, Nilsson Jones, from the University of Queensland, who witnessed and reported on the events on campus, which gained international attention.2 Jones was a fourth-year Journalism and Public Policy and Political Science student at the time of the protests. In his capacity as a student reporter, Jones attended a Hong Kong pro-democracy student rally on campus over freedoms in China and Hong Kong (Hamilton-Smith, 2019), organised by Hong Kong international students and domestic students to coincide with the university’s market day, but things turned violent when it was interrupted. When tensions escalated, Jones tweeted a video of what he witnessed, alongside the words “Tensions escalated halfway through the protests” and hashtags which located the event: #uqprotest #china #hongkong #uq (Jones, 2019).
2 Ethics approval granted by the University of Technology Sydney, ETH19-4109.
Initially, Jones said he received a few “random messages” on Twitter and Facebook “such as ‘**** you, Hong Kong is China’” and there was one to the effect of “I hope you die”. But within two days, after Jones’ reporting had received further reach and coverage on radio and television, he received a more obvious death threat. And at that time, I changed my name on Facebook for 60 days until everything had cooled off as I didn’t want people to be able to trace it back to my family — it wasn’t until I received the Facebook message that it was a little bit more real to me, as that is linked to my family more than Twitter.
Organisers of the University of Queensland protests, meanwhile, were alerted to threatening messages about them circulating on WeChat. University of Hong Kong researcher Ann Choy observed that posts on WeChat made a variety of unsubstantiated accusations against the protest organisers (Kruger, 2019).
5.4.2 State Media and Negative Framing of Australia

As First Draft reported in Disinformation, stigma and Chinese diaspora: policy guidance for Australia (Chan et al., 2021), the Chinese diaspora is sometimes targeted by propaganda-style image campaigns on WeChat. A July 2021 report by the Australian Strategic Policy Institute (ASPI) (Zhang, 2021) found that anti-Asian violence amid the COVID-19 pandemic saw “Chinese diaspora communities continue to be an ‘essential target’ of Chinese-state-linked social media manipulation”. This followed a 2020 report by ASPI (Wallis et al., 2020) into a “large-scale influence campaign linked to Chinese state actors” targeting Chinese-speaking people outside of China in an attempt to sway online debate, particularly surrounding the 2019 Hong Kong protests and the pandemic. In 2021, First Draft’s online monitoring found that, on a more nuanced level, Chinese state actors also attempted to shape the debate about the spiralling Australia–China relations. This included broad issues (such as political stance and trade practices) as well as the personal (such as why the Chinese diaspora should be worried). Among the narratives reported in the Chinese state-owned Global Times: the diaspora may not be safe in Australia in the face of “a wave of rising racism” (Hong, 2020), and the publication pushed the line that Australia had become less popular in China and, therefore, “It’s wishful thinking in Australia that the bilateral relationship won’t affect the economic exchange and services trade” (Chen et al., 2021). Online monitoring by First Draft researchers also found evidence of more covert efforts to push anti-Australian political rhetoric, sometimes in response to a statement or comments from Chinese authority figures (Chan et al., 2021). Following the release of an Australian government report into war crimes committed by its Defence Force during the war in Afghanistan (Department of Defence, 2020), Chinese Foreign Ministry spokesperson Zhao Lijian issued a condemnation in a tweet (Zhao,
2020). However, the tweet included a provocative image that angered the Morrison government and the general public. An analysis of the 10,000 replies to Zhao’s tweet by Queensland University of Technology researcher Tim Graham showed “recently created accounts are flooding the zone by replying to [Zhao’s] tweet” and that a majority of these accounts stated they were located in China, indicating either possible bot activity or a coordinated campaign (Graham, 2020). Following this, the Chinese propaganda machine “dol[ed] out insults” against Australia (Birtles, 2021). On Twitter, Hu Xijin, then editor-in-chief of Global Times, called Australia the “urban-rural fringe” of Western civilisation and said that the killings of Afghan civilians proved Canberra’s “barbarism” (Hu, 2020). The minister of the Chinese Embassy to Australia, Wang Ximing, reportedly attacked Australia’s “constitutional fragility” and “intellectual vulnerability” (Butler, 2020), while at the same time calling for “respect, goodwill, fairness” in a speech about the China–Australia relationship (Xinhua, 2020). A display of this war of words can be seen below (see Fig. 5.3). Once again, the Chinese diaspora bore the brunt of these tensions. This type of media framing is not new. Ahead of Australia’s 2019 federal election, First Draft collaborated with University of Hong Kong researcher Ann Choy (Choy, 2019) to monitor related responses on WeChat. A number of examples on WeChat show a tendency to emphasise anti-Chinese sentiment from Australia or to overemphasise negative aspects of Australia such as drugs or crime. Populist right-wing leader Pauline Hanson and her One Nation Party were targeted heavily to make Australia look “silly” (Choy, 2019). For example, One Nation’s Perth candidate Jackson Wreford had reportedly posted risqué images of himself on his social media account in 2018, subject matter that again was used to frame populist candidates in a negative light. WeChat users responded with sensational coverage and
Fig. 5.3 A collage of screenshots showing tweets from Chinese government officials and state media targeting Australia between 2020 and 2021 (Screenshots by First Draft)
memes using Hollywood figures. Pauline Hanson was also labelled the “number one villain” (Fig. 5.4) on WeChat. This comes as hardly a surprise, however, given “the historically xenophobic nature” of Hanson’s One Nation Party and her frequent remarks targeting immigrants during the campaign (Choy, 2019). Problems within the party were seized upon and discussed on WeChat. For example, the WeChat article below (see Fig. 5.5) roughly translates to “Prostitution, Selling the country out, Anti-Chinese—this Australian political party will soon lead to its own downfall” (Choy, 2019). It used a screenshot from commercial Australian breakfast television, then listed in detail recent party blunders and Hanson’s anti-immigration stance. The articles were also reshared and reproduced on other “news” accounts.
Fig. 5.4 Pauline Hanson was labelled as “Number One Villain” on WeChat (Screenshot by Ann Choy)
Fig. 5.5 WeChat article roughly translates to “Prostitution, Selling the country out, Anti-Chinese, this Australian political party will soon lead to its own downfall” (Screenshot by Ann Choy)
Fig. 5.6 Positive coverage of Sam Crosby on WeChat (Screenshot by Ann Choy)
On the other hand, Choy noted that Labor Party candidate Sam Crosby’s exposure in WeChat articles was much more positive, as in the article shown in Fig. 5.6. “Touting him as a politician who respected Chinese tradition and culture, he would visit and understand the work in suburbs like Campsie which has a large Chinese community”, Choy said. Inevitably, as in any social network, WeChat users in China and globally are inundated daily with a plethora of information and misinformation. However, research about misinformation on WeChat remains limited. Academic and media discussions have focused on whether or not WeChat is used “primarily” as a tool for the Chinese Communist Party to dispense propaganda to its citizens. University of Technology Sydney Professor Wanning Sun said “it is not” (Wanning, 2019b), but conceded that it is subject to censorship. Foreign countries with growing Chinese populations, such as Australia, and their politicians have turned to the app to connect with their Chinese communities, some say, with little consideration of the security risks (Xia, 2020).
5.4.3 Anti-CCP Discourse On the other side of the coin are groups that strive to promote anti-Chinese Communist Party (CCP) ideologies. One of those groups is the Himalaya movement (喜馬拉
an anti-CCP (Hui & Cohen, 2020) group founded by exiled Chinese billionaire Guo Wengui and co-led by Guo and former Donald Trump adviser Steve Bannon (Bass et al., 2020). It recruits members of the Chinese diaspora, including those in Australia. However, there appears to be no mention of the movement on WeChat (there are old articles from 2017 and 2018 against Guo, who fled China in 2014). This could be a result of censorship on WeChat targeting anti-CCP groups. Originally named the New Federal State of China (新中國聯邦) and sometimes known as the Whistleblower Movement, the group has been expanding throughout Asia-Pacific, North America and, most recently, Europe via “farms” (an allusion to their ideal paradise life) around the world (Gnews, 2020). It has shared anti-CCP and anti-Joe Biden narratives, misinformation about the 2020 US election being fraudulent or rigged, as well as coronavirus conspiracies. The group organised and exchanged information on private Discord servers and promoted its beliefs on Twitter and YouTube. The group also has a dedicated website called Gnews, a video platform called Gtv and a Twitter-like platform, Gettr. Launched in early July 2021 by a former Trump spokesperson, Gettr claims to promote freedom of speech with no censorship, thereby quickly attracting conspiracy theories and hate speech, as monitored by First Draft for its Daily Briefing (First Draft, 2021a). Politico reported in 2021 that Gettr is financially backed by Guo (Nguyen, 2021). Gettr’s predecessor, Getter, was a platform used by Guo to communicate with his followers, and there are signs that the ties between the new platform and the movement run deeper than funding (First Draft, 2021b). Members of the Himalaya movement have made coordinated efforts to populate the website and use it to promote pro-Trump and anti-CCP narratives, including the idea that the coronavirus is a bioweapon created by China. These unproven or misleading narratives have also been shared offline via pamphlets, flyers and posters. First Draft’s monitoring found the movement also has its own group of translators working to share promotional materials in multiple languages and has organised protests and rallies around the world, including in Australia, New Zealand, Japan and Italy. An example of a pamphlet distributed by Himalaya Australia in August 2021 can be seen in Fig. 5.7. However, First Draft researchers found that in 2021, some of the movement’s translation channels had been “terminated” by YouTube for violating its Community Guidelines (YouTube, 2021). One “celebrity” who was associated with the group (and has since distanced herself from Guo) is Chinese virologist Dr. Yan Li-meng, who came under the umbrella of the movement in mid-2020 after fleeing to the United States. She became one of the strongest proponents of the movement after promoting the claim that the coronavirus is a bioweapon created in a Chinese lab (Wengui, 2020). Yan used her previous employment as a virologist in China as evidence of her credibility, despite her last workplace, the University of Hong Kong, distancing itself from her (The University of Hong Kong, 2020) and other institutions discrediting her work as “baseless” (Koyama et al., 2020). However, through these credentials and the movement’s ties with high-profile figures in the US, Yan garnered over 113,000
Fig. 5.7 A screenshot of an image showing a pamphlet distributed by Himalaya Australia in August 2021 (Screenshot by First Draft)
In the run-up to the 2020 US presidential election, First Draft saw some crossover between Himalaya and QAnon-supporting accounts, which shared the mutual goal of seeing Trump re-elected. The movement also has ties with former New York mayor Rudy Giuliani, who featured prominently in Gnews reports with claims about Hunter Biden and the "collusion between the Bidens and Communist China" (Stella, 2020). First Draft researchers discovered that rumours about a hard drive said to belong to Hunter Biden were first mentioned online by then-Himalaya pundit Wang Dinggang, also known as Lu De (LUDE Media, 2020a). The video was uploaded weeks before the New York Post's "smoking gun" article was published (Morris & Fonrouge, 2020). A number of QAnon-supporting Twitter accounts retweeted Himalaya's hard drive claims as so-called evidence that Hunter's father, Joe Biden, should not be elected.
Some Himalaya supporters have also promoted QAnon claims about the "Deep State", which they spun as having a mutually beneficial relationship with the CCP. Members of the movement pay close attention to press coverage and sometimes issue strong rebuttals. For instance, ABC Australia journalist Echo Hui, who is ethnically Chinese, was called out in an "open response" (AU Jenny, 2020a) to her email inquiry to the group's Australia branch, despite the fact that Hui was not the only journalist who had worked on that story. Himalaya followers also marched to the public broadcaster's Brisbane office following its report on the movement (Hui & Cohen, 2020). The group accused ABC Australia of having been "weaponised to spread leftist ideology" and of producing "fake news against the New Federal State of China" (AU Jenny, 2020b). On the other hand, when The New York Times published its feature on the movement (Qin et al., 2020), Lu De covered it in his livestream show in November 2020 (LUDE Media, 2020b), saying he was happy that the movement had made it into the pages of the Times, as this meant the movement and its ideology had become mainstream ("承认了这个成为了主流思潮了", roughly: "it has been acknowledged that this has become a mainstream current of thought"). The Himalaya movement has an interest in spreading its ideology and theories to the mainstream media. The Hunter Biden hard drive scandal was not the first time Himalaya had surfaced something that later made it into the broader public sphere. First Draft monitoring found (Zhang, 2021) that the movement was the first to post online about a book published in China in 2015 containing conspiracy theories about the use of coronaviruses as bioweapons. The book has been discredited as "a conspiracy theory document written by some people who have some affiliation with the Chinese government" by Robert Potter, a former Australian government defensive cyber specialist (Galloway & Bagshaw, 2021). The Himalaya movement initially publicised the existence of the book in February 2021, claiming it as proof that COVID-19 is a CCP bioweapon. By May, the national newspaper The Australian echoed the possibility of the claim in an exposé-style article about the "chilling" book (referred to in the newspaper as a "document") (Markson & Hazlewood, 2021). News outlets should be wary of amplifying unverified claims in general, but especially those from a group like the Himalaya movement, which has a clear goal of reaching a wider audience via mainstream media. While publishing their claims may not always be necessary, research on groups such as Himalaya is essential because they operate on a membership system and organise in semi-closed spaces. A lack of insight into the kinds of narratives they push among the Chinese-speaking population in Australia could pose ideological and security risks. An important part of the monitoring by this chapter's authors is tracking how narratives move through various online spaces, including closed networks such as WeChat and social media platforms such as Facebook and YouTube. Judgements can then be made about the potential for harmful or misleading information to reach a larger audience: for example, whether discussions are staying in anonymous or closed networks with small or limited memberships, whether the narratives then move to conspiracy communities, and whether those communities are growing. First Draft's approach then considers whether these narratives have the potential to keep growing or to spread into wider, more open spaces on social media where they can reach large numbers of the public, and even further via the media, politicians and others in prominent positions. This process, known as the "Information Disorder: Trumpet of Amplification", is illustrated in Fig. 5.8.
Fig. 5.8 The Trumpet of Amplification: how content moves from the anonymous web to the major social media platforms and the professional media (Source: firstdraftnews.org)
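The staged judgement just described can be made concrete with a simple heuristic. The sketch below is a hypothetical illustration (not First Draft's actual tooling): it encodes the trumpet's stages as an ordered scale and flags a narrative whose observed mentions are both moving toward more open spaces and growing over time. All field names and thresholds are invented for the example.

```python
# Hypothetical encoding of the "Trumpet of Amplification" stages, ordered from
# closed/anonymous spaces to mainstream amplification. Not First Draft tooling.
STAGES = ["anonymous_closed", "conspiracy_community", "open_social", "media_or_politicians"]

def escalation_risk(observations):
    """observations: list of (week, stage, mention_count) tuples, ordered by week.
    Flags a narrative that is both moving up the trumpet and growing."""
    if len(observations) < 2:
        return False
    first, last = observations[0], observations[-1]
    moved_up = STAGES.index(last[1]) > STAGES.index(first[1])
    growing = last[2] > first[2]
    return moved_up and growing

# Example: a claim first seen in a closed chat group, later in open social spaces.
history = [(1, "anonymous_closed", 40), (2, "conspiracy_community", 150), (3, "open_social", 900)]
print(escalation_risk(history))  # True -> worth closer monitoring
```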
5.5 Conclusion

Policymakers who wish to engage with the Chinese diaspora community should hire Chinese speakers who are familiar with popular social media platforms and messaging apps such as WeChat, Weibo and Australia Today (Media Today Group, n.d.), the latter an all-in-one app where users can access news, classified ads, a Quora-style Q&A board and a dating/matchmaking service. With the necessary language skills, cultural awareness and Chinese social media know-how, they can help formulate strategies and social media campaigns tailored to the needs of the community to protect it from malinformation. A substantial amount of relevant data about the Chinese diaspora is needed for an outreach programme to be effective. This can be achieved by setting up online monitoring systems to facilitate social listening, using free tools such as Google Alerts or CrowdTangle's Link Checker (a minimal sketch of this kind of monitoring appears at the end of this section). For those unfamiliar with information disorder and social analytics, these practices may seem daunting and hard to execute. Policymakers can seek the opinions and expertise of industry experts who are well versed in the Australian and Chinese diaspora social media landscapes. Tapping into diaspora research by disinformation experts, as reported earlier by First Draft (2020); setting up a tip line for reports of election-related mis- and disinformation, as has been done successfully by election monitoring projects such as "Electionland", which included diaspora issues (Fitzgerald Rodriguez et al., 2020); and funding surveys or projects to gain insights into particular issues are all ways to achieve a clearer picture of this community's social media habits.
Content insights about the types of information they choose to share, and more importantly about the relationship between their online interactions and real-life actions (including their voting decisions), are invaluable: they can help researchers tackle the monumental task of manually monitoring WeChat by narrowing down possible topics to research. High-level roundtables with disinformation experts, officials from defence and foreign affairs, and social media companies can bring stakeholders together for a better understanding of the risks that mis- and disinformation pose. Crucially, policymakers must grasp that online discussions based on false information, half-truths and conspiracy theories can snowball into a serious problem, even a national security threat; the January 6 US Capitol insurrection is a textbook example (Dilanian & Collins, 2021). As First Draft reported in Disinformation, stigma and Chinese diaspora: Policy guidance for Australia (Chan et al., 2021), the Chinese diaspora's ancestry and language skills make it a possible target of exploitation, but these are exactly the areas that can be turned into invaluable assets for authorities who wish to embrace Australia's increasingly diverse population. The Chinese regime does not represent all people of Chinese origin, so a more appropriate approach is to stop vilifying people based on their race or ethnicity, starting with the media. Social media platforms should continue to tighten enforcement of their hate speech rules to reduce online discrimination. As our analysis shows, language skills alone might not be enough to get a sense of how misinformation flows through the Chinese diaspora; an acute understanding and active monitoring of where, when and how this community converges online is crucial. Instead of leaving misunderstandings unaddressed and allowing racist stigma to run deeper and become more dangerous, policymakers and information providers should seek to understand, support and engage this community to formulate more effective policies both for it and for the rest of Australia. We hope this chapter provides a template for research into other diaspora communities globally.
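As a concrete starting point for the monitoring workflow mentioned above, here is a minimal sketch of programmatic social listening around a single URL. It assumes access to CrowdTangle's /links endpoint (the API counterpart of the Link Checker); the endpoint, parameters and response fields follow CrowdTangle's public API documentation as we understand it and should be verified against current docs, and the token is a placeholder.

```python
# Minimal social listening sketch: which public Facebook/Instagram pages shared
# a given link, and with how much engagement? Assumes CrowdTangle API access;
# endpoint and field names should be verified against the current documentation.
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # placeholder; issued via the CrowdTangle dashboard

def posts_sharing_link(url, count=20):
    resp = requests.get(
        "https://api.crowdtangle.com/links",
        params={"token": API_TOKEN, "link": url, "count": count},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("result", {}).get("posts", [])

for post in posts_sharing_link("https://example.com/suspect-article"):
    # Surface who amplified the link and how widely it was shared.
    stats = post.get("statistics", {}).get("actual", {})
    print(post.get("account", {}).get("name"), stats.get("shareCount"))
```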
References

ABC News, Politics. (2020, December 1). Chinese officials accuse Scott Morrison of stoking nationalism in response to fake Afghan tweet as PM defends position on WeChat. ABC News, Politics. https://www.abc.net.au/news/2020-12-01/china-accuses-scott-morrison-of-stoking-nationalism-afghanistan/12939734
Allcott, H., & Gentzkow, M. (2017, Spring). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://web.stanford.edu/~gentzkow/research/fakenews.pdf
AU Jenny. (2020a, November 3). Open response to Echo Hui, reporter of ABC Australia. Gnews. https://gnews.org/524751/ AU Jenny. (2020b, November 15). Our fight against ABC Australia’s fake news. Gnews. https:// gnews.org/560867/ Australian Bureau of Statistics. (2018, February 16). ABS reveals insights into Australia’s Chinese population on Chinese New Year. ABS Chinese New Year Insights. Media release. https://www.abs.gov.au/ausstats/[email protected]/mediareleasesbytitle/D8CAE4F74B82D44 6CA258235000F2BDE?OpenDocument Australian Bureau of Statistics. (2022, April 26). Australia’s population by country of birth. Australian Bureau of Statistics. https://www.abs.gov.au/statistics/people/population/australiaspopulation-country-birth/latest-release Bagshaw, E. (2020, December 14). Australian coal blocked indefinitely by Beijing. The Sydney Morning Herald. https://www.smh.com.au/world/asia/australian-coal-blocked-indefinitely-bybeijing-20201214-p56ne7.html Bass, K., Bannon, S., & Wengui, G. (2020, June 4). Declaration of the New Federal State of China. Gnews. https://s3.amazonaws.com/gnews-media-offload/wp-content/uploads/2020/06/ 03195945/%E3%80%90%E8%8B%B1%E6%96%87%E3%80%91Declaration-of-the-NewFederal-State-of-China.pdf Bergin, J. (2022, May 11). Ratcheting up a red line: How China is being used in the Australian election campaign. First Draft. https://firstdraftnews.org/articles/ratcheting-up-a-red-line-howchina-is-being-used-in-the-australian-election-campaign/ Birtles, B. (2020, June 9). China cautions students about ‘racist incidents’ during coronavirus pandemic if they return to Australia. ABC News. https://www.abc.net.au/news/2020-06-09/ china-warns-students-not-to-return-to-australia-after-coronaviru/12337044 Birtles, B. (2021, June 22). China’s most belligerent journalists used to be the ones doling out insults online. Now they’re the targets. ABC News. https://www.abc.net.au/news/2021-06-19/cancelculture-comes-to-chinas-global-times-tabloid/100220756 Blair, A. (2022, April 6). Election billboards claiming Chinese Communist Party supports Labor spotted around Australia. News.com.au. https://www.news.com.au/national/federal-election/ election-billboard-claiming-chinese-communist-party-supports-labor-spotted-in-melbourne/ news-story/717693d5078afd8a441cdfca757ad7c3 Bogle, A., & Zhao, I. (2020, October 8). Anti-Beijing group with links to Steve Bannon spreading COVID-19 misinformation in Australia. ABC News Science. https://www.abc.net.au/news/sci ence/2020-10-09/anti-beijing-group-with-links-to-steve-bannon-misinformation/12735638 Burrell, A. (2018, December 13). Labor MP mentored by executive tied to Chinese Communist Party. The Australian. https://www.theaustralian.com.au/nation/politics/labor-mp-mentored-byexecutive-tied-to-chinese-communist-party/news-story/ec5f622659bf9bdd127637ed361ca975 Butler, J. (2020, August 26). The New Daily Chinese diplomat says virus didn’t come from Wuhan, slams Australians for ‘whining’. https://thenewdaily.com.au/news/2020/08/26/chinesediplomat-wang-xining-coronavirus-wuhan/ Cave, D., & Williams J. (2018, June 20). Australian law targets foreign interference: China is not pleased. The New York Times. Chan, E., Zhang, S., & Kruger, A. (2021, July 31). Disinformation, stigma and Chinese diaspora: Policy guidance for Australia. firstdraftnews.org. https://firstdraftnews.org/long-form-article/dis information-stigma-and-chinese-diaspora-policy-guidance-for-australia/ Chen, Q., Zhao, Y., Xie, J., & Xu, K. (2021, June 23). 
Chinese less favorable to Australia amid strained ties: GT poll. Global Times. https://www.globaltimes.cn/page/202106/1226840.shtml Chiu, O., & Huang, K. (2020, March 13). Letter to Committee Secretary, Foreign Interference through Social Media Submission 12. Via Australian Parliament House. https://www.goo gle.com/url?q=https://www.aph.gov.au/DocumentStore.ashx?id%3Dfbfb7d4a-4d41-4bba-93e 15892817c1c83%26subId%3D679565&sa=D&source=docs&ust=1638258273662000&usg= AOvVaw1AhwJZMWNv2zPRJAUuOai7, https://www.aph.gov.au/DocumentStore.ashx?id= fbfb7d4a-4d41-4bba-93e1-5892817c1c83&subId=679565 Choy, A. (2019). First Draft Australia, email interview with Ann Choy, University of Hong Kong. CrossCheck Australia 2019 Federal Election.
Davies, A., & Kuang, W. (2022a, May 11). WeChat posts spread misinformation saying Labor plans ‘to turn children gay’ and ‘destroy Chinese wealth’. The Guardian Australia. https://www.theguardian.com/australia-news/2022/may/11/wechat-postswarn-labor-plans-to-turn-children-gay-and-destroy-chinese-wealth Davies, A., & Kuang, W. (2022b, May 10). ‘Josh Frydenberg for PM’: Stories in Chinese circulating on WeChat are positively gushing over the treasurer. The Guardian Australia. https://www.theguardian.com/australia-news/2022/may/10/josh-frydenberg-for-pmstories-in-chinese-circulating-on-wechat-are-positively-gushing-over-the-treasurer Davies, A., & Kuang, W. (2022c, May 13). Chinese-speaking voters critical of Coalition’s ‘militaristic’ stance on China in lead-up to 2022 election, WeChat study shows. The Guardian Australia. https://www.theguardian.com/australia-news/2022/may/13/chinese-speaking-voterscritical-of-coalitions-militaristic-stance-on-china-in-lead-up-to-2022-election-wechat-studyshows Department of Defence. (2020, November). Australian Government Department of Defence. Afghanistan Inquiry Report. https://afghanistaninquiry.defence.gov.au/ Department of Education, Skills and Employment. (2018). International student data 2018. Australian Government, Department of Education, Skills and Employment. https://internationa leducation.gov.au/research/International-Student-Data/Pages/InternationalStudentData2018. aspx Department of Foreign Affairs and Trade. (2020). Trade and investment at a glance 2020. Department of Foreign Affairs and Trade Publications. https://www.dfat.gov.au/publications/tradeand-investment/trade-and-investment-glance-2020 Department of Home Affairs. (2022, updated May 13). Country profile—People’s Republic of China. Department of Home Affairs, Australian Government. https://www.homeaffairs.gov.au/ research-and-statistics/statistics/country-profiles/profiles/peoples-republic-of-china Dilanian, K., & Collins, B. (2021, April 20). There are hundreds of posts about plans to attack the Capitol: Why hasn’t this evidence been used in court? NBC News. https://www.nbcnews.com/politics/justice-department/we-found-hundreds-posts-aboutplans-attack-capitol-why-aren-n1264291 Dziedzic, S. (2021, July 7). Chinese official declares Beijing has targeted Australian goods as economic punishment. ABC News. https://www.abc.net.au/news/2021-07-07/australia-chinatrade-tensions-official-economic-punishment/100273964 Fang, S. (2021, August 2). 微信上的谣言, 为什么怎么辟谣都辟不干净. Wainao.me. https://www. wainao.me/wainao-reads/fea-wechat-fake-news-02082021 Federal Bureau of Investigation. (2019). The China threat: What we investigate, counterintelligence. FBI, United States Government. https://www.fbi.gov/investigate/counterintelligence/the-chinathreat First Draft. (2021a, July 21). Conspiracy theories are populating new Gettr platform. The Daily Briefing, Firstdraftnews.org. https://firstdraftnews.org/articles/conspiracy-theories-are-popula ting-new-gettr-platform/ First Draft. (2021b, July 6). Is Gettr tied to the anti-CCP movement? The Daily Briefing, Firstdraftnews.org. https://firstdraftnews.org/articles/is-gettr-tied-to-the-anti-ccp-movement/ Fitzgerald Rodriguez, J., Lin, S., & Huseman, J. (2020, November 2). Misinformation image on WeChat attempts to frighten Chinese Americans out of voting. ProPublica. https://www.propublica.org/article/misinformation-image-on-wechat-attempts-tofrighten-chinese-americans-out-of-voting Gallagher, R. (2020, May 13). 
China’s disinformation effort targets virus, researcher says. Bloomberg Quint. https://www.bloombergquint.com/technology/china-s-disinformation-cam paign-targets-virus-and-businessman Galloway, A., & Chung, A. (2020, October 12). Beijing influence or racism? China debate hits Melbourne council elections. The Age. https://www.theage.com.au/politics/victoria/beijing-inf luence-or-racism-china-debate-hits-melbourne-council-elections-20201012-p5644c.html
Galloway, A., & Bagshaw, E. (2021, May 13). Going viral: How a book on Amazon inspired the latest COVID conspiracy. Sydney Morning Herald. https://www.smh.com.au/world/asia/goingviral-how-a-book-on-amazon-inspired-the-latest-covid-conspiracy-20210512-p57r6e.html Gleicher, N, (2019, August 19). Removing coordinated inauthentic behavior from China. Meta. https://about.fb.com/news/2019/08/removing-cib-china/ Global Times. (2021, March 3). China deeply concerned about racial discrimination against Chinese in Australia: FM. Global Times. https://www.globaltimes.cn/page/202103/1217190.shtml Gnews. (2020, August 9). Himalayan farms. Gnews. https://www.google.com/url?q=https:// gnews.org/zh-hans/291184/&sa=D&source=docs&ust=1638498613933000&usg=AOvVaw 0dZMWU41_3xBvEj6uZ-HMn Graham, T. (2020, November 20). Twitter post @Timothyjgraham. Accessed via archive. https:// archive.vn/8XDaH Green, A. (2020, October 14). Chinese documentary accuses Australian Strategic Policy Institute and ABC of driving anti-Beijing sentiment. ABC News politics. https://www.abc.net.au/news/ 2020-10-14/documentary-accuses-aspi-abc-of-driving-anti-china-sentiment/12765972 Hamilton-Smith, L. (2019, July 24). UQ student protest turns violent in clash of views on freedom in China and Hong Kong. ABC News. https://www.abc.net.au/news/2019-07-24/uq-student-pro test-anger-over-hong-kong-chinese-minorities/11343130 He, K. (2020, October 13). China’s image problem is worsening globally: It’s time for Beijing to consider a diplomatic reset. The Conversation. https://theconversation.com/chinas-image-pro blem-is-worsening-globally-its-time-for-beijing-to-consider-a-diplomatic-reset-147901 Hong, L. (2020, September 20). For Australia, working hand in hand with the US is a dead-end strategy. The Global Times. https://www.globaltimes.cn/content/1201462.shtml Hsu, J. (2021, May 21). Australia-China relations: More hurdles ahead. The Interpreter, Lowy Institute. https://www.lowyinstitute.org/the-interpreter/australia-china-relations-more-hurdlesahead Hsu, J., & Kassam, N. (2022, April 7). Five key findings from the being Chinese in Australia survey. The Interpreter, Lowy Institute. https://www.lowyinstitute.org/the-interpreter/five-keyfindings-being-chinese-australia Hu, X. [@HuXijin_GT]. (2020, November 20). Australia is “urban-rural fringe” of Western civilization where gangsters roamed. If the US wants to do something bad, it seeks thugs in such a place. Australian army’s killing of Afghan civilians and @ScottMorrisonMP’s attitude prove Canberra’s barbarism. [Tweet]. Accessed via archive. https://archive.vn/OjqAo Hui, E., & Blumer, C. (2019, May 17). Federal election sees supporters for Liberal Gladys Liu spread scare campaigns in hidden chatrooms. ABC Investigations. https://www.abc.net.au/news/201905-17/liberal-supporters-spreading-fake-news-in-hidden-chat-rooms/11121194?nw=0 Hui, E., & Cohen, H. (2020, November 1). They once peddled misinformation for Guo Wengui and Steve Bannon: Now they’re speaking out. ABC News. https://www.abc.net.au/news/202011-01/behind-the-scenes-of-the-guo-and-bannon-led-propaganda-machine/12830824 Huntley, S. (2019, August 19). Maintaining the integrity of our platforms. The Keyword, Company News, Public Policy, Google. https://www.blog.google/outreach-initiatives/public-policy/mai ntaining-integrity-our-platforms/ Hurst, D. (2020, October 16). Eric Abetz refuses to apologise for demanding Chinese-Australians denounce Communist party. The Guardian Australia. 
https://www.theguardian.com/austra lia-news/2020/oct/16/eric-abetz-refuses-to-apologise-for-demanding-chinese-australians-den ounce-communist-party Jones, N. [@nilssonjones_]. (2019, July 24). Tensions escalated halfway through the protests #uqprotest #china #hongkong #uq [Tweet], [video]. Archive. https://perma.cc/DJ2P-DVVY https://twitter.com/nilssonjones_/status/1153903723292192769?s=20 Kao, J., & Shuang Li, M. (2020, March 26). How China built a Twitter propaganda machine then let it loose on coronavirus. ProPublica. https://www.propublica.org/article/how-china-built-atwitter-propaganda-machine-then-let-it-loose-on-coronavirus
Kenyon, M. (2020, May 7). WeChat surveillance explained. The Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto. https://citizenlab.ca/2020/05/wechatsurveillance-explained/ Knaus, C. (2020, November 19). Key findings of the Brereton report into allegations of Australian war crimes in Afghanistan. The Guardian Australia. https://www.theguardian.com/australianews/2020/nov/19/key-findings-of-the-brereton-report-into-allegations-of-australian-war-cri mes-in-afghanistan Koyama, T., Lauring, A., Gallo, R., & Reitz, M. (2020, September 25). Reviews of “Unusual features of the SARS-CoV-2 genome suggesting sophisticated laboratory modification rather than natural evolution and delineation of its probable synthetic route”. Rapid Reviews Covid-19. https://rapidreviewscovid19.mitpress.mit.edu/pub/78we86rp/release/2 Kruger, A. (2019, August 15). Harassment and hate speech spill over from the Hong Kong protests through social media. firstdraft.org. https://firstdraftnews.org/articles/harassment-and-hate-spe ech-is-spilling-over-from-the-hong-kong-protests/ Lalani, F., & Li, C. (2022, 19 May). Here’s how to boost digital safety now and in the future. World Economic Forum. https://www.weforum.org/agenda/2022/05/how-to-ensure-digital-safety-dur ing-wartime/ LUDE Media. (2020a, September 24). Lu De/Wang Dinggang. YouTube Video unavailable https://www.youtube.com/watch?v=y9fYcoaWzTI. Accessed via archive. https://web.archive. org/web/20201102031512/https://www.youtube.com/watch?v=LrKHPNW7UH0 LUDE Media. (2020b, November 20). Livestream. YouTube Video unavailable https://www.you tube.com/watch?v=LrKHPNW7UH0. Accessed via archive https://web.archive.org/web/202 01120152002/https://www.youtube.com/watch?v=CpBCExFl_Ko&gl=US&hl=en Mao, Y., & Qian, Y. (2015, January). Facebook use and acculturation: The case of overseas Chinese professionals in Western Countries. International Journal of Communication, 9(1), University of Southern California. Markson, S., & Hazlewood, J. (2021, May 7). Chinese military scientists discussed weaponising SARS coronaviruses. The Australian. https://www.theaustralian.com.au/nation/politics/chi nese-military-scientists-discussed-weaponising-sars-coronaviruses/news-story/850ae2d2e268 1549cb9d21162c52d4c0 Media Today Group. (n.d.). https://www.mediatodaygroup.com/download/ Meta. (2021a). Meta journalism project. Facebook’s Third-Party Fact Checking Program. https:// www.facebook.com/journalismproject/programs/third-party-fact-checking Meta. (2021b). CrowdTangle. Meta Journalism Project. https://www.facebook.com/journalismpr oject/tools/crowdtangle Meta. (2022a, February 26). Meta’s ongoing efforts regarding Russia’s invasion of Ukraine. Meta. https://about.fb.com/news/2022/02/metas-ongoing-efforts-regarding-russiasinvasion-of-ukraine/ McGregor, R., Kassam, N., & Hsu, J. (2021, November). Lines blurred: Chinese community organisations in Australia. Lowy Institute. https://www.lowyinstitute.org/publications/lines-blurredchinese-community-organisations-australia Morris, E., & Fonrouge, G. (2020, October 14). Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad. New York Post. https://nypost.com/2020/10/14/ email-reveals-how-hunter-biden-introduced-ukrainian-biz-man-to-dad/ Morrison, S. (2021, May). WeChat, Scott Morrison: Stronger, Safer, We Work Together. Scott Morrison post. Accessed via archive https://archive.vn/NlEo9 Nguyen, T. (2021, July 1). The newest MAGA app is tied to a Bannon-allied Chinese billion. Politico. 
https://www.politico.com/news/2021/07/01/maga-app-bannon-chinese-billionaire-497767 Qin, A., Wang, V., & Hakim, D. (2020, November 20). How Steve Bannon and a Chinese billionaire created a right-wing coronavirus media sensation. The New York Times. https://www.nytimes. com/2020/11/20/business/media/steve-bannon-china.html
Razak, I. (2019, Nov 7). AEC dismisses impact of purple Chinese-language signs on election of Josh Frydenberg and Gladys Liu. ABC News. https://www.abc.net.au/news/2019-11-07/aec-dis misses-claims-chinese-language-signs-influenced-election/11681840 Ruan, L., Knockel, J., & Crete-Nishihata, M. (2020, March 3). Censored contagion: How information on the coronavirus is managed on Chinese social media. The Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto. https://citizenlab.ca/2020/03/censoredcontagion-how-information-on-the-coronavirus-is-managed-on-chinese-social-media/ SBS Chinese. (2021, June 18). WeChat news Chinese parents share with their kids. SBS. https:// www.sbs.com.au/chinese/english/wechat-news-chinese-parents-share-with-their-kids Smyth, J., & Shepherd, C. (2020, December). Chinese app WeChat censors Australian PM Scott Morrison’s post. Financial Times. https://www.ft.com/content/9c5376e5-5d94-4942-ba3c-e55 3a75508cb Stella. (2020, November 6). Rudy Giuliani’s exclusive reaction on the collusion between the Bidens and Communist China. Gnews. https://gnews.org/533161/ Tan, C. B. (2013, February). Routledge handbook of the Chinese diaspora. The University of Hong Kong. (2020, July 11). HKU responds to the media concerning a former staff member’s TV interview. HKU Press Release. https://www.hku.hk/press/press-releases/det ail/21274.html Thomala, L. L. (2021, August 25). Number of monthly active WeChat users from 2nd quarter 2011 to 2nd quarter 2021. Statista.com. https://www.statista.com/statistics/255778/number-ofactive-wechat-messenger-accounts/ Tourism Australia. (2019). Short-term visitors arrival. https://www.tourism.australia.com/content/ dam/assets/document/1/7/9/g/v/2018479.pdf Twitter. (2019, August 19). Information operations directed at Hong Kong. Twitter Safety. Blog. Twitter. https://blog.twitter.com/en_us/topics/company/2019/information_operations_d irected_at_Hong_Kong Urbani, S. (2019, November 20). The 5 closed messaging apps every journalist should know about and how to use them. Firstdraft.org. https://firstdraftnews.org/articles/the-5-closed-messagingapps-every-journalist-should-know-about-and-how-to-use-them/ Uren, T., Thomas E., & Wallis J. (2019, September 3). Tweeting through the Great Firewall. Australian Strategic Policy Institute. https://www.aspi.org.au/report/tweeting-through-great-fir ewall Uribe, A. (2020, June 6). China warns citizens not to travel to Australia. The Wall Street Journal. https://www.wsj.com/articles/china-warns-citizens-not-to-travel-to-australia-11591446223 Visiontay, E. (2020, June 30). China accuses Australia of mass espionage, peddling rumours and stoking confrontation. The Guardian Australia. https://www.theguardian.com/australia-news/ 2020/jun/30/china-accuses-australia-of-mass-espionage-peddling-rumours-and-stoking-confro ntation Wallis, J., Uren, T., Zhang A., Hoffman S., Li, L., Pascoe, A., & Cave, D. (2020). Retweeting through the great firewall. Australian Strategic Policy Institute. International Cyber Policy Centre. Policy Brief 33/2020. Wanning, S. (2019a, April 1). Chinese-language digital/social media in Australia: Double-edged sword in Australia’s public diplomacy agenda. Media International Australia. Sage Journals. https://doi.org/10.1177/1329878X19837664; https://journals.sagepub.com/doi/10.1177/ 1329878X19837664?icid=int.sj-abstract.similar-articles.2 Wanning, S. (2019b, April). Is there a problem with WeChat? China Matters Explores. China Matters. 
http://chinamatters.org.au/wp-content/uploads/2019/04/China-Matters-ExploresApril-2019.pdf
Wanning, S. (2020, August 10). Why Trump's WeChat ban does not make sense—And could actually cost him Chinese votes. The Conversation. https://theconversation.com/why-trumps-wechat-ban-does-not-make-sense-and-could-actually-cost-him-chinese-votes-144207
WeChat. (n.d.). 微信联合多家机构 设立辟谣公众号"谣言过滤器" [WeChat partners with multiple organisations to launch the rumour-debunking official account "Rumour Filter"]. https://weixin.qq.com/cgi-bin/readtemplate?lang=zh_CN&t=weixin_rumor_filter
WeChat. (2020, September 30). Letter to committee chair, foreign interference through social media submission 30. WeChat International Pte. Ltd. Accessed via https://www.google.com/ url?q=https://www.aph.gov.au/DocumentStore.ashx?id%3D773089c5-92bc-4d02-a418-717 f2f55f48c%26subId%3D692212%23:~:text%3DWeChat%2520is%2520a%2520widely%252 0used,registered%2520Australian%2520mobile%2520phone%2520number.&sa=D&source= docs&ust=1638170394932000&usg=AOvVaw2UZMwyhXlQx7r40BwCvXnj WeChat. (2021a, August 19). WeChat—Terms of service. WeChat. https://www.wechat.com/en/ser vice_terms.html WeChat. (2021b, August 19). WeChat—Acceptable use policy. WeChat. https://www.wechat.com/ en/acceptable_use_policy.html WeChat. (2021c). How do I use broadcast messages? WeChat Help Center. WeChat. https://help. wechat.com/cgi-bin/micromsg-bin/oshelpcenter?opcode=2&lang=en&plat=ios&id=150820 ui6N7v150820EVVjmU WeChat. (2021d). WeChat help center moments. https://help.wechat.com/cgi-bin/newreadtemplate? t=help_center/topic_list&plat=ios&lang=en&Channel=helpcenter&detail=1001088 Wengui, G. (2020, July 29). Dr. Limeng Yan on War Room: the CCP virus is made from backbone virus owned only by the PLA labs. YouTube Video unavailable https://www.youtube.com/watch? v=CpBCExFl_Ko. Accessed via archive https://web.archive.org/web/20210325155047/https:// www.youtube.com/watch?v=y9fYcoaWzTI WhatsApp. (2021). Forwarded vs forwarded many times. Help Center, WhatsApp. https://faq.wha tsapp.com/general/chats/about-forwarding-limits/?lang=en Xiao, B., Aualiitia, T., Salim, N., & Yang, S. (2021, March 4). Misinformation about COVID vaccines is putting Australia’s diverse communities at risk, experts say. ABC News. https://www. abc.net.au/news/2021-03-04/covid-19-vaccine-misinformation-cald-communities/13186936 Xinhua. (2020, August 27). Chinese envoy calls for respect, goodwill, fairness, vision to promote China-Australia relationship. Xinhua. Accessed via http://en.people.cn/n3/2020/0827/c900009739101.html Xia, Y. (2020, May 4). The battle between the Federal Election and WeChat. Vision Times. Accessed via archive https://web.archive.org/web/20200804201201/http://chinamatters.org.au/views-inchinese/wechat-and-the-australian-election/ YouTube. (2021). Community guidelines alert. YouTube. https://www.youtube.com/channel/UC6 K3m7kzxk5GXCkaUEP96kQ/videos. Archived March 12, 2021: https://web.archive.org/web/ 20210312045459/https://www.youtube.com/channel/UC6K3m7kzxk5GXCkaUEP96kQ/vid eos and https://archive.vn/IyljE YouTube. (n.d.). Video unavailable. YouTube. https://youtu.be/CpBCExFl_Ko Zhang, A. (2021, July 1). #StopAsianHate: Chinese diaspora targeted by CCP disinformation campaign. The Strategist. Australian Strategic Policy Institute. https://www.aspistrategist.org. au/stopasianhate-chinese-diaspora-targeted-by-ccp-disinformation-campaign/ Zhang, S. [@stevievzh]. (2021, May 14). My monitoring for First Draft has found the Himalaya movement - led by exiled Chinese billionaire Miles Guo, and closely linked to Bannon - had also been pushing this book. The group initially publicised the existence of this book in February this year [Tweet] https://twitter.com/stevievzh/status/1393098829243699203 Zhang, S., & Chan, E. (2020, December 11). It’s crucial to understand how misinformation flows through diaspora communities. firstdraft.org. https://firstdraftnews.org/articles/misinfo-chinesediaspora/ Zhao, L. [@zlj517]. (2020, November 29). Shocked by murder of Afghan civilians & prisoners by Australian soldiers. 
We strongly condemn such acts & call for holding them accountable. [photo]. [Tweet]. Accessed via archive. https://web.archive.org/web/20201130024208/ https://twitter.com/zlj517/status/1333214766806888448?ref_src=twsrc%255Egoogle%257 Ctwcamp%255Eserp%257Ctwgr%255Etweet
Chapter 6
The Battle between the Thai Government and Thai Netizens Over Mis/Disinformation During COVID-19 Pijitra Suppasawatgul
Abstract The spread of the COVID-19 virus as well as false information on the pandemic not only harmed citizens’ health but also undermined the credibility of many governments during their crisis management. In Thailand, false information on COVID-19 vaccines created anti-government sentiments and had a negative impact on public health. The Thai government tried to suppress false information by launching a centralised COVID-19 information centre. It also announced an Emergency Decree on 26 March 2020 that placed the country in lockdown. Its other measures included appointing official spokespersons and setting up a Facebook page for the Centre for the COVID-19 Situation Administration (CCSA) to control communication of the emerging situation. The Anti-Fake News Centre, a fact checking unit under the Ministry of Digital Economy and Society (MDES), legitimised the government’s claims as truths during the crisis. As a result, tensions among government agencies, journalists and netizens arose. This chapter adopts an ecology approach to examine how the Thai government combatted online falsehoods during the pandemic. It explores online falsehoods through three waves of the pandemic crisis and provides an analysis of how the Thai government attempted to control communication through laws and regulations, actions by government agencies and through social media campaigns. This chapter also discusses the gap between government information and citizens’ trust regarding the efficacy of the Sinovac vaccine using data collected from social listening tool Zocial Eye. This chapter illustrates the confrontation between the state and civil society in anti-falsehood policy during the pandemic. Keywords Vaccine falsehoods · Thailand · Fact checking · Social listening · Government control
P. Suppasawatgul (B) Department of Journalism and New Media, Faculty of Communication Arts, Chulalongkorn University, Bangkok, Thailand e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature B.V. 2023 C. Soon (ed.), Mobile Communication and Online Falsehoods in Asia, Mobile Communication in Asia: Local Insights, Global Implications, https://doi.org/10.1007/978-94-024-2225-2_6
6.1 Overview of Online Falsehoods in Thailand During COVID-19

The number of falsehoods relating to public health increased significantly during the pandemic across different social media platforms. In 2021, YouTube, Facebook and LINE ranked among the mobile applications with the most active users in Thailand, while TikTok was the most downloaded application during the pandemic (Hootsuite, 2021). Twitter connected more with the younger generation, as young Thai men and women used the platform to receive news and to protest against the government (InfoQuest, 2021). LINE, the largest Mobile Instant Messaging Service (MIMS) in Thailand, served as a platform for peer-to-peer connections: co-workers, colleagues, friends and family engaged in conversations and shared information via groups formed on LINE. However, owing to Facebook's claim of being able to reach around 51 million users, various government agencies and media organisations used Facebook as their main channel to connect with and inform the public on issues pertaining to COVID-19. The number of fact checking organisations in Asia increased from 35 in 2019 to 82 in 2020 (Kajimoto, 2021). This trend reflects increased awareness of the problem of online falsehoods and the growing demand for fact checkers in the region. In Thailand, there are four main fact checking entities: (i) the Anti-Fake News Centre (AFNC), a state-owned fact checking organisation founded in 2019 by the Ministry of Digital Economy and Society; (ii) Co-Fact Thailand, an independent crowdsourcing fact checker; (iii) Agence France-Presse (AFP), the only fact checking group operating in Thailand that is certified by the International Fact-Checking Network (IFCN); and (iv) Sure and Share, a television programme that verifies false claims relating to health and science issues. From January 2020 to June 2021, the major social media platforms in Thailand, namely Facebook, Twitter, YouTube and LINE, generated 18,429,723 messages on COVID-19 issues in the Thai language, which drew close to 2.9 billion engagements, an estimated 63 engagements per second (Bangkok Post, 2021a, 2021b). Among these messages, fact checkers flagged 1,922 as having fake content. Facebook and Twitter were the main platforms on which citizens gave feedback on many of the government's top-down decisions. Research conducted by Thailand's Ministry of Health found that Facebook was the top source of falsehoods relating to COVID-19 (51.1 per cent of respondents said they encountered falsehoods on the platform), while traditional media and LINE messaging came in second and third (falsehoods were encountered there by 28.8 per cent and 15.6 per cent of respondents respectively) (National Health Commission Office, 2020).
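As an illustrative back-of-envelope check, the totals above are consistent with the quoted per-second rate (the 18-month window is approximated):

```python
# Back-of-envelope check of the engagement rate reported above.
total_engagements = 2_900_000_000           # ~2.9 billion over Jan 2020 to Jun 2021
elapsed_seconds = 18 * 30 * 24 * 60 * 60    # ~18 months, approximated as 30-day months
print(round(total_engagements / elapsed_seconds))  # ~62, close to the reported 63 per second
```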
6.2 Three Waves of "COVID-19 Fake News" as Flagged by Fact Checkers

The falsehoods relating to COVID-19 can be categorised under three waves of the COVID-19 outbreak (Kularb et al., 2021).

a. The first wave (January to May 2020): This period saw the largest volume of COVID-19 online falsehoods. Fact checkers flagged 961 false messages across major social media platforms after the first confirmed case was found in Thailand on 13 January 2020. By March, the first death and the formation of a large cluster at a boxing stadium had been reported. The government declared a State of Emergency on 26 March 2020 (Office of the National Security Council, 2020). The volume of false messages peaked in April 2020 at 282 messages during the Thai New Year public holiday, which coincided with an increased spread of the virus resulting from long-holiday travel. The spread of COVID-19 prompted the government to impose a lockdown, which led to the closing of schools, people working from home and the banning of crowded gatherings on public transport.

b. The second wave (November 2020 to March 2021): The second wave was linked to immigrant workers at a seafood market in Samut Sakorn, one of Bangkok's suburban provinces. A total of 284 false messages were shared during the weeks before and after Christmas as well as during the New Year holiday. During this period, the volume of online conversations and the number of online falsehoods were not as high as during the first wave. The media also treated this wave as a local crisis that could be contained.

c. The third wave (April to June 2021): The third wave stemmed from a super-spreader event in Bangkok's entertainment district. This case involved high-ranking government officials and members of parliament who had already received their first vaccination jab. A total of 517 false messages were confirmed by local fact checkers. The main theme of the online falsehoods was distrust of the government's vaccine programme and vaccination efforts.

The online falsehoods that spread during the first wave focused on issues relating to the prevention of the virus and people's health. The number of false messages increased after the government announced a national lockdown. During the second wave, the number of false messages shrank to about one-third of the first wave's volume, because the pandemic was limited to a suburban area and confined to a group of immigrant workers. However, online falsehoods increased during the third wave, with highly emotional and critical sentiments expressed towards the government. According to a report published by the government, the falsehoods that gained the most traction between 1 January 2020 and 23 December 2021 were those relating to government policies. Online falsehoods relating to government policy received an average of 53,017 engagements per day, compared with falsehoods on economic issues, which saw an average of 15,966 engagements per day (Ministry of Digital Economy and Society, 2021). The two factors that contributed to the surge of online falsehoods were the ambiguity of policy and the lack of collaboration between the Centre for COVID-19 Situation Administration (CCSA) and other government agencies.
One example of contradictory policies was when the Bangkok Metropolitan Administration (BMA) announced that five temporarily closed business categories, including public parks, would reopen on 1 June 2021, only to have the CCSA say that it would override the policy and reassess the situation on 14 June (Parpart & Wilson, 2021). Internet users in Thailand leveraged peer-to-peer networks on social media to communicate their complaints and feedback to the government. In many cases, they dramatised and exaggerated their stories to attract the government's attention, especially on issues relating to the COVID-19 vaccine. Thai netizens were dissatisfied with the choices of vaccines provided by the government and with the perceived lack of transparency in government vaccine purchasing and distribution. I used a social listening tool, Zocial Eye, to analyse data relating to the pandemic. A total of 332,968 messages published from 8 February to 7 June 2021 on social media platforms such as Facebook and Twitter, and on major news websites, were analysed. Figure 6.1 presents a timeline of the topics that were discussed. In the middle of 2021, there were high numbers of COVID-19 infections, with an average of 200 deaths per day (Theptong, 2021). The government's delay in importing vaccines and its refusal to participate in the United Nations' COVAX programme drew much criticism online. This affected the government's overall credibility. The data collected highlighted people's concerns over the safety of the vaccines (e.g., "vaccine side effects", "death from vaccine"). The primary source of such concerns was the Prime Minister's rescheduling of his AstraZeneca vaccination over fears of blood clots on 12 March 2021. Public fears and unhappiness also increased after several parliamentary ministers contracted the virus at a club in the heart of Bangkok on 7 April 2021, when they were reported to have already been inoculated with the Sinovac vaccine. From 8 February to 7 June 2021, Sinovac received the most mentions among the vaccines on Facebook and Twitter, generating 55,949 messages. The second most mentioned vaccine was AstraZeneca with 31,992 messages, followed by Pfizer, Sinopharm and Moderna. The messages expressed scepticism over the vaccines' side effects and dissatisfaction with the quality of the vaccines chosen by the government. During that time, the government had launched a campaign to promote its vaccination programme and said that those who expressed negative sentiments towards the vaccines were threatening public welfare. The lack of trust resulted in netizens creating false information to challenge government campaigns that aimed to increase public acceptance of the vaccines (Co-Fact, Thailand, 2021). The following section presents how the Thai government attempted to control people's criticisms through information campaigns and regulations.
Fig. 6.1 Timeline of messages on social media from 8 February to 7 June 2021
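Brand-level mention counts like those reported above can be approximated from any raw message export with a simple keyword tally. The sketch below assumes a hypothetical CSV export with one row per message and a free-text "message" column; the column name and keyword lists are illustrative, not Zocial Eye's actual export schema.

```python
# Hypothetical keyword tally over a social listening export (illustrative schema).
import csv
from collections import Counter

VACCINE_KEYWORDS = {
    "Sinovac": ["sinovac", "ซิโนแวค"],
    "AstraZeneca": ["astrazeneca", "astra zeneca", "แอสตร้าเซนเนก้า"],
    "Pfizer": ["pfizer", "ไฟเซอร์"],
}

def count_mentions(path):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row["message"].lower()
            for brand, keywords in VACCINE_KEYWORDS.items():
                # A message mentioning several brands is counted once per brand.
                if any(k in text for k in keywords):
                    counts[brand] += 1
    return counts

print(count_mentions("zocial_eye_export.csv"))
```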
6.3 Top-Down Control During the Pandemic

Thailand's emergency decree, declared in March 2020, was subsequently extended repeatedly (for the fifteenth time in 2021) to control the pandemic. The decree gave the Thai government centralised command and the ability to transfer ministerial powers covering 31 laws to the direct control of Prime Minister Prayuth Chan-Ocha. It granted the authorities temporary broad powers purported to combat COVID-19 and to protect the people from false information. These powers allowed for government control not only over immigration, health and the procurement of vaccines but also over several areas of defence and cybersecurity, through laws such as the Communicable Disease Control Act, Drug Act, National Vaccine Security Act, Air Navigation Act, Navigation Act, Emergency Medical Act and National Cyber Security Act (VOA News, 2021). As chairperson of the Centre for the COVID-19 Situation Administration (CCSA), the Prime Minister exercised centralised command. The centre makes key decisions pertaining to the pandemic and distributes policy guidelines via government agencies and regional offices. Government policies and guidelines related to COVID-19 (e.g., lockdowns, vaccine procurement, vaccine distribution across the country) come under the jurisdiction of the CCSA. The government adopted a three-pronged approach to combat falsehoods relating to the pandemic: laws and regulations, government agencies and social media campaigns (see Fig. 6.2).

Fig. 6.2 Approaches to combat falsehoods: laws and regulations (Computer-Related Crime Act 2017; the Emergency Decree covering 31 laws; Royal Gazette orders controlling media and online content); government agencies (the Centre for the COVID-19 Situation Administration (CCSA); the Office of the National Broadcasting and Telecommunications Commission (NBTC); the Anti-Fake News Centre Thailand (AFNC); the Anti-Fake News Corp (ANSCOP)); and social media campaigns ("Thais Fight Covid", the official CCSA fan page @Thaimoph; the "Covid Information Center", the official fan page of the PR Department @informationcovid; information operations (IO))

Laws and regulations: In Thailand, the government is given the right to combat falsehoods under the Computer Crimes Act of 2017. However, the Ministry of Digital Economy and Society has made several amendments and related moves (Article 19, 2021):
• On 18 May 2021, the Prime Minister signed an executive order establishing the Committee on Suppression and Correction of Dissemination of False Information on Social Media.
• On 27 May 2021, the Minister of Digital Economy and Society (MDES) established three new sub-committees: one for the supervision of social media, one for enhancing law enforcement measures to prevent and solve problems relating to social media, and one for drafting ministerial regulations under the Computer Crimes Act.
• On 8 June 2021, the Prime Minister and the Minister of Defence assigned the Council of State to review and study Thai and foreign laws, with the aim of regulating social media platforms such as Twitter and Facebook (Bugher, 2021).
• On 13 August 2021, the Minister announced that the Thai government would require social media companies operating in the country to establish offices in Thailand, collect their network traffic data and hand over the information to the government when needed (The Secretariat of the Cabinet, 2021a).
• The government action that drew the most significant public backlash was Royal Gazette No. 27, Section 9, announced on 29 July 2021. It prohibits the dissemination of news and online information deemed to be false or to spread fear. The penalties include up to two years of imprisonment and a 40,000 baht fine (The Secretariat of the Cabinet, 2021b).

Government agencies: Laws and regulations are enforced by government agencies. During the COVID-19 crisis, the government concentrated its decision-making and communication mainly in the Centre for the COVID-19 Situation Administration (CCSA). The CCSA appointed a team of doctors and medical staff as spokespersons to announce government policies, provide updates on the pandemic situation and respond to public criticism. The Office of the National Broadcasting and Telecommunications Commission (NBTC) also supported the CCSA by regulating fake news in broadcasting. In July 2021, the government granted the NBTC the authority to order media channels and internet service providers (ISPs) to remove false content. Furthermore, the NBTC can require ISPs to check IP addresses and immediately suspend services without seeking permission from a court. ISPs that do not comply with such an order are deemed to have failed to meet the operating requirements of their licences and are subject to further action by the NBTC. The NBTC was also tasked with helping the public develop a sense of caution towards false information (The Secretariat of the Cabinet, 2021b). The state-owned fact checker, the Anti-Fake News Centre (AFNC), plays a role in combating online falsehoods; in verifying information published by local media outlets, its work led to tensions between the government and journalists. Additionally, in 2020, the Technology Crime Suppression Police Bureau was set up to monitor cybercrime, including "fake news". The bureau has nine regional offices that investigate online falsehoods and enforce cybercrime laws.

Social media campaigns: The government used social media platforms as its primary communication channel. The Thai government set up a COVID-19 Information Centre fan page to provide updates on COVID-19 cases and evolving government policies.
Fig. 6.3 Fake accounts that were part of the government’s “troll army”
Although the fan page saw high engagement, the engagement consisted mostly of criticism of the government. The government's communication strategies also took the form of "troll armies", comprising fake Facebook and Instagram accounts that countered criticism of the government (see Fig. 6.3 for examples of fake accounts). In 2020, Twitter identified 926 accounts linked to pro-government and pro-army information operations (Stanford Internet Observatory, 2020). As the examples in Fig. 6.3 show, the fake accounts had few posts in their feeds and few followers, and most of them focused primarily on supporting the government.
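The tell-tale signs described above (sparse feeds, few followers, near-exclusively pro-government posting) lend themselves to a simple screening heuristic. The sketch below is a hypothetical illustration of such a screen, not the method used by Twitter or the Stanford Internet Observatory; the thresholds and field names are invented for the example.

```python
# Hypothetical screen for accounts matching the "troll army" profile described
# above. Thresholds and field names are invented for illustration only.
def looks_like_troll(account, max_posts=20, max_followers=50, min_progov_share=0.9):
    sparse = account["post_count"] <= max_posts
    isolated = account["follower_count"] <= max_followers
    one_note = account["progov_posts"] / max(account["post_count"], 1) >= min_progov_share
    return sparse and isolated and one_note

accounts = [
    {"name": "user_a", "post_count": 8, "follower_count": 3, "progov_posts": 8},
    {"name": "user_b", "post_count": 540, "follower_count": 1200, "progov_posts": 40},
]
print([a["name"] for a in accounts if looks_like_troll(a)])  # ['user_a']
```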
6.4 False Information on MIMS

Given their closed nature, MIMS such as LINE posed significant challenges to efforts to combat false information. The encryption of messages on MIMS prevents governments and technology companies from identifying, fact checking and issuing warnings against false content. In Thailand, LINE is the most popular messaging platform (Hootsuite, 2021). According to Co-Fact, a local fact checking organisation that verifies messages on LINE, the majority of false messages shared during the pandemic related to herbal cures and the prevention of COVID-19 (Co-Fact, Thailand, 2021). The false information circulated on LINE can be categorised into three types: misinformation relating to health and science, nationalistic propaganda and anti-vaccine false information (Co-Fact, Thailand, 2021).

Misinformation on health and science: The majority of falsehoods on LINE dealt with how one could protect oneself from contracting the COVID-19 virus.
Fig. 6.4 False claims of local herbs curing COVID-19 (A member from the Democrat Party claiming that a local herb can cure one of COVID-19 [Left image]. A correction that was published by AFNC that states that the herb cannot cure one of COVID-19 [Right image])
The false claims fact checked by Co-Fact that received the highest engagement were: (i) that vaccines kill rather than cure; (ii) that vaccines inject magnets into the body; (iii) that philanthropist and Microsoft co-founder Bill Gates had called for the withdrawal of COVID-19 vaccines; and (iv) that local herbs (e.g., kratom) could cure COVID-19 (see Fig. 6.4).

Nationalistic propaganda: Another type of false claim was nationalistic propaganda promoting domestic vaccine factories. Two false claims circulated widely on LINE. The first claimed that the COVID-19 Vaccines Global Access (COVAX) initiative had failed and that the government had made the right decision not to participate in it (see Fig. 6.5). This claim also promoted local vaccine production factories as suppliers of vaccines for the region. The second claimed that Chinese vaccines had been ranked by the New York Times among the top four safest vaccines (see Fig. 6.6).

Anti-vaccine false information: This category of false information discredits vaccines, typically citing an expert's comments as a reference. One well-known case relating to mRNA vaccines was a message that referenced a Thai doctor working in Germany. Another widely shared false claim included a video linked to the anti-vaccine movement in the US; the clip claimed that the "Covid virus [being] just like the common flu" and that "500,000 Americans had died from vaccinations" (Fuellmich, 2021). See Fig. 6.7.

False information such as the above examples has been shared on LINE among family groups across generations. Studies have found, however, that younger LINE users do not take LINE messages from senior family members seriously; they treat many forwarded messages carrying false claims as noise rather than as credible voices. It is worth noting that corrections received from a close friend or family member are re-shared more than corrections received from a casual acquaintance, particularly among close friends who agree politically. In addition, MIMS users are willing to fact check only for their own sake, and they are willing to correct other MIMS users only if they are close friends with the person who originally sent the misinformation (Irene, 2020). Such content can thus spread not only within MIMS but across other social media platforms, amplified through the networks of those who believe the false claim.
Fig. 6.5 Discrediting the COVAX programme and promoting a home-grown vaccine factory (“COVAX has not been able to deliver large shipments to any country. This is because many of the countries with COVID-19 vaccine production facilities such as the US, England, Russia, India, Thailand, South Korea, among others, have delayed their deliveries to COVAX. These countries have decided to keep the vaccines for their citizens. Those countries participating in the COVAX programme have wasted vast sums of money already paid. They must start from square one and contact vaccine facilities in countries such as Thailand and India” [Left image]. “In a surprising turn, COVAX has caused an uproar among many participating countries. Meanwhile, other countries are buying vaccines from Thailand’s Siam Bioscience. In many countries in and outside of ASEAN, government administrations have made a grave mistake by participating in the COVAX programme alongside WHO” [Right image].)
6.5 Conclusion

During a crisis such as the COVID-19 pandemic, governments use state emergency laws to keep the public calm and minimise ramifications.
Fig. 6.6 False claim that Chinese vaccines were ranked the four safest vaccines by the New York Times
Fig. 6.7 Anti-vaccine false information (A post on LINE that referenced a Thai doctor working in Germany [Left image]. A video interview with a medical specialist in the US who endorsed not vaccinating [Right image])
the public and the media have a right to check and question government decisions that impact society. Citizens view social media platforms as a public space where they can question policy, provide critical feedback and demand government transparency during crisis management. In Thailand, there is a trust gap between the government and citizens. The government’s response to false information has generated grave concerns that it is suppressing freedom of expression to cover up its mistakes in managing the pandemic. The Emergency Decree has stoked fears that Thailand is sinking into another period of strongman rule. Paul Chambers, an academic at
the Center of ASEAN Community Studies at Naresuan University, points out that “Thailand has become a classic example of leaders with autocratic preferences who use COVID-19 to rationalise a descent into dictatorship” (VOA News, 2021). Thus, managing the hazards of online falsehoods with minimal cost to democratic freedoms remains a key challenge in Thailand. Managing false information on closed and encrypted platforms like MIMS requires a different approach. In Thailand, MIMS such as LINE Thailand connect users to legitimate and authoritative content relating to COVID-19 (e.g., information on COVID-19 clusters, hospital services and travel guidance), newsrooms and universities. LINE also provides officially updated information via its information hub and supports local journalists with “LINE Today”, which provides daily news and information to users. As presented earlier, the Thai government has been using a three-pronged approach to manage online falsehoods. However, absolute control of information is impossible for any democratic government, and social media governance requires multi-stakeholder efforts.
References

Article 19. (2021, June 11). https://www.article19.org/resources/thailand-fake-news-undermine-freedom-of-expression/
Bangkok Post. (2021a, July 30). PM orders ban on fake news. https://www.bangkokpost.com/thailand/general/2156827/pm-orders-ban-on-fake-news
Bangkok Post. (2021b, November). The infodemic fake news. https://www.bangkokpost.com/specials/data-visualization/?fbclid=IwAR1BFO-2xfLNNXVIoIXIxUUAZnwum0xTkoG6E0LHP46uoYZ_CfkoULiz2FA
Bugher, M. (2021, June 11). Thailand: Proposed initiatives to combat ‘fake news’ undermine freedom of expression. https://www.article19.org/resources/thailand-fake-news-undermine-freedom-of-expression/
Co-Fact, Thailand. (2021, October 31). https://blog.cofact.org
Fuellmich, R. (2021, September 11). Best news here. https://gloria.tv/post/ydFPR2tMq7Pd2qFNgzKfPofgc
Hootsuite. (2021, February 11). https://datareportal.com/reports/digital-2021-thailand?rq=thailand%202021
InfoQuest. (2021). Thailand media landscape 2021. InfoQuest.
Irene, P. V. (2020, May). Understanding misinformation on mobile instant messages (MIMs) in developing countries. Harvard Kennedy School, Shorenstein Center on Media, Politics and Public Policy.
Kajimoto, M. (2021, September). https://newsinasia.jninstitute.org/chapter/faster-facts-the-rapid-expansion-of-fact-checking/
Kularb, P., Deesukon, T., & Panyakam, M. (2021, July). Investigative report on COVID-19 information by fact-checkers in Thailand. Bangkok.
Medeiros, B. (2020). Addressing misinformation on WhatsApp in India through intermediary liability policy, platform design modification, and media literacy. Journal of Information Policy, 10, 277.
Ministry of Digital Economy and Society. (2021, December 27). https://www.mdes.go.th/news/detail/5095
National Health Commission Office. (2020). https://infocenter.nationalhealth.or.th/node/28170
Office of the National Security Council. (2020, May 13). https://www.nsc.go.th/wp-content/uploads/2020/05/CV19-01.pdf
Parpart, E., & Wilson, J. (2021, June 1). Covid chaos between Bangkok province and central government draws fire from both sides of the aisle. Thai Enquirer. https://www.thaienquirer.com/28129/covid-chaos-between-bangkok-province-and-central-government-draws-fire-from-both-sides-of-the-aisle/
PR Department. (n.d.). https://www.facebook.com/informationcovid19/
Singh, S. A. (2020). WhatsApp: How internet platforms are combating disinformation and misinformation in the age of COVID-19. New America.
Stanford Internet Observatory. (2020). Analysis of Twitter takedowns linked to Cuba, the Internet Research Agency, Saudi Arabia, and Thailand.
The Secretariat of the Cabinet. (2021a, August 13). http://www.ratchakitcha.soc.go.th/DATA/PDF/2564/E/188/T_0009.PDF
The Secretariat of the Cabinet. (2021b, July 29). http://www.ratchakitcha.soc.go.th/DATA/PDF/2564/E/170/T_0001.PDF
Theptong, W. (2021, August 25). https://www.bbc.com/thai/thailand-58321371
VOA News. (2021, April 29). https://www.voanews.com/a/east-asia-pacific_delegation-powers-thai-pm-raises-concern-authoritarian-turn/6205195.html
Part II
Impact of Online Falsehoods Transmitted via MIMS
Chapter 7
Users, Technologies and Regulations: A Sociotechnical Analysis of False Information on MIMS in Asia

Shawn Goh
Abstract This chapter identifies three understudied but crucial macro factors that affect the problem of false information on mobile instant messaging services (MIMS) in Asia. First, Asia has a predominant and rising “mobile-first, mobile-centric” user demographic that may be particularly vulnerable to false information. This is largely due to poor digital literacy skills and an unconscious, continued entrenchment of digital illiteracies afforded by mobile devices, such as the routinising of narrow information-seeking and information verification practices. Second, the rise of super-apps (especially those with instant messaging functions) in Asia may worsen the microtargeting of false information, especially under the region’s weak data and privacy protection regimes. Furthermore, as super-apps become more deeply embedded in people’s lives, people’s increased reliance on and trust in these platforms may make them less critical of the information that they receive. Third, relying on legislation to combat false information poses both legitimacy and practical challenges to governments in Asia, leading to unintended and paradoxical outcomes that compromise the effectiveness of such solutions.

Keywords Sociotechnical analysis · Asia · Digital literacy · Super-apps · Legislation
7.1 Three Gaps in the Literature: Geography, Platforms and Scale

Since 2016, there has been a surge in academic scholarship on false information—comprising fake news, misinformation and disinformation. According to a systematic literature review of over a hundred peer-reviewed articles on fake news by Arqoub et al. (2020), research on the topic has substantially increased after 2016—the number
of research articles on fake news published from 2006 to 2016 averaged between one and three articles each year but spiked to about 80 articles between 2017 and 2018. The authors cited the election of Donald Trump in the 2016 United States (US) Presidential Election, as well as Trump’s repeated demonisation of traditional news media and journalism, as possible reasons for the surge in academic interest. Di Domenico et al. (2021) corroborated this in another systematic literature review, attributing the spike in research to significant political events, including the 2016 Brexit referendum.

Despite the significant increase in attention to false information, inequalities persist in the academic literature. Systematic reviews highlight three gaps. First, existing research has largely been done in the context of the West, with insufficient focus on other parts of the world such as Asia. Arqoub et al. (2020) demonstrated this geospatial skew in fake news research—around half of the studies (50.5 per cent) in their sample were about the United States and around 17 per cent were about Europe. Only around 7 per cent of the studies were about Asia.

Second, existing research has given little focus to false information on mobile instant messaging services (MIMS), such as WhatsApp, Telegram, WeChat and Line. In the same review, Arqoub et al. (2020) found a skew in the type of media platforms that existing research has focused on. The majority of the studies looked at fake news on traditional media like television (16.5 per cent), as well as on social media platforms like Twitter (10.7 per cent) and Facebook (7.8 per cent). Only one study out of their sample of over a hundred peer-reviewed articles investigated fake news on WhatsApp. One possible reason why existing studies have largely focused on open communication platforms (e.g., Facebook and Twitter) is better access to data, compared with closed communication platforms like MIMS.

Third, most of the existing research has been done from the perspective of communication studies, media studies and journalism (Arqoub et al., 2020). Di Domenico et al. (2021) also found that the top three disciplines in which research articles on fake news were published were psychology, information technology/computer science and communication. Consequently, existing studies have placed a stronger emphasis on the micro-level aspects of the issue, focusing on individuals as audiences or information consumers and examining their interactions with different media and media content. Some prominent examples of micro-level research themes include: (1) the types of false information being communicated (e.g., format, genre, topic) and their dissemination and propagation patterns (e.g., by looking at users’ information sharing behaviours); (2) the technological features and affordances that enable such communications (e.g., bots, algorithms); and (3) the impact of false information on information consumers (e.g., attitudes towards and perceptions of credibility and trustworthiness, and their ability to discern real from false information).

In the last few years, scholars have started to address these research gaps. There has been a growing body of literature on false information on MIMS, for two main reasons. First, MIMS are increasingly being used for news and information sharing, especially among close interpersonal networks, where such high-trust environments exacerbate the spread of falsehoods.
According to the 2021 Reuters Institute Digital
News Report, social networking sites like Facebook became significantly less relevant for news consumption in 2020, whereas MIMS like WhatsApp and Telegram attracted more users for keeping up with the news (Newman et al., 2021). Second, the end-to-end encryption technology on MIMS makes false information detection and verification especially challenging on these communication platforms (a minimal code sketch below illustrates why), thus demanding more research to better understand the problem.

Although this shift in research focus is commendable, preliminary work regarding false information on MIMS still shares the same research gaps with the broader academic scholarship on false information. Existing research focuses largely on the West (e.g., the Americas), despite the fact that users from other geographical regions, particularly Asia, have a predominantly mobile-mediated form of access to the internet. Compared with social networking sites like Facebook, MIMS like WhatsApp also drew the greatest concern as contributors to the spread of false information in countries like Indonesia and India (Newman et al., 2021).

Similarly, existing research about false information on MIMS has paid greater attention to the micro-level aspects of the phenomenon. One prominent strand of research focuses on identifying and documenting the various types of false information circulated on MIMS. Machado et al. (2019) performed a content analysis of data collected from over a hundred public WhatsApp groups in the lead-up to the 2018 Brazilian Presidential Election and developed a typology for classifying political misinformation. Other studies focused on specific formats of false information, including visual misinformation (Garimella & Eckles, 2017), textual misinformation (Resende et al., 2019) and audio misinformation (Kischinhevsky et al., 2020), to understand whether certain formats gain better traction on MIMS, thereby elucidating the specific informational cues that drive their spread. Another strand of research focuses on identifying the attitudinal factors that drive the spread of false information on MIMS. Herrero-Diz et al. (2020) investigated teenagers’ motivations behind sharing fake news on WhatsApp and found that factors such as personal interest in a topic, trust in the information source and the appearance of a piece of information affected teenagers’ fake news sharing behaviours. Similarly, Talwar et al. (2019) examined the impact of various individual motivations on the sharing of fake news on WhatsApp, such as people’s tendency to engage in social comparison, their desire to assuage a sense of “fear of missing out” and the human inclination to engage in self-disclosure online.
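To make the encryption point concrete, the following minimal Python sketch shows why a service that relays only ciphertext cannot inspect messages for falsehoods. It is illustrative only: it uses the symmetric Fernet scheme from the widely available cryptography library as a simplified stand-in for the asymmetric, double-ratchet Signal protocol that MIMS like WhatsApp actually employ.

from cryptography.fernet import Fernet

# The key lives only on the two endpoints; the relay server never sees it.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)

ciphertext = sender.encrypt(b"Forwarded: miracle herb cures COVID-19")

# All the platform (or a fact checker or regulator compelling the platform)
# can observe in transit is opaque bytes -- no keywords to scan or verify.
print(ciphertext[:40])

# Only the recipient, who holds the key, recovers the plaintext.
receiver = Fernet(shared_key)
print(receiver.decrypt(ciphertext).decode())

Because detection and correction must therefore happen at the endpoints (users’ own devices) rather than on the platform, research on MIMS cannot simply reuse methods developed for open platforms like Twitter and Facebook.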
7.1.1 From Micro to Macro Through a Sociotechnical Lens

While recent attempts to plug gaps in the academic literature must continue, more research that goes beyond understanding the micro is also much needed. Specifically, more needs to be done to understand how macro factors shape the social environments in which individuals interact with false information on MIMS, and how that in turn affects people’s vulnerabilities to false information. To do so, I use a sociotechnical lens to identify macro factors that are salient (but
understudied) in the context of false information on MIMS in Asia, and point out potential future research directions.

The term “sociotechnical” stems from the field of organisational development, which emphasises the interrelatedness of, and interactions between, the social and technical subsystems within an organisation, and how each influences the other in organisation design. Sociotechnical ways of thinking have since been borrowed by other fields; in media studies, for instance, they have been used in tandem with Domestication Theory to understand how the interconnectedness of humans and technologies in modern households reshapes familial roles and relationships (Kayany & Yelsma, 2000). Sociotechnical frameworks have also been used to guide research on misinformation. Marwick (2018) adopted a sociotechnical model of understanding how misinformation spreads through social media, which involved examining (1) actors: how people’s identities influence the meaning-making of the information they receive; (2) messages: how media content is framed to advance certain economic or political agendas; and (3) affordances: how technological features of different media platforms (e.g., television, newspapers, social media algorithms) affect actors and messages. In his book Lie Machines, Howard (2020) conceptualises computational propaganda as a sociotechnical system that is “produced by an assembly of social media platforms, people, algorithms, and big data tasked with the manipulation of public opinion” (p. 31) and argues that observing the interactions between the social and the technological provides a stronger analytical frame.

Guided by this sociotechnical approach, this chapter identifies and discusses three macro factors that complicate the problem of false information on MIMS in Asia—(1) user behaviour: the rise of a “mobile-first, mobile-centric” demographic; (2) technological: the rise of super-apps; and (3) regulatory: the reliance on legal approaches to tackle false information.
7.2 “Mobile-First, Mobile-Centric” News and Information Consumption and Invisible Illiteracies

Asia is witnessing a significant rise in a “mobile-first, mobile-centric” demographic of users, an understudied factor contributing to the proliferation of false information on MIMS in the region. This “mobile-first, mobile-centric” group of users refers to individuals who have their first encounter with the internet via mobile devices (instead of personal computers), have access to the internet almost exclusively via mobile devices, or both. Compared with users in many other parts of the world who have gradually shifted from using desktops and laptops to using smartphones to access the internet, Asia is experiencing an unprecedented wave of users who have skipped the use of personal computers entirely and have moved directly to using mobile phones for going online (Clark et al., 2017; Mihindukulasuriya, 2021). Some scholars refer to this phenomenon as “mobile leap-frogging” (Napoli & Obar, 2013; Puspitasari & Ishii, 2016). This section discusses how this rise in a “mobile-first,
mobile-centric” demographic of users in Asia is both a symptom of existing digital divides and a potential cause of their further deepening, and how both conceptualisations impact individuals’ vulnerability to misinformation.

Numerous reports have demonstrated the rapid pace of digitalisation in Asia, often accompanied by a significant increase in the number of mobile internet users in the region. According to a report by McKinsey Global Institute (2019), the top three countries experiencing the fastest digital growth based on McKinsey’s Country Digital Adoption Index were all Asian countries, namely Indonesia, India and China. The index compares the rate of digital growth among 17 emerging digital economies by measuring aspects such as digital reach, which refers to the growth in the number of mobile devices and internet users, and the increase in data consumption per user in each country. Similarly, GSMA’s (2021) Digital Societies in Asia-Pacific report stated that mobile devices have become the primary modality used in Asia-Pacific to access the internet and digital services, as reflected by an almost doubling of the number of mobile internet subscribers in the region, from 702 million in 2015 to 1,233 million in 2021. Furthermore, within Southeast Asia, a third of these mobile internet subscribers are new users as a result of the COVID-19 pandemic. The We Are Social (2021) report also found that Asia had one of the greatest year-on-year growth rates in internet users compared with other regions in the world, and that in most emerging Asian markets, many users access the internet primarily through their mobile phones.

Mobile communication and mobile news and information consumption are prominent activities of this “mobile-first, mobile-centric” user demographic. The same We Are Social (2021) report found that users in countries like the Philippines, Indonesia, Malaysia and Thailand had spent more time than the global median on apps like WhatsApp, Facebook Messenger, Line and Telegram. Similarly, according to the 2021 Reuters Institute Digital News Report, while almost three-quarters (73 per cent) of survey respondents across the world said they accessed news information via smartphones (up from 69 per cent in 2020), this trend was particularly salient in Asia—80 per cent of survey respondents from Indonesia said smartphones were their main device for obtaining news information, compared with an average of only 54 per cent of survey respondents from European Union member states. The report also highlighted how mobile news aggregator apps (e.g., Kakao Talk News, Line News) play a significantly more important role in driving mobile news consumption habits in Asia than in the West (Newman et al., 2021). Another study by Wei and Lo (2021) that looked at mobile news consumption in four leading Asian cities—Shanghai, Singapore, Hong Kong and Taipei—also demonstrated a rise in the number of mobile news users and the frequency of mobile news consumption. For example, 57 per cent of users in Taipei used Line for reading the news; 75 per cent of WeChat users in Shanghai used WeChat for reading the news, with two-thirds spending 40 minutes every day consuming mobile news.

Users who are “mobile-first, mobile-centric” may be particularly vulnerable to false information for two key reasons.
First, existing research suggests that a “mobile-first, mobile-centric” use pattern is often symptomatic of a pre-existing lack of digital and information literacies due to underlying digital inequalities, thus predisposing
people to being more susceptible to false information. Studies have shown that people from lower education and socioeconomic backgrounds are more likely to use only their smartphones to access the internet (Correa et al., 2020; Gohain, 2021; Gui & Gerosa, 2021; Napoli & Obar, 2014; Tsetsi & Rains, 2017). Furthermore, other studies show that this very same demographic is also more likely to share and believe online misinformation (Apuke & Omar, 2020; Pan et al., 2021; Rampersad & Althiyabi, 2020; Soon & Goh, 2021) and possesses poorer news and information literacy skills (Adler, 2014; Allcott & Gentzkow, 2017).

In addition, many “mobile-first, mobile-centric” users may also demonstrate a poorer understanding of what comprises the internet, what accessing the internet entails, and by extension, the implications of accessing information online. A 2019 Pew Research Center survey on “unconscious internet users”—i.e., people who say that they do not use the internet but in fact do go online—found that the Philippines had the highest proportion of survey respondents (among respondents from 11 developing countries across the globe) who used Facebook but reported that they were not using the internet. The survey also found that the majority of these “unconscious internet users” lacked computers and laptops, and thus accessed the internet primarily via mobile devices like smartphones (Silver & Smith, 2019). Discussing the Indian context, Badrinathan (2021) points out that although the availability of cheap smartphones and data plans in India has led to better access to the internet and news information, this has paradoxically made first-time mobile internet users potentially less informed and particularly vulnerable to misinformation, owing to their unfamiliarity with the internet and its perceived novelty. Badrinathan (2021) argues that “the expansion of Internet access and smartphone availability in India … generate[d] the illusion of a mythic nature of social media, underscoring a belief that if something is on the Internet, it must be true” (p. 1328).

The second reason why “mobile-first, mobile-centric” users may be particularly vulnerable to false information is that a continued reliance on mobile devices to access the internet may lead to monolithic use patterns that translate into limited digital competencies, including inadequate digital and information literacies, which can potentially worsen one’s vulnerability to false information. For example, Geeng et al. (2020) conducted qualitative interviews with frequent users of social media and found that a key reason why people were unwilling to further investigate the veracity of Facebook or Twitter posts viewed on mobile devices was that it was harder to perform information verification than on desktop browsers. Such illiteracies are sometimes described as “invisible” because they are often overlooked: mobile internet users can access the internet and are hence assumed not to have been left behind by the digital divide (Lim, 2018). However, this is a problematic assumption.
Lim and Loh (2019) found that even in countries with a high degree of internet penetration like Singapore, underprivileged youths—whose access to the internet is predominantly mediated by mobile devices and smartphones—tend to perform internet activities largely confined to using social messaging apps (e.g., WhatsApp, Snapchat, Instagram), as well as entertainment apps like YouTube, with limited navigation of the broader online space. In other words, a mobile-centric mode of accessing the internet tends to “reproduc[e] existing practices of communication
and entertainment rather than capital-enhancing purposes … in turn affect[ing] … a different set of digital skills [that revolves] around information-based competencies that prize the effective and critical marshalling of diverse bodies of knowledge” (Lim & Loh, 2019, p. 139).

Similar outcomes have also been observed in other parts of the world (Hyde-Clark & Tonder, 2011; Napoli & Obar, 2014). A case study of Chile by Correa et al. (2020) found that people who only accessed the internet via mobile devices possessed poorer digital skills, and that accessing the internet via computers better facilitated the development of digital skills due to the wider range of affordances provided by computers, such as allowing more in-depth forms of information-seeking. Similarly, a 2016 report by the British telecommunications regulator, Ofcom, found that people who were “smartphone by default” struggled to develop key digital skills, such as effective online information navigation and management. Those who were “smartphone by default” also found it particularly challenging to compare multiple sources of information due to the limited amount of information that can be displayed on a smartphone screen at any one time. They were also less likely to engage with news information from a diverse range of sources and would often rush through their information-seeking activities due to concerns about limited mobile data (Ofcom, 2016). In other words, engaging with information via mobile devices is likely to be less immersive (Humphreys et al., 2013), more akin to “skimming the surface” (Isomursu et al., 2007, p. 262) and significantly more constrained (Ghose et al., 2013) than when performed on computers. Relying on a mobile-centric mode of accessing the internet may thus normalise detrimental information-seeking and information verification habits that increase people’s vulnerability to false information.

In short, the rise of Asia’s “mobile-first, mobile-centric” demographic of users potentially aggravates the problem of false information in the region. Not only do such users already possess inadequate digital literacy skills by virtue of pre-existing digital inequalities in the region, they also tend to be less familiar with, and less perceptive of, the implications of engaging with news and information online. They may also find it challenging to broaden their repertoire of essential digital literacy skills due to a monolithic way of accessing the internet. Future research should focus on investigating the extent to which this confluence of user behaviour factors increases users’ susceptibility to false information on MIMS, and on understanding how policymakers and practitioners can design tailored interventions for this unique demographic to mitigate the problem.
7.3 The Rise of Super-Apps in Asia

In addition to the rise of a “mobile-first, mobile-centric” demographic of users in Asia, current technological trends in the region, particularly in consumer-facing mobile applications, may further complicate the scourge of false information on MIMS. This section discusses two key concepts—“hyper-personalisation” and
“embeddedness”—that underpin how the rise of super-apps in Asia may impact people’s vulnerability to false information.

Asia is experiencing the rise of super-apps such as WeChat, Line, KakaoTalk, Gojek and Grab. For example, WeChat, launched ten years ago, now boasts over one billion active users in China. Super-apps are mobile applications that combine multiple functions and services—e.g., social media, communication, e-payment and transport—that are typically provided by a multitude of separate applications into a single application on users’ mobile devices, providing a seamless user experience when navigating across different services (Galloway, 2021).1 While the academic literature on super-apps remains scant, observers have highlighted a few reasons why the rise of super-apps has been more prominent in Asia than in the West. Super-apps have more room for growth in markets with a significant mobile-first userbase such as Asia, compared with places like the West, where people’s formative experiences with accessing the internet (especially via personal computers) have already been heavily shaped by pre-existing infrastructure provided by dominant companies like Microsoft and Google. The forms of convenience brought by super-apps are often designed primarily with a mobile user experience in mind, which further resonates with Asia’s mobile-centric user demographic (Dobberstein, 2021; Evans, 2018). Moreover, differences in socio-political and cultural factors between Asia and the West are also contributing factors. Some have argued that factors like stricter anti-trust regulations and stronger privacy concerns hinder super-apps from effectively appealing to users and gaining market dominance in the West (Rodenbaugh, 2020).
7.3.1 The Rise of Super-Apps and the Hyper-Personalisation of False Information

The current discussion on the rise of super-apps has largely focused on the economic aspects of the phenomenon, with inadequate attention given to the potential social implications of this technological wave. In particular, there is a lack of research that seeks to understand the extent to which the big data profiling and microtargeting capabilities of super-apps, coupled with inadequate data governance frameworks, can place users at risk of false information. By consolidating different services into a single app, super-apps collect extensive and comprehensive data about user behaviours—often across various aspects of life (e.g., financial behaviours, e-commerce consumption patterns)—thus facilitating the building of user profiles in a more seamless and integrated manner (hence, hyper-personalisation).
1 While there is a plurality of both established and emerging super-apps in Asia, this section refers to “super-apps” as those that feature a prominent social media and instant messaging function, such as WeChat, Line and KakaoTalk. Other prominent super-apps that lack this key affordance, such as Grab and Gojek, are less relevant for our discussion on MIMS and false information.
While such predictive analytics are typically used to personalise user experience (e.g., for convenience), the same capabilities can also be used to personalise the information that users receive, as in the case of news information search results on WeChat (Chen, 2019). In the same vein, this may also facilitate the microtargeting of misinformation at specific audiences by appealing to their worldviews and emotions and by nudging them to disregard the authenticity and veracity of the information (Ivanova, 2020).

The Cambridge Analytica scandal is the poster-child example of how big data profiling and microtargeting can be used to push personalised misinformation to influence social media users’ attitudes and behaviours in the context of elections. The controversy gained global attention after investigations revealed that the Facebook data of around 87 million users in the United States (US) were inappropriately harvested and analysed to create psychographic profiles of users and to microtarget voters with political advertisements in the lead-up to the 2016 US Presidential Election (Cadwalladr & Graham-Harrison, 2018; Confessore, 2018). In response to the scandal and other controversies in the landscape, lawmakers around the globe—most prominently in Europe—have been increasingly cracking down on the practices of big tech companies. In addition to the landmark enforcement of the General Data Protection Regulation (GDPR) in 2018, European Union (EU) regulators also unveiled the Digital Services Act, the Digital Markets Act and the Artificial Intelligence (AI) Act. Specifically, the Digital Services Act aims to establish EU-wide standards for regulating online advertising on digital platforms and to impose transparency measures on platforms’ algorithms (Lomas, 2021; Sartor et al., 2021). In tandem, private companies themselves have also been engaging in greater self-regulation. For example, Apple launched its App Tracking Transparency framework, which made user tracking and the sharing of user data from third-party applications with other companies for the purpose of targeted advertising more difficult (Haskell-Dowland & Hampton, 2021). Google also announced that it would phase out third-party cookies in its Chrome browser by 2024 (Morrison, 2021).

In contrast, Asia lacks a similarly consistent digital governance framework across the region. In fact, the data protection and privacy regimes of many developing Asian countries are still evolving in response to rapid shifts in the digital space. According to the 2020 Asia-Pacific Privacy Guide by Deloitte, the privacy regulatory frameworks of countries like Bangladesh, Cambodia and Myanmar still lack key attributes like the protection of personal data from misuse and the protection of individuals’ rights to object to the processing of personal data (Deloitte, 2020). While privacy is not a panacea for combatting false information, scholars have argued that enforcing privacy rights can help address the root causes of false information by minimising the extent to which personal data is misused for big data profiling and microtargeting (Ivanova, 2020). The rise of super-apps in Asia poses a different challenge relating to data and privacy. While current regulatory approaches address the issue of tracking user behaviours across different websites and across different apps, super-apps have the potential to circumvent such regulations as the data about user behaviours in different contexts and services are now collected within a single application.
In short, without the appropriate data governance frameworks,
super-apps will continue to tap on their big data profiling and microtargeting capabilities and potentially supercharge the spread of hyper-personalised false information on MIMS.
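To make the mechanism concrete, the following minimal Python sketch—using entirely hypothetical users, services and signal names—illustrates how consolidating per-service behavioural signals into a single profile enables the kind of segment matching that underlies microtargeting:

from collections import defaultdict

# Behavioural events that standalone apps would each see only a slice of.
# All data and field names here are hypothetical, for illustration only.
events = [
    {"user": "u1", "service": "payments",  "signal": "low_income"},
    {"user": "u1", "service": "messaging", "signal": "health_group_member"},
    {"user": "u1", "service": "news_feed", "signal": "reads_vaccine_stories"},
    {"user": "u2", "service": "payments",  "signal": "frequent_traveller"},
]

# Inside a super-app, every slice lands in one consolidated profile per user.
profiles = defaultdict(set)
for event in events:
    profiles[event["user"]].add(event["signal"])

# Microtargeting: select users whose combined profile matches a segment
# that a tailored piece of (mis)information is crafted to appeal to.
target_segment = {"health_group_member", "reads_vaccine_stories"}
targets = [user for user, signals in profiles.items() if target_segment <= signals]
print(targets)  # ['u1']

The point of the sketch is structural rather than technical: no single-purpose app holds all three signals about u1, but the consolidated profile does, and it is precisely this hyper-personalisation advantage that weak data governance leaves unchecked.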
7.3.2 The Rise of Super-Apps and Their Increasing Embeddedness in People’s Lives

In a highly saturated market of mobile apps, the goal of super-apps is to become more deeply integrated into people’s lives by providing users with greater convenience. Another understudied social implication of the rise of super-apps is the extent to which this increasing embeddedness of super-apps in people’s lives affects their susceptibility to false information.

Existing studies suggest that an increased reliance on, and a high intensity of use of, platforms can influence people’s online responses and susceptibility to false information. Schmidt et al. (2021) investigated different types of user behaviours towards fake news on Facebook and found that the more frequently respondents used Facebook, the more likely they were to engage with fake news on the platform, whether by liking, commenting on or sharing false information. Dabbous et al. (2021) found that respondents’ frequency of use of a social media platform had a direct positive impact on their trust in the platform as a source of information. Furthermore, research also suggests that the more deeply embedded a piece of technology is in people’s lives, the less critical people may be of its potential or invisible harms. Southerton and Taylor (2020), who conducted qualitative interviews with youths to understand the relationship between social media routines and youths’ attitudes towards privacy and data surveillance, found that the “deep embeddedness of social media in the lives of young people [was] made possible by habitual and repeated ‘checking in’ and engagement, which at the same time reduces awareness of the intense surveillance of the platform” (p. 7).

To put it differently, while the rise of super-apps in Asia undoubtedly brings about efficiency and convenience, more research is needed to better understand the overlooked social implications of this technological wave. First, more needs to be done to examine the extent to which super-apps can supercharge key drivers of the spread of false information, such as big data profiling and microtargeting, and whether existing data and privacy protection regimes in the region are adequate to address these issues. Second, relating to the concept of embeddedness, more needs to be done to investigate the impact of increased reliance on super-apps on users’ trust in platforms—and by extension the instant messaging environments in super-apps through which users receive news and information—and their savviness in recognising potential digital harms, which can in turn influence their vulnerability to the false information they receive on MIMS.
7.4 Legitimacy and Practicality Challenges in Legislative Approaches

Finally, the reliance on legal tools as a solution to the scourge of false information may further complicate the problem due to legitimacy and practical challenges, which are especially pronounced in the context of Asia. The pervasiveness of false information online has become a common justification for introducing new legal instruments across the world, including in Asia. According to a press release on online content moderation and internet shutdowns by the United Nations Human Rights Office, about 40 new laws relating to social media have been passed worldwide in the last two years, with 30 more under consideration (United Nations Human Rights Office, 2021). While legislative approaches undoubtedly play a role in combatting false information, this global drive to regulate the internet has also led to concerns about censorship and the infringement of freedom of speech. The 2021 Freedom on the Net report by Freedom House found that global internet freedom had declined for the 11th consecutive year, as the push to regulate social media was often exploited to clamp down on freedom of expression (Freedom House, 2021). This shift towards legislative approaches to counter fake news is also observed in Asia. Smith et al. (2021) conducted a legal review of the development of anti-fake news legislation in Southeast Asia and found that all ASEAN (Association of Southeast Asian Nations) jurisdictions have either enacted new laws or amended existing cybercrime and telecommunications laws to tackle fake news.

Governments in Asia face the distinct challenge of building and sustaining the legitimacy of such top-down regulatory approaches. Discussing the impact of political trust on the effectiveness of fake news legislation, Goh and Soon (2019) point out that governments in Southeast Asia may find it particularly challenging to legitimately tackle online falsehoods using repressive laws, owing to stronger cynicism among the public, for two main reasons. First, existing legal frameworks in many Southeast Asian countries already infringe upon freedom of expression (e.g., defamation laws, speech laws), and the further adoption of heavy-handed legal approaches to combat false information is often perceived as an attempt by governments to expand their control over speech and the internet. In examining the variations in governance approaches to fake news in Asia–Pacific countries, Neo (2021) argues that broad and sweeping anti-fake news legislation is less effective in addressing false information given that most of it has been “more motivated by political circumstances than by the objective resolution of the issue” (p. 3). In a case study of Singapore’s fake news law—the Protection from Online Falsehoods and Manipulation Act (POFMA)—Özdan (2021) concluded that POFMA may violate fundamental human rights like the right to freedom of expression and opinion, and that policymakers should adhere to the principles of upholding international human rights to build and sustain the legitimacy of the law. Similarly, Anansaringkarn and Neo (2021) argue that Thailand’s regulatory measures to combat fake news significantly increase state power without the appropriate checks and balances, making the approach far from ideal for addressing fake news legitimately.
Second, Goh and Soon (2019) also argue that some governments in Asia, which are themselves producers of false information, face a bigger conundrum. For example, during the 2019 Philippine midterm election, the Duterte administration deployed a sophisticated multi-platform social media machinery comprising fake news, bots, trolls, and online celebrities and influencers, to seed pro-Duterte content and to harass dissidents and critics of Duterte (Howard, 2020; Ong et al., 2019). Similarly, in India, the spread of anti-Muslim and Islamophobic content, such as the Love Jihad conspiracy, has been found to be associated with Prime Minister Narendra Modi’s Bharatiya Janata Party (Bhattacharya, 2021). In such instances, governments in Asia have to “contend with the dilemma of how to legitimately tackle [false information], given that much of its credibility has been tarnished by the fact that they have been perpetrators of [false information] in the first place” (Goh & Soon, 2019, p. 527). Attempts to extend such regulatory approaches to tackle false information on MIMS add another layer of complexity and sensitivity, owing to the potential over-surveillance of the personal realm and the infringement of other fundamental rights like privacy. For example, the Cambodian government recently expanded its monitoring of social media to private messaging apps in a bid to target fake news, raising concerns about the infringement of privacy and freedom of expression (Jamal, 2021). India’s new Intermediary Liability Rules, which led to WhatsApp blocking more than two million Indian users in a month for violating the new rules, have also been seen by critics as a move by the government to clamp down on political dissent (Channel NewsAsia, 2021).

On top of legitimacy challenges, governments also have to contend with the practical challenges of relying on legal tools to combat false information on MIMS. A prominent case in point is Singapore’s POFMA. Passed in May 2019, POFMA empowers ministers to decide if a piece of information is false and to order either a “Correction Direction” or a “Stop Communication Direction” against the falsehood if it is deemed to undermine the public interest. In the former, the falsehood is allowed to remain online for members of the public to “see both sides of the story” and make their own informed judgement about the matter. In contrast, for highly egregious falsehoods, a “Stop Communication Direction” may be issued instead to ensure that the falsehood is taken down from the internet, thereby swiftly stemming its spread. According to POFMA’ed, a database that documents instances of the use of POFMA, the law has been invoked nearly 90 times since it was passed and enforced (Teo, 2021).2 A large majority of these instances involved invoking the law against online falsehoods circulated on Facebook—around 70 per cent of the falsehoods targeted by POFMA had circulated there. Online falsehoods circulated on websites (e.g., Truth Warriors, Singapore Uncensored) and discussion forums (e.g., HardwareZone) were the next most frequently targeted by the law. Thus far, POFMA has yet to target false information circulated on MIMS like WhatsApp and Telegram.
2 This information is accurate as at 24 December 2021.
This is despite the fact that the government has clearly maintained that the ambit of POFMA includes closed communication channels that are end-to-end encrypted, given that the law takes a platform-agnostic approach. As Senior Minister of State for Law Edwin Tong said in a parliamentary debate, “[POFMA] therefore recognises that platforms that are closed are not necessarily private. They can be used not only for personal and private communications, but also to communicate with hundreds or thousands of strangers at a time” (Lim, 2019). This is also despite the fact that multiple COVID-19-related online falsehoods had circulated on MIMS, which subsequently caught the attention of the government and were addressed through its fact-checking arm, Factually (Choo & Koh, 2020). Some examples include clarifications relating to falsehoods alleging that the Ministry of Health had amended its public health protocols after discovering that COVID-19 was caused by a bacterium instead of a virus, and falsehoods claiming that COVID-19 was spread through postal articles. In other words, although POFMA has been designed to take a platform-neutral approach to combatting online falsehoods, the law has largely been used on social networking sites and websites, but not on MIMS.

Experts have opined that legal approaches have limited efficacy on MIMS for a few reasons. First, it is difficult to trace how and where each falsehood started on MIMS, making it challenging for a law like POFMA, designed to be used in a calibrated and targeted fashion, to be effective (Choo & Koh, 2020). Second, the platform affordances available on social networking sites that allow the issuance of Correction Directions do not map neatly onto the affordances of messaging apps, making the issuance of correction notices ineffective and impractical on MIMS (Seah & Tham, 2021). Finally, pursuing a top-down approach to tackling false information, in general, may result in unintended and counterproductive consequences, such as paradoxically driving false information and its producers into online spaces that are more opaque, thus making timely detection and verification even more challenging. For example, administrators of prominent public Telegram groups, where anti-vaccination content has been circulating, shared that “[POFMA] forced their move onto the encrypted platform” as the law had compelled the removal of content on the platforms on which they had previously been operating (i.e., social networking sites and websites) (Guest et al., 2021).

In short, the reliance on laws to address false information poses legitimacy challenges to governments, especially in Asia, and practical challenges that limit the effectiveness of legal solutions in the context of MIMS. Adopting a legislative approach to address falsehoods circulating on open communication platforms may also have unintended spill-over effects on closed communication platforms. While no simple feat, future research should seek to understand how anti-fake news laws impact the state of false information, both directly and indirectly, and whether these effects differ across platforms (e.g., social networking sites versus MIMS). For example, preliminary research has found that the removal of misinformation on one platform can lead to an increased production of similar misinformation on other platforms, raising concerns about the overall effectiveness of such regulatory interventions (Mitts et al., 2022). Further research explicating the impact of anti-fake news laws can help inform the design and implementation of legal interventions and minimise counterproductive outcomes.
7.5 Conclusion

By leveraging a sociotechnical analytical lens, this chapter has argued that macro factors can complicate the scourge of false information on MIMS in Asia. First, the rise of a “mobile-first, mobile-centric” demographic of users may worsen the problem in the region, as many of these users not only lack adequate digital literacy skills to protect themselves from online falsehoods but may also unconsciously entrench digital illiteracies by routinising the narrow information-seeking and information verification practices afforded by mobile devices. Second, the rise of super-apps in Asia may supercharge technological capabilities like big data profiling and microtargeting, thus enabling hyper-personalised forms of false information, especially under weak data and privacy protection regimes. With super-apps (and the MIMS within super-apps) becoming more deeply embedded in people’s lives, people’s increased reliance on and trust in these platforms may simultaneously make them less critical of the information that they receive. Third, relying on legal tools to combat false information poses both legitimacy and practical challenges to governments in Asia, and in some cases even results in unintended and paradoxical outcomes that compromise the effectiveness of such solutions. Moving forward, future research should investigate each of these factors more closely, and continue to identify new factors unique to Asia to provide a more contextualised and regional understanding of the problem of false information.
References

Abu Arqoub, O., Elega, A. A., Efe Özad, B., Dwikat, H., & Oloyede, F. A. (2020). Mapping the scholarship of fake news research: A systematic review. Journalism Practice, 1–31.
Adler, B. (2014, March 6). News literacy declines with socioeconomic status. Columbia Journalism Review. Retrieved from https://archives.cjr.org/news_literacy/teen_digital_literacy_divide.php
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
Anansaringkarn, P., & Neo, R. (2021). How can state regulations over the online sphere continue to respect the freedom of expression? A case study of contemporary ‘fake news’ regulations in Thailand. Information & Communications Technology Law, 30(3), 283–303.
Apuke, O. D., & Omar, B. (2020). Modelling the antecedent factors that affect online fake news sharing on COVID-19: The moderating role of fake news knowledge. Health Education Research, 35(5), 490–503.
Badrinathan, S. (2021). Educative interventions to combat misinformation: Evidence from a field experiment in India. American Political Science Review, 115(4), 1325–1341.
Bhattacharya, A. (2021, October 26). Despite the many warning signs, Facebook hasn’t done enough to fix its hate-speech problem in India. Scroll. Retrieved from https://scroll.in/article/1008616/despite-the-many-warning-signs-facebook-hasnt-done-enough-to-fix-its-hate-speech-problem-in-india
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Channel NewsAsia. (2021, July 16). WhatsApp blocks 2 million Indian users over messaging violations. Retrieved from https://www.channelnewsasia.com/news/asia/whatsapp-blocks-2-million-indian-users-over-messaging-violations-15232952
Chen, W. (2019, September 23). WeChat embraces new feed algorithm to boost Official Accounts content. KrASIA. Retrieved from https://kr-asia.com/wechat-embraces-a-new-feed-algorithm-to-boost-the-platform-official-accounts-content
Choo, D., & Koh, A. (2020, May 16). Pofma—but not for WhatsApp so far. Our Class Notes. Retrieved from https://web.archive.org/web/20200928044955/www.ourclassnotes.com/post/pofma-but-not-for-whatsapp-so-far
Clark, M., Coward, C., & Rothschild, C. (2017). Mobile information literacy: Building digital and information literacy skills for mobile-first and mobile-centric populations through public libraries [Paper presentation]. 2nd AfLIA Conference & 4th Africa Library Summit proceedings, Yaoundé, Cameroon.
Comscore. (2017, December 17). Vietnam’s mobile-first millennials and multi-platform consumption. Retrieved from https://www.comscore.com/Insights/Infographics/Vietnams-Mobile-First-Millennials-and-Multi-Platform-Consumption
Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The scandal and the fallout so far. The New York Times. Retrieved from https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
Correa, T., Pavez, I., & Contreras, J. (2020). Digital inclusion through mobile phones?: A comparison between mobile-only and computer users in internet access, skills and use. Information, Communication & Society, 23(7), 1074–1091.
Dabbous, A., Aoun Barakat, K., & de Quero Navarro, B. (2021). Fake news detection and social media trust: A cross-cultural perspective. Behaviour & Information Technology, 0, 1–20.
Deloitte. (2020). The Asia Pacific privacy guide 2020–2021: Stronger together. Retrieved from https://www2.deloitte.com/content/dam/Deloitte/ph/Documents/risk/ph-risk-asia-pacific-privacy-guide.pdf
Di Domenico, G., Sit, J., Ishizaka, A., & Nunan, D. (2021). Fake news, social media and marketing: A systematic review. Journal of Business Research, 124, 329–341.
Dobberstein, L. (2021, October 25). Asia’s ‘superapps’ bundle ride-share, food delivery, even financial services—and they’re beating big tech. The Register. Retrieved from https://www.theregister.com/2021/10/25/asias_superapps/
Evans, M. (2018, March 21). 4 reasons super apps like WeChat would struggle in the U.S. Forbes. Retrieved from https://www.forbes.com/sites/michelleevans1/2018/03/21/four-reasons-why-super-apps-like-wechat-would-struggle-in-the-us/?sh=30ced72f3154
Freedom House. (2021). Freedom on the Net 2021: The global drive to control big tech. Retrieved from https://freedomhouse.org/sites/default/files/2021-09/FOTN_2021_Complete_Booklet_09162021_FINAL_UPDATED.pdf
Galloway, S. (2021, November 24). Super-apps are inevitable: Get ready for the first $10 trillion tech company. New York Magazine. Retrieved from https://nymag.com/intelligencer/2021/11/facebook-metaverse-super-apps.html
Garimella, K., & Eckles, D. (2017). Image based misinformation on WhatsApp. In Proceedings of the Thirteenth International AAAI Conference on Web and Social Media (ICWSM 2019), Washington DC, USA.
Geeng, C., Yee, S., & Roesner, F. (2020). Fake news on Facebook and Twitter: Investigating how people (don’t) investigate. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–14). Honolulu, USA.
Ghose, A., Goldfarb, A., & Han, S. P. (2013). How is the mobile Internet different? Search costs and local activities. Information Systems Research, 24(3), 613–631.
Goh, S., & Soon, C. (2019). Governing the information ecosystem: Southeast Asia’s fight against political deceit. Public Integrity, 21(5), 523–536.
Gohain, M. P. (2021, July 21). 23% of urban population has access to computers, only 4% of rural: Survey. Times of India. Retrieved from https://timesofindia.indiatimes.com/india/23-of-urban-population-has-access-to-computers-only-4-of-rural-survey/articleshow/77075283.cms
GSMA. (2021). Digital societies in Asia Pacific: Accelerating progress through collaboration. Retrieved from https://www.gsma.com/asia-pacific/wp-content/uploads/2021/10/181021-Digital-Societies-in-Asia-Pacific-2021_final.pdf
Guest, P., Firdaus, F., & Danan, T. (2021, July 28). “Fake news” laws are failing to stem Covid-19 misinformation in Southeast Asia. Rest of World. Retrieved from https://restofworld.org/2021/fake-news-laws-are-failing-to-stem-covid-19-misinformation-in-southeast-asia/
Gui, M., & Gerosa, T. (2021). Smartphone pervasiveness in youth daily life as a new form of digital inequality. In E. Hargittai (Ed.), Handbook of digital inequality (pp. 131–147). Edward Elgar Publishing.
Haskell-Dowland, P., & Hampton, N. (2021, April 28). Apple’s new ‘app tracking transparency’ has angered Facebook. How does it work, what’s all the fuss about, and should you use it? The Conversation. Retrieved from https://theconversation.com/apples-new-app-tracking-transparency-has-angered-facebook-how-does-it-work-whats-all-the-fuss-about-and-should-you-use-it-159916
Herrero-Diz, P., Conde-Jiménez, J., & Reyes de Cózar, S. (2020). Teens’ motivations to spread fake news on WhatsApp. Social Media + Society, 6(3), 2056305120942879.
Howard, P. N. (2020). Lie machines: How to save democracy from troll armies, deceitful robots, junk news operations, and political operatives. Yale University Press.
Humphreys, L., Von Pape, T., & Karnowski, V. (2013). Evolving mobile media use: Uses and conceptualizations of the mobile Internet. Journal of Computer-Mediated Communication, 18, 491–507.
Hyde-Clark, N., & Tonder, V. T. (2011). Revisiting the leapfrog debate in light of current trends of mobile phone usage in the greater Johannesburg area, South Africa. Journal of African Media Studies, 3(2), 263–276.
Isomursu, P., Hinman, R., Isomursu, M., & Spasojevic, M. (2007). Metaphors for the mobile Internet. Knowledge, Technology & Policy, 20(4), 259–268.
Ivanova, Y. (2020). Can EU data protection legislation help to counter “fake news” and other threats to democracy? In S. Katsikas & V. Zorkadis (Eds.), E-Democracy—safeguarding democracy and human rights in the digital age. e-Democracy 2019. Communications in Computer and Information Science, vol. 1111. Springer, Cham.
Jamal, U. (2021, February 18). Cambodia expands monitoring of social media to private messaging apps, citing fake news. ASEAN Today. Retrieved from https://www.aseantoday.com/2021/02/cambodia-expands-monitoring-of-social-media-to-private-messaging-apps-citing-fake-news/
Kayany, J. M., & Yelsma, P. (2000). Displacement effects of online media in the socio-technical contexts of households. Journal of Broadcasting & Electronic Media, 44(2), 215–229.
Kischinhevsky, M., Vieira, I. M., dos Santos, J. G. B., Chagas, V., Freitas, M. D. A., & Aldé, A. (2020). WhatsApp audios and the remediation of radio: Disinformation in Brazilian 2018 presidential election. Radio Journal: International Studies in Broadcast & Audio Media, 18(2), 139–158.
Lim, A. (2019, May 7). Parliament: Fake news law covers closed platforms like chat groups and social media groups, says Edwin Tong. The Straits Times.
Retrieved from https://www.strait stimes.com/politics/parliament-fake-news-law-covers-closed-platforms-like-chat-groups-andsocial-media-groups Lim, S. S. (2018, June 6). Commentary: Singapore’s Digital Readiness Blueprint must also address ‘invisible illiteracies’. Channel NewsAsia. Retrieved from https://www.channelnewsasia.com/ news/commentary/singapore-digital-readiness-blueprint-invisible-illiteracy-10373720 Lim, S. S., & Loh, R. S. M. (2019). Young people, smartphones, and invisible illiteracies: Closing the potentiality–actuality chasm in mobile media. In E. Polson, L. S. Clark, & R. Gajjala (Eds.), The Routledge companion to media and class (pp. 132–141). Routledge.
7 Users, Technologies and Regulations: A Sociotechnical Analysis …
129
Lomas, N. (2021, October 21). Inside a European push to outlaw creepy ads. TechCrunch. Retrieved from https://techcrunch.com/2021/10/21/inside-a-european-push-to-outlaw-creepy-ads/ Machado, C., Kira, B., Narayanan, V., Kollanyi, B., & Howard, P. (2019, May). A study of misinformation in WhatsApp groups with a focus on the Brazilian Presidential Elections. In Companion Proceedings of the 2019 World Wide Web Conference (pp. 1013–1019). San Francisco, USA. Marwick, A. E. (2018). Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2(2), 474–512. McKinsey Global Institute. (2019). Digital India: Technology to transform a connected nation. Retrieved from https://www.mckinsey.com/~/media/mckinsey/business%20functions/mck insey%20digital/our%20insights/digital%20india%20technology%20to%20transform%20a% 20connected%20nation/digital-india-technology-to-transform-a-connected-nation-full-report. pdf Mihindukulasuriya, R. (2021, June 3). By 2025, rural India will likely have more internet users than urban India. The Print. Retrieved from https://theprint.in/tech/by-2025-rural-india-will-likelyhave-more-internet-users-than-urban-india/671024/ Mitts, T., Pisharody, N., & Shapiro, J. (2022). Removal of anti-vaccine content impacts social media discourse. In 14th ACM Web Science Conference 2022 (pp. 319–326). Morrison, S. (2021, March 3). Google is done with cookies, but that doesn’t mean it’s done tracking you. Recode. Retrieved from https://www.vox.com/recode/2021/3/3/22311460/google-cookieban-search-ads-tracking Napoli, P. M., & Obar, J. A. (2013). Mobile leapfrogging and digital divide policy: Assessing the limitations of mobile Internet access. Fordham University Schools of Business. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2263800 Napoli, P. M., & Obar, J. A. (2014). The emerging mobile Internet underclass: A critique of mobile Internet access. The Information Society, 30(5), 323–334. Neo, R. (2021). When would a state crack down on fake news? Explaining variation in the governance of fake news in Asia-Pacific. Political Studies Review, 0, 1–20. Newman, N., Fletcher, R., Schulz, A., Andı, S., Robertson, C. T., & Nielsen, R. K. (2021). Reuters institute digital news report 2021. Reuters Institute for the Study of Journalism. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Dig ital_News_Report_2021_FINAL.pdf Ofcom. (2016). Smartphone by default: A qualitative research report conducted by ESRO for Ofcom. Retrieved from https://www.ofcom.org.uk/__data/assets/pdf_file/0028/62929/smarphone_by_ default_2016.pdf Ong, J. C., Tapsell, R., & Curato, N. (2019). Tracking digital disinformation in the 2019 Philippine Midterm Election. New Mandala. Retrieved from https://www.newmandala.org/wp-content/upl oads/2019/08/Digital-Disinformation-2019-Midterms.pdf Özdan, S. (2021). The right to freedom of expression versus legal actions against fake news: A case study of Singapore. In A. MacKenzie, J. Rose, & I. Bhatt (Eds.), The epistemology of deceit in a postdigital era (pp. 77–94). Springer. Pan, W., Liu, D., & Fang, J. (2021). An examination of factors contributing to the acceptance of online health misinformation. Frontiers in Psychology, 12, 524. Puspitasari, L., & Ishii, K. (2016). Digital divides and mobile Internet in Indonesia: Impact of smartphones. Telematics and Informatics, 33(2), 472–483. Rampersad, G., & Althiyabi, T. (2020). 
Fake news: Acceptance by demographics and culture on social media. Journal of Information Technology & Politics, 17(1), 1–11. Resende, G., Melo, P., CS Reis, J., Vasconcelos, M., Almeida, J. M., & Benevenuto, F. (2019, June). Analyzing textual (mis) information shared in WhatsApp groups. In Proceedings of the 10th ACM Conference on Web Science (pp. 225–234). Boston MA, USA. Rodenbaugh, R. (2020, October 13). A deep dive into super apps and why they’re booming in the East and not the West. Tech in Asia. Retrieved from https://www.techinasia.com/deep-divesuper-app-booming-east-not-west
130
S. Goh
Sartor, G., Lagioia, F., & Galli, F. (2021). Regulating targeted and behavioural advertising in digital services: How to ensure users’ informed consent. European Parliament. Retrieved from https://www.europarl.europa.eu/RegData/etudes/STUD/2021/694680/IPOL_S TU(2021)694680_EN.pdf Schmidt, T., Salomon, E., Elsweiler, D., & Wolff, C. (2021). Information behavior towards false information and “Fake News” on Facebook: The influence of gender, user type and trust in social media. Universitat Regensburg, Retrieved from https://epub.uni-regensburg.de/44942/1/ isi_schmidt_et_al.pdf Seah, J., & Tham, B. (2021, May 13). Ministries of truth: Singapore’s experience with misinformation during COVID-19. Konrad Adenauer Stiftung. Retrieved from https://medium.com/digital-asia-ii/ministries-of-truth-singapores-experience-with-misinf ormation-during-covid-19-2cd60d0c4b91 Silver, L., & Smith, A. (2019, May 2). In some countries, many use the internet without realizing it. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2019/05/02/insome-countries-many-use-the-internet-without-realizing-it/ Smith, R. B., Perry, M., & Smith, N. N. (2021). Fake news in ASEAN: Legislative responses. Journal of ASEAN Studies, 9(2), 159–179. Soon, C., & Goh, S. (2021). Singaporeans’ susceptibility to false information. IPS Exchange Series No. 19. Retrieved from https://lkyspp.nus.edu.sg/docs/default-source/ips/ips-study-on-singap oreans-and-false-information_phase-1_report.pdf Southerton, C., & Taylor, E. (2020). Habitual disclosure: Routine, affordance, and the ethics of young people’s social media data surveillance. Social Media+Society, 6(2), 1–11. Talwar, S., Dhir, A., Kaur, P., Zafar, N., & Alrasheedy, M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. Journal of Retailing and Consumer Services, 51, 72–82. Teo, K. X. (2021). POFMA’ed dataset (v2021.10.09) [Data file]. Retrieved from https://pofmaed. com/ Tsetsi, E., & Rains, S. A. (2017). Smartphone Internet access and use: Extending the digital divide and usage gap. Mobile Media & Communication, 5(3), 239–255. United Nations Human Rights Office. (2021, July 14). Press briefing: Online content moderation and internet shutdowns. Retrieved from https://www.ohchr.org/Documents/Press/Press%20brie fing_140721.pdf We Are Social. (2021). Digital 2021 global overview report. Retrieved from https://wearesocialnet.s3-eu-west-1.amazonaws.com/wp-content/uploads/common/reports/digital-2021/digital2021-global.pdf Wei, R., & Lo, V. H. (2021). News in their pockets: A cross-city comparative study of mobile news consumption in Asia. Oxford University Press.
Chapter 8
No “Me” in Misinformation: The Role of Social Groups in the Spread of, and Fight Against, Fake News

Edson C. Tandoc Jr., James Chong Boi Lee, Chei Sian Lee, Joanna Sei Ching Sin, and Seth Kai Seet

Abstract Messaging apps that support interpersonal and group chats, such as WhatsApp, have become channels for information exchange. Unfortunately, they have also become channels for fake news. But if fake news spreads through chat groups, the question arises whether corrections can also be disseminated effectively in the same way. Guided by the frameworks of social identity theory and social presence theory, this study examined the impact of source familiarity (familiar versus unfamiliar) and mode of delivery (interpersonal chat versus group chat) on the perceived credibility of a correction message debunking misinformation sent on WhatsApp. Through a five-day field experiment involving 114 student participants in Singapore, this study found no main effect of either source familiarity or mode of delivery on the perceived credibility of the correction message. However, the study found a significant interaction effect: when the correction was sent to a chat group, members rated it as more credible when it was sent by a familiar source than when it was sent by a source they had never met.

Keywords Interpersonal chats · Group chats · Social identity theory · Social presence theory · Correction messages

E. C. Tandoc Jr. (B) · C. S. Lee · J. S. C. Sin
Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
e-mail: [email protected]
C. S. Lee
e-mail: [email protected]
J. S. C. Sin
e-mail: [email protected]
J. C. B. Lee
Ministry of Social and Family Development, Singapore, Singapore
S. K. Seet
Centre for Information Integrity and the Internet, Nanyang Technological University, Singapore, Singapore
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature B.V. 2023
C. Soon (ed.), Mobile Communication and Online Falsehoods in Asia, Mobile Communication in Asia: Local Insights, Global Implications, https://doi.org/10.1007/978-94-024-2225-2_8
8.1 Introduction

The rise of messaging applications has changed many aspects of social life. Similar to short message service (SMS), messaging apps allow people to send messages to each other in real time; but unlike SMS, messaging apps such as WhatsApp, as well as those embedded within social media platforms, such as Facebook Messenger, also allow group chats. Messaging apps not only enable mediated dyadic and group-based communication but also provide users with the ability to switch from one mode to the other with ease (Boczkowski et al., 2018; Tandoc et al., 2018). With newer features, such as allowing senders to see whether their messages have been delivered and seen by their intended recipients, messaging apps arguably make social interaction obligatory (Ling & Lai, 2016).

Bought by Facebook in 2014 for US$19 billion (Shead, 2019), WhatsApp has become the most-used messaging app, with more than two billion users worldwide (Porter, 2020). It has been identified as a key purveyor of misinformation and disinformation in many countries (Frayer, 2018; Sin, 2018). In countries such as India, where telecommunication companies provide WhatsApp for free to lure subscribers, WhatsApp effectively is the internet: with no access to the wider internet beyond their free WhatsApp account, many mobile phone users turn to WhatsApp as their main source of information. This is why fact-checking groups as well as news outlets have turned to WhatsApp to disseminate accurate information and corrections to misinformation and disinformation (Tardaguila, 2020).

In Singapore, WhatsApp is the most-used messaging app: a survey by the Reuters Institute for the Study of Journalism found that 88 per cent of respondents use WhatsApp in general, higher than the 70 per cent who use Facebook (Tandoc, 2021). During the COVID-19 pandemic, the Singapore government used WhatsApp as a communication channel to disseminate official updates on the outbreak (e.g., the number of new cases) and to debunk fake news that went viral (Ministry of Communications and Information, 2020). By using WhatsApp, the government was able to reach more segments of the population, including seniors, some of whom are active WhatsApp users. Unfortunately, WhatsApp has also facilitated the spread of fake news and other types of falsehoods.
8.2 Literature Review

8.2.1 Fake News and Messaging Apps

While misinformation and disinformation are both types of false information or falsehoods, misinformation stems from inadvertent sharing, while disinformation involves the intentional creation and sharing of false information (Wardle, 2018). Fake news is a specific type of disinformation, deliberately created to resemble news in order to deceive.
However, those sharing fake news subsequently may have different intentions from those of the original creators of the disinformation. Sometimes, those sharing fake news may actually have noble intentions, such as warning others. But this is also when the original intention to deceive has worked: others share fake news without knowing the original intentions of the creators (Tandoc, 2021).

At the Centre for Information Integrity and the Internet (IN-cube), a research centre based at Nanyang Technological University where we track internet behaviour in Singapore, we have conducted studies that established the important role that mobile communication and messaging apps play in how Singaporeans communicate with one another. In a survey involving 1,606 Singapore residents conducted in December 2020, almost a year after COVID-19 was first detected in Singapore, 74.4 per cent of the respondents said their use of smartphones had increased during the pandemic. Similarly, 74.5 per cent said their use of WhatsApp had increased during the pandemic (IN-cube, 2020). WhatsApp also figured prominently when we asked young adults at the earlier stages of the pandemic how they kept track of COVID-19 updates.

In another study, we examined messages sent to a WhatsApp group originally created as a platform for fact checking. Based on our content analysis of messages forwarded to the WhatsApp group between 23 January and 10 June 2020 (n = 238 messages), we found that 50.9 per cent were either completely inaccurate or a mix of accurate and inaccurate information; 35.3 per cent were completely accurate, while 13.9 per cent were difficult to verify. What was noteworthy was how the proportion of mixed messages, combining accurate and inaccurate details, increased over time as the pandemic situation worsened in Singapore.

There are many reasons why messaging apps, such as WhatsApp, have become effective platforms for the spread of different types of falsehoods. First, in many countries, such as India, a messaging app is the only gateway to online communication (e.g., some people only have free WhatsApp access bundled in their mobile phone subscription), resulting in high levels of usage and massive reach. Second, messaging apps are not as public as other social media platforms: messages exchanged are visible only among two or more users who have agreed to exchange contact details. Third, the messages exchanged within these spaces may also be considered more intimate and personal, and hence more persuasive. Finally, forwarding messages on WhatsApp is easy, but tracing the original creator of a message may be extremely difficult. While WhatsApp introduced the "forwarded" tag in 2018 to stem the spread of fake news on the platform, a mixed-method study conducted in Singapore found that some users interpret the forwarded tag as increasing the reliability of the message it accompanies, indicating that the message has not been altered and is therefore reliable (Tandoc et al., 2022). This contradicts the original intention behind the forwarded tag, which is to make message recipients more careful.
8.2.2 Messaging Apps and Fact Checking

If messaging apps facilitate the spread of fake news, the question arises whether they can also be used for fact checking. Fact-checking sites debunk misinformation and disinformation that have gone viral by posting and sharing articles that label such viral content as false (Graves & Cherubini, 2016). Studies have found that corrections help in fighting falsehoods. For example, Bode and Vraga (2015) found that exposure to news stories debunking claims that consuming genetically modified organisms makes people sick lessened misperceptions among those who had initially believed the claim. Nyhan et al. (2020) also found that people exposed to political fact checks held more accurate beliefs than those who were not exposed, even when the fact checks targeted their preferred political candidates.

However, other studies found that the effect of misinformation could continue even after exposure to a correction (Lewandowsky et al., 2012). This is similar to what others have referred to as belief persistence, when people exposed to a correction do not believe the corrective information (Thorson, 2016). It is also possible that some people might believe the correction but still revert to their original attitudes shaped by misinformation, or what Thorson (2016, p. 461) referred to as "belief echoes".

Some studies have sought to isolate the factors that affect the extent to which a correction might be effective, such as the format of corrections (Young et al., 2018) and how corrections are embedded (Sangalang et al., 2019). The perceived source of the information also exerts an important influence (Metzger et al., 2010; Sundar, 2008). For example, studies found that trustworthy sources increase acceptance of messages as well as change in people's opinions (Flanagin & Metzger, 2007; Sundar & Nass, 2001). However, operationalising source in the social media and messaging app contexts is tricky, as users are exposed to various layers of sourcing, where the immediate source of a post (e.g., a friend who forwarded a message) might be different from the original source that created the content (e.g., a news outlet). Thus, studies have distinguished between proximate and distal sources on social media (see Kang et al., 2011; Tandoc, 2018). In messaging apps such as WhatsApp, the original (distal) source of a viral post is often excluded, and hence difficult to trace, when users use the forward function to share messages, so credibility judgments often focus on the proximate source (Funke, 2018). WhatsApp users may also find it difficult to judge the credibility of a message due to a lack of quality-control mechanisms and end-to-end encryption, which makes it difficult to track the distal source of the message (de Freitas Melo et al., 2019).
8.2.3 Source Considerations on WhatsApp

Studies have found that source credibility affects how message recipients respond to what they receive on messaging apps. For example, a study found that the credibility of the source is a significant predictor of how likely a user is to forward rumours received on the messaging app WeChat (Seah & Weimann, 2020).
Proximate sources on WhatsApp and other messaging apps tend to be from one's social network, as users are able to control the people they add and interact with on these platforms. Users tend to add and interact with people they already know. For example, Morris et al. (2010) found that Facebook users tend to ask questions and obtain answers from those within their own social networks. This is because people tend to trust the opinions and answers of people they know over those of people they do not know. Indeed, Seo and Lee (2014, p. 247) argued that "the credibility of the information is also influenced by the familiarity of the information seeker with the information provider".

This impact of source familiarity on message credibility can be explained by the framework of social identity theory (SIT), which explains intergroup behaviour and accounts for relationships between groups (Hogg et al., 1995). Belonging to a social group shapes one's social identity (Tajfel & Turner, 1986). According to SIT, a group is "a collection of individuals who perceive themselves to be members of the same social category, share some emotional involvement in this common definition of themselves, and achieve some degree of social consensus about the evaluation of their group and their membership in it" (Tajfel & Turner, 1986, p. 15). As people navigate social interactions, they tend to undergo self-categorisation, grouping themselves into meaningful categories based on the interplay between cognition, social attitudes and subjective perceptions of others (Turner, 1999; Turner & Oakes, 1986; Turner et al., 1994). Embedded in self-categorisation is the understanding of in-group and out-group dynamics.

In-group and out-group dynamics affect communicative patterns. For example, in-group membership serves as a cognitive heuristic, so that individuals tend to readily accept arguments and viewpoints from members of their in-group (Chaiken, 1980). Indeed, a study found that messages sent by an in-group source were more persuasive than those sent by an out-group source even when the content of the message was irrelevant to the group (Mackie et al., 1990). In-group messages were also found to receive more content-focused processing than out-group messages (Mackie et al., 1992). Online survey experiments in India and Pakistan found that participants were more likely to reshare fact checks when these were sent to them by people with whom they had close ties or whom they perceived to be part of their in-group (Pasquetto et al., 2022). Thus, guided by the assumptions of SIT and what previous studies found in terms of the impact of source familiarity, we hypothesise that:

H1: A familiar source will be rated more credible than an unfamiliar source.

H2: A correction message will be rated more credible when it is sent by a familiar source than when it is sent by an unfamiliar source, even when controlling for the perceived credibility of the source.
8.2.4 Shifting from Personal to Group Communication

Messaging apps facilitate not just interpersonal connections but also group communication (Ling, 2017). By allowing the creation of group chats and enabling group members to send messages to the group, monitor who has seen a message and track everyone's replies, messaging apps such as WhatsApp have dramatically changed group communication. For example, two members of a chat group can routinely switch to interpersonal chatting, choosing which aspects of their communicative dynamics will be visible to the larger group and which ones to keep between just the two of them (Ling, 2017; Tandoc et al., 2018). However, how messages are perceived across these two different forms of communication remains underexplored, given the relative newness of this communication affordance.

Credibility judgments of messages sent to a group chat can also be examined using the framework of SIT. Since users can choose to join or leave a chat group, continued membership in a group can be construed as an expression of a user's identification with the group. It is plausible then that the user will perceive other members of the chat group as members of their in-group, a perception that will affect how that user perceives the messages communicated through the chat group. Group membership can thus function as a heuristic in judging the credibility of the messages sent to the group by fellow members (Mackie et al., 1990). Also at play in a group setting is the effect of conformity: individuals' need to maintain their own status within the in-group can lead them to conform (Hollander, 1958). This might explain why messages communicated to a group might be more persuasive. Thus, in this study, we control for group tie strength in our analysis.

In a series of focus group discussions conducted in Singapore, we found that responding to online falsehoods (how individuals fall for inaccurate claims and how others engage in authenticating information) is essentially a social process (Waruwu et al., 2020). While it may be an individual-level decision, forwarding a post often involves social motivations: to show others we care about them, to warn others or to make others feel they belong (Waruwu et al., 2020). The reasons for authenticating or not authenticating information may also be social: to avoid losing face among one's peer group and to avoid arguments with peers. The process of authenticating is also very social: the default for some participants is to authenticate information they are not sure of by asking family or friends, either in person or via WhatsApp (Tandoc, Ling, et al., 2018; Waruwu et al., 2020).

The framework of social presence theory (SPT) also predicts that messages communicated privately might be more persuasive than those communicated publicly due to social presence, which refers to "the degree of salience of the other person in the interaction and the consequent salience of the interpersonal relationships" (Short et al., 1976, p. 65). Within the framework of SPT, an interpersonal message is likely to be perceived as having a higher level of intimacy and co-presence than a mass message broadcast to the public or to all members of a group. Such perceived social presence may contribute to higher levels of attention and engagement from the receiver (Osei-Frimpong & McLean, 2018; Tu, 2000).
Thus, it is also plausible that a message received privately from another person, such as in a messaging app exchange, will be judged to be more credible than a message received through a group chat. Of course, these are only assumptions deduced from theoretical frameworks, as the differences in credibility perceptions between messages sent through interpersonal chats and messages sent through group chats remain underexplored. Given the novelty of this technological affordance offered by messaging apps, we propose the following:

RQ1: Which correction will be rated more credible: the one delivered through an interpersonal channel or the one delivered through a group channel?

The difference between interpersonal and group correction needs further unpacking. Studies have expressed concern over the spread of falsehoods on messaging apps because, unlike social media platforms, which tend to be open (i.e., one's Facebook post is visible to one's network), messaging apps tend to be closed platforms (i.e., messages exchanged are visible only to the sender and the receiver) (see Pang & Woo, 2020; Tandoc et al., 2018). Falsehoods are thus more difficult to track on messaging apps. This should also be true at the micro-level: private messages are safe from public scrutiny. Depending on the relationship between the two interactants, this might lead to more honest and, arguably, more trustworthy disclosures. But users might also be more suspicious of messages sent privately by distant acquaintances. Social and group-based credibility evaluations (e.g., checking user-generated ratings, checking other people's reactions), which have emerged as popular credibility assessment heuristics (Kim et al., 2014; Metzger et al., 2010), are absent in such private mediated interactions. Hence, we also ask:

RQ2: Is there an interaction effect between source familiarity (familiar vs. unfamiliar) and mode of delivery (interpersonal vs. group) on the perceived credibility of a correction message?

WhatsApp and other messaging apps allow routine switching between interpersonal and group communication. A mediated but asynchronous conversation may start in a chat group, but two members may continue it, or have a side conversation, in an interpersonal exchange by chatting with each other in a separate space. If we are to maximise these spaces for correcting falsehoods, we must understand how users navigate personal and group communication on messaging apps.
8.3 Method

We tested our hypotheses and answered our research questions through an elaborate but small field experiment conducted in February 2020, when the pandemic was just starting in Singapore. We used a 2 (correction source: familiar versus unfamiliar) × 2 (delivery mode: interpersonal versus group) between-subjects experimental design to examine the extent to which participants perceived a correction to a piece of misinformation as credible.
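To make the factorial structure concrete, the sketch below shows one way balanced random assignment to the four cells of such a 2 × 2 between-subjects design could be implemented. This is a minimal illustration under our own assumptions, not the authors' actual randomisation procedure; the function and variable names are hypothetical.

import random
from itertools import product

# The two manipulated factors in the 2 x 2 between-subjects design.
SOURCE = ("familiar", "unfamiliar")    # correction source
DELIVERY = ("interpersonal", "group")  # delivery mode

def assign_conditions(participant_ids, seed=42):
    """Shuffle participants, then deal them round-robin into the four
    cells so that cell sizes stay as balanced as possible."""
    cells = list(product(SOURCE, DELIVERY))  # four experimental conditions
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: cells[i % len(cells)] for i, pid in enumerate(ids)}

# 114 participants, as recruited in this study.
assignments = assign_conditions(range(1, 115))

Dealing shuffled participants round-robin keeps the four cells within one participant of each other in size, which mirrors the roughly equal group sizes reported later in this chapter.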
8.3.1 Participants

The experiment involved 114 university student participants recruited from Nanyang Technological University, who took part in a series of activities spanning five days. The activities included focus group discussions (FGDs), sending memes and news articles to a WhatsApp group and completing three online questionnaires (one before the main experiment, one during the main experiment and one after the main experiment). A student confederate was also embedded as a participant to manipulate source familiarity and the mode of delivery.
8.3.2 Study Procedure

The participants were randomly assigned to one of four experimental groups: (a) familiar source, interpersonal chat; (b) familiar source, group chat; (c) unfamiliar source, interpersonal chat; and (d) unfamiliar source, group chat. To simulate a group setting as well as to introduce our student confederate to the participants assigned to the familiar conditions, we created both offline and online activities across four days.

Day 1: We asked the participants to attend an in-person FGD at a facility in the university for a briefing. Breaking them into four groups, we walked them through the study procedures and asked them to fill out a questionnaire. However, the main purpose of the briefing session was to get the participants to know each member of their respective groups as well as to introduce our source manipulation in the form of a student confederate. The student confederate, acting as a participant, was a male student from the same university. He participated in two of the four FGDs, pretending to be a regular participant. Before the briefing session ended, the moderator created a WhatsApp group chat for all the group members, explaining that the activities for the study would be communicated via the group chat. The confederate was also included in the chat groups, and his WhatsApp profile included his photo.

Day 2: The moderator instructed all participants via their respective WhatsApp chat groups to send a meme to the group. In the familiar condition, the confederate messaged each group member individually to ask a question about the task. This conversation aimed to enhance source familiarity.

Day 3: The moderator instructed all participants to send to their respective WhatsApp groups a news article that they found interesting. Under the pretence of the given task, an experimenter unknown to any of the participants sent a screenshot of a fake news article to the WhatsApp group. This fake news article contained untrue information about preventing dengue fever using coconut oil, an actual falsehood that had circulated in India and the Philippines. Thirty minutes later, the student confederate sent a correction message that debunked the misinformation about the use of coconut oil as protection against dengue fever. Two groups received the correction message via the chat group, while two groups received the correction as a personal message from the confederate.
Some 10 minutes later, the moderator sent a message to all participants, saying he had noticed that some interesting exchanges had been made, and asked the participants to fill out a questionnaire about (a) the fake news post and (b) the correction to the fake news.

Day 4: The moderator asked the participants to complete another online questionnaire about their perceptions of their respective group chats as well as what they thought was the purpose of the study. They were also invited to join a debriefing session. The responses of those who figured out the role of the student confederate were excluded from the final data analysis. The participants also attended a debriefing session with their respective group members, in which they were told the real objectives of the study. They were also asked about their experience with the study as well as other questions about misinformation in Singapore. As part of this project's goal to contribute to combating misinformation and disinformation, a 10-minute presentation on how to detect and debunk fake news was conducted. The presentation showed the participants some online tools and resources that they can use to verify information online.
8.3.3 Manipulation and Dependent Variables

Source Familiarity: In the familiar condition, two groups (n = 40) were made to interact with the student confederate prior to the actual experimental treatment: in person, during the briefing session, and on WhatsApp, during the third day of the experiment. This is similar to the approach used in a previous study that manipulated familiarity with an electronic recommendation agent (Komiak & Benbasat, 2006). The other two groups (n = 40) were not introduced to the student confederate and encountered him on WhatsApp only when he sent the correction message.

Delivery Mode: This refers to whether the correction was sent individually to participants by the confederate or to the group as a whole. Participants in the interpersonal condition (n = 38) received the correction message privately from the confederate, while participants in the group condition (n = 42) received the correction message in the WhatsApp group chat.

Message Credibility: The participants were asked to rate their level of agreement with each of the following descriptions of the correction message they received: believable, factual, credible, trustworthy, reliable, important, accurate and unreliable (reversed). The scale, adapted from previous studies on message credibility (Li & Suh, 2015; Tandoc, 2018), was found to be reliable (Cronbach's α = 0.94).

Source Credibility: The analysis also accounted for the perceived credibility of the student confederate who sent the correction message, consistent with studies that found that source credibility affects message credibility (e.g., Kang et al., 2011; Metzger et al., 2010). The participants were asked to rate, using a 5-point Likert scale, how much they agreed or disagreed with each of the following descriptions of the sender of the correction message: intelligent, untrained, inexpert, informed, incompetent, bright, cares about me, has my interests at heart, self-centred,
concerned with me, insensitive, not understanding, honest, untrustworthy, honourable, moral, unethical and genuine. The eight negatively worded items were reverse-coded. The scale, adapted from McCroskey and Teven (2013), was reliable (Cronbach's α = 0.83).
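As a point of reference, Cronbach's α, the reliability coefficient reported for these scales, is conventionally computed with the standard formula below; the chapter itself does not reproduce it. For a scale of k items Y_1, ..., Y_k summed into a total score X:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total score. Values approaching 1, such as the 0.94 and 0.83 reported here, indicate high internal consistency among the items.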
8.3.4 Covariates

The analysis also controlled for several covariates that the literature suggests could potentially affect participants' belief in the correction.

Frequency of WhatsApp Use: The participants were asked how often they accessed different social media and messaging apps, including WhatsApp, using a 5-point scale from 1 (never) to 5 (very often: more than six times per day). The participants used WhatsApp very frequently (M = 4.50, SD = 0.78).

Frequency of Following Health News: The participants were also asked how often they followed news across different topics, including health and medicine, using a 5-point scale from 1 (never) to 5 (very frequently). This is an important covariate, as previous studies have found that familiarity with a topic can affect individuals' judgments of the credibility of related information (e.g., Gao et al., 2015). The participants did not follow this news topic frequently (M = 2.34, SD = 1.01).

Group Tie Strength: While the first two covariates were measured before the main experiment, perceived tie strength was measured after the main experiment. Since the participants were assigned to different groups, it is important to account for how close they felt to their respective groups. This is also consistent with the framework of social identity theory, which focuses on the importance of belonging to an in-group. Studies have also found that tie strength can affect credibility judgments (e.g., Koo, 2016). Thus, the participants were asked to rate, using a 5-point Likert scale, their level of agreement with each of the following statements about their respective groups: I have good relationships with people who are in this WhatsApp group; I am in close contact with the people who are in this WhatsApp group; I enjoy reading news stories written by the people who are in this WhatsApp group; and I enjoy sharing news stories with the people who are in this WhatsApp group (adapted from Ma et al., 2014). The scale was reliable (Cronbach's α = 0.72). An analysis of variance confirmed there were no significant differences in perceived tie strength between groups, F (5, 74) = 0.96, p > 0.05.

Perceived Credibility of the Fake News Message: The analysis also controlled for the extent to which the participants evaluated the credibility of the fake news message about dengue fever and coconut oil, which was the object of the correction message. This was important to control for, given studies on belief echoes which found that while some people might believe the correction, their original belief in the misinformation might also persist (see Thorson, 2016).
Thus, during the main experiment, the participants were also asked to rate their level of agreement with each of the following descriptions of the fake news post they saw: believable, factual, credible, trustworthy, reliable, important, accurate and unreliable (reversed). This scale, similar to the one used to measure correction message credibility, was also reliable (Cronbach's α = 0.87).
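To illustrate how such scale scores and reliability coefficients are typically derived from raw questionnaire data, here is a minimal sketch. The data, column names and the choice of reverse-coded item are hypothetical stand-ins, not the study's actual dataset; the α computation follows the standard formula given earlier.

import numpy as np
import pandas as pd

# Hypothetical responses: 80 participants x 8 five-point Likert items.
rng = np.random.default_rng(0)
item_cols = [f"cred_{i}" for i in range(1, 9)]
items = pd.DataFrame(rng.integers(1, 6, size=(80, 8)), columns=item_cols)

# Reverse-code a negatively worded item on a 1-5 scale (5 -> 1, 4 -> 2, ...),
# as done for "unreliable" in the credibility scales above.
items["cred_8"] = 6 - items["cred_8"]

def cronbach_alpha(df: pd.DataFrame) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

# The scale score entered into the analysis is the mean across items.
items["scale_score"] = items[item_cols].mean(axis=1)

One design note: reverse-coding before computing α matters, since leaving a negatively worded item unrecoded would deflate the inter-item covariances and, with them, the reliability estimate.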
8.4 Study Results

The dataset for this study combines responses from 114 participants across three questionnaires administered at three different junctures: before the experiment, during the experiment and after the experiment. Before conducting the analysis, the researchers carefully examined the combined dataset, focusing on the open-ended question in the third questionnaire that asked the participants to guess the purpose of the study. Participants who correctly guessed that a confederate was embedded in their group chat were excluded from the final analysis. Incomplete responses were also excluded. These steps left the study with a final sample of 80 participants for analysis, of which 57.5 per cent were female and 87 per cent were Chinese-Singaporean; the average age was 22.79 years (SD = 1.43).

H1 predicted that a familiar source would be rated more credible than an unfamiliar source. An independent samples t-test found no significant difference in how participants rated the source of the correction in the familiar (M = 3.36, SD = 0.67) and the unfamiliar (M = 3.41, SD = 0.47) source conditions, t (78) = −0.35, p > 0.05. H1 is not supported.

A two-way between-subjects analysis of covariance (ANCOVA) was used to examine the effects of source familiarity and mode of delivery on participants' perceptions of the credibility of the correction message. Covariates were also entered into the analysis. Specifically, we entered (a) frequency of WhatsApp use; (b) frequency of health-related news consumption; (c) perceived credibility of the fake news post; (d) perceived credibility of the person who sent the correction; and (e) perceived group tie strength (see Table 8.1). Frequency of WhatsApp use was a significant covariate, F (1, 71) = 5.05, p