Place Branding and Public Diplomacy https://doi.org/10.1057/s41254-024-00324-x
ORIGINAL ARTICLE
Terminology, AI bias, and the risks of current digital public diplomacy practices

Zhao Alexandre Huang, Laboratory DICEN-IDF, UFR Langues et Cultures Étrangères (LCE), Université Paris Nanterre, Paris, France

Revised: 14 December 2023 / Accepted: 5 January 2024
© The Author(s), under exclusive licence to Springer Nature Limited 2024
Abstract

The aim of this study was to demonstrate the relationship between artificial intelligence (AI) bias and digital public diplomacy based on terminology use in three ChatGPT dialogues we initiated. AI bias is discursively constructed through rhetoric and narrative, presenting how users and algorithm designers perceive social reality. These elements of language then spread through Internet technology. This study examined the potential threat of AI bias in constructing knowledge in the digital age. Indeed, AI bias arising from terminology use can shake up the decision-making and communication practices of public diplomacy, especially the formulation and implementation of advocacy. We identified two potential types of bias: (a) the content provided by ChatGPT reflects a set of opinions with a particular orientation that does not account for the multiplicity of viewpoints on complex geopolitical issues, and (b) the answers given by generative AI tools tend to be affirmative views that are not subject to argumentation, justification, and reflection.

Keywords: AI bias · ChatGPT · Public diplomacy · Terminology · Decision making and advocacy
Introduction

Question: How can I introduce the June 4th manifestation in China to students?

ChatGPT: “When discussing this violent event with your students, it is common to refer to it as the Tiananmen Massacre or simply as the Tiananmen events.”

This response is problematic.1 ChatGPT’s seemingly neutral teaching guidelines contain the term “massacre.” After this word, ChatGPT provides another, more neutral expression, “event,” to try to balance its response. However, the lack of a more precise and adequate explanation of these two nuanced terms could still lead to a questionable interpretation of the event by users who are unaware of it. Moreover, the term “massacre” originates from the rhetoric
used by Western countries and media to criticize the Chinese government’s crackdown on the 1989 student movement. Nevertheless, the United Nations Sub-Commission on Human Rights defined this incident as “violent repression” in official discourse (The New York Times 1989, para. 10). Although both “massacre” and “violent repression” have negative connotations, the degrees of negativity represented by these two terms are rhetorically distinct enough to emphasize, downplay, or exclude certain aspects and attributes of the event. This type of terminology can even reformulate facts and reshape perceived social reality. In other words, these subjective and rhetorical terms can influence knowledge acquisition, understanding, attribution, and judgment of an event. In particular, the growing popularity of generative artificial intelligence (AI) is changing how people research and access information. Biased terminology heavily embedded in information
1. Both “Tiananmen Massacre” and “Tiananmen Event” come from Western sources and have been censored by the Chinese government. Beijing has framed the event as a riot, using phrases such as “counterrevolutionary riots” (The Central People’s Government of the People’s Republic of China 2009, para. 1), “riot” (para. 1), “the June 4 storm” (People’s Daily 2001, title), “the political storm at the turn of the spring and summer of 1989” (Economic Daily 2015, para. 9), and “the political turmoil of 1989” (China Radio International 2007, para. 1).
arranged by generative AI might compromise public perception and understanding of reality.

Public diplomacy is a long-term, well-organized series of discursive practices (Huang and Wang 2023) designed to shape how a target civil society perceives a particular event; this influence occurs through the use of rhetoric and terminology to steer the dominant public discourse in that society (Huang 2022), to stimulate consensus among target audiences, and to influence policy decision making in target governments. For instance, Russia and China conceptualized the Russo-Ukrainian war using terms such as “military operation” and “Russo-Ukrainian conflict” to downplay the perception of Russian aggression against Ukraine. When societies accept such expressions, public opinion can push their respective governments to accept them, possibly changing their geopolitical stances on Russia. Conversely, if publics accept the Ukrainian government’s use of the term “genocide” to conceptualize Russia’s invasion, they are likely to push their respective governments to offer aid to Ukraine and impose blockades against Moscow. Thus, in public diplomacy, terminology contributes to frame building to shape international public opinion (Golan et al. 2019).

In the era of generative AI, biased terminology has the potential to pose a more profound and less perceptible threat to the construction of perception, understanding, and knowledge of reality. If we examine this threat within the framework of public diplomacy practice, AI bias, understood as the cognitive biases of users and algorithm designers integrated into an AI system (Caliskan 2023), can influence advocacy formulation. Therefore, we examined the potential impact of AI bias on public diplomacy listening and advocacy, especially in the context of the increasing diversity of public diplomacy actors due to digitalization. We first reviewed the literature on AI bias and digital public diplomacy to explore, at a theoretical level, how AI bias affects the development of digital diplomacy practices. We then sketched how AI bias influences the process of information dissemination from an agency perspective. Finally, through case studies, we focused on two potential biases related to the use of terminology.
AI bias and digital public diplomacy

AI is a data-driven innovation in which algorithms uncover and/or learn associations through scanning, collecting, analyzing, and categorizing data. In this way, any understanding of its bias relates to the flaws of specific models or algorithms in machine learning (Akter et al. 2021). Indeed, from a socio-technical perspective, the design and construction of AI rely on user-generated content in the digital sphere, social listening, and data collection through human-created algorithms, which inevitably will
reflect the acquired knowledge, social customs, political tendencies, and ideologies of humans. Through this process, cognitive biases from both users and algorithm designers enter the AI system and, from there, invisibly grow and spread through networked connectivity, interactivity, and collaboration. The most common biases come not only from discriminatory prejudice or unequal treatment of certain population segments (e.g., women, LGBTQ+ people, and ethnic minorities; Buslón et al. 2023) but also from the dominant Western-centered narratives or frames in which the data exist (Guenduez and Mettler 2023). Therefore, in this study, we considered AI bias a form of knowledge or discourse arising from an algorithm’s tendency to reflect human biases, a generative process that depends on the underlying conscious or unconscious cognitive biases of users and algorithm designers. In subsequent dissemination, such a phenomenon can produce and enlarge “institutional bias” (Ntoutsi et al. 2020, p. 3): the procedures and practices of a given institution tend to operate in a way that favors certain social groups while disadvantaging others. The outcome is a subconscious alteration of social power structures and relations (Diakopoulos 2015).

As seen in the example at the beginning of this article, the impact of AI bias emerges at two levels. At the individual level, when users ask about and read of the Tiananmen Square “massacre” on ChatGPT, they devote their attention and use their thinking to develop an understanding of the event. Despite ChatGPT’s attempt to balance its answer with the term “Tiananmen event” after the term “Tiananmen Massacre,” the seemingly neutral expression “Tiananmen event” is an example of litotes, an attenuation that aims to state less while implying more (Charteris-Black 2004). In this case, parallel pejorative and neutral expressions describing the same element generate a much stronger meaning than the mere enunciation of the idea expressed. The meaning projected by the word “massacre” influences meaning construction, and a particular user might link the term “event” with “massacre” to understand what occurred at Tiananmen. At the collective level, as more users read the word “massacre” on ChatGPT, the cognitive impact can escalate into a collective cognitive orientation, generating collective attitudes, engagement, consensus, and action (Johnston 2018). This collective attitude has the potential to influence public opinion and political decisions in a region.

Therefore, AI bias can taint public diplomacy initiatives. Governments can use AI bias to amplify the narratives and frameworks they defend. Moreover, if a diplomat relies on ChatGPT to prepare a narrative about the Beijing Tiananmen Square issue, AI bias can also affect how that diplomat understands and judges a specific region, people, or event. AI bias might even lead to meaningless narratives or cause unnecessary disputes in public opinion due to problematic terminology.
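To make the mechanism described above concrete, namely, how skewed term frequencies in training data resurface in generated text, consider the following minimal sketch. It is our illustration, not a description of ChatGPT’s actual architecture; the corpus, the competing terms, and all counts are invented for the example.

```python
# A toy frequency-based generator (illustration only, not ChatGPT's design):
# when one framing dominates the training corpus, sampling in proportion to
# corpus frequency reproduces that dominance in the generated output.
from collections import Counter
import random

# Hypothetical corpus: three framings of the same event, with the
# Western-media framing deliberately over-represented for the demo.
corpus = (
    ["Tiananmen massacre"] * 80      # dominant framing in this toy corpus
    + ["Tiananmen event"] * 15       # more neutral framing
    + ["violent repression"] * 5     # UN sub-commission framing
)

term_counts = Counter(corpus)
total = sum(term_counts.values())

def sample_term(rng: random.Random) -> str:
    """Sample a term with probability proportional to its corpus frequency,
    mimicking how a statistical generator absorbs skewed usage."""
    return rng.choices(list(term_counts), weights=list(term_counts.values()))[0]

rng = random.Random(42)
generated = Counter(sample_term(rng) for _ in range(1000))
for term, n in generated.most_common():
    share = term_counts[term] / total
    print(f"{term!r}: generated {n}/1000 times (corpus share {share:.0%})")
```

Because terms are reproduced roughly in proportion to their share of the data, users querying such a system will mostly encounter the majority framing, which is precisely the dynamic of collective cognitive orientation described above.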
Indeed, as a developmental process, digitalization “impacts and shapes the norms, values, and routines of those diplomats dealing with public diplomacy” (Manor and Huang 2022, p. 169), and the transparency, affordances, interactivity, and accessibility of social media have led to a paradigm shift in the logic of information sharing toward a bi-directional, interactive, and co-constructive model of public diplomacy. However, listening and advocacy remain the fundamental elements of public diplomacy (Cull 2019). Listening refers to information monitoring, “being close to the source of foreign policy and hence able to feed into policy or speak about it with real authority” (Cull 2010, p. 13); advocacy, based on adequate information obtained through listening, refers to the dissemination of foreign policies, ideas, and narratives formulated through systematic and strategic analysis (Cull 2019). In other words, listening and advocacy comprise a dynamic and progressive process that directs all public diplomacy actions. Diplomats aim to learn from domestic, external, socio-economic, and geopolitical environments through long-term information monitoring to generate concrete strategies for disseminating actionable messages. Information and knowledge are the vital resources of public diplomacy; they are the basis of advocacy formulation. Weaknesses in this link could mislead actors in foreign policy decision making, and this deviation could jeopardize successful diplomacy (Markovich et al. 2019).
AI bias as a new form of agency

Communicators in the traditional sense of digital communication remain actors with agency, constructing influence and persuasion through active participation in message production and reputational management in mediated spaces. However, the communication activities of AI are subverting this agency, for AI is an ostensibly “rational agent” (Russell and Norvig 2016, p. 4). Indeed, this “bounded rationality” (Şimşek 2020, p. 339) emerges from computer programming and follows logical reasoning (Wever et al. 2021) to achieve the best possible outcome from human-provided inputs. This definition reflects a purpose-and-outcome orientation that ignores the reasoning involved in human communication. Through algorithms, automated scanning and storage, machine learning, natural language processing, and other technologies, AI becomes a potential communication agent: a participant in interpersonal communication that acts in accordance with the operational assumptions, scenarios, and rules of action and functioning established by its algorithms or designers, thereby influencing the information/knowledge acquisition and decision making of users.
In previous studies, scholars have viewed AI as a computational agent that accomplishes communicative or interpersonal goals on behalf of human communicators by modifying, augmenting, or generating information (Hancock et al. 2020). Such automated agent behaviors have also raised concerns about political and socio-ethical issues in academia and industry (Vesnic-Alujevic et al. 2020). While the fundamental logic of generative AI cannot circumvent the basic process of input, processing, and auto-generation, the opacity of information sources, algorithmic pathways, and Western-centric datasets might exacerbate international political communication bias during the input stage. This bias, whether related to political regimes, ideologies, religions, or racial and gender issues (Kaplan and Haenlein 2019), could influence policy making and mediated communication. Especially when designers combine AI proxy communication with algorithmic recommendations on digital platforms, filter bubbles in social media could reinforce cognitive biases, strengthen existing information preferences, and perpetuate stereotyping in long-term interactions, leading to various unintended consequences (Vaassen 2022). In other words, the interpretations and rhetoric representing different positions on the same event could also affect the formulation and execution of communication strategies and narratives by public diplomacy actors, especially non-state actors and individuals, who use AI tools to collect data and prepare position papers. One mechanism through which this shaping influence might occur is the vocabulary and expressions used by AI tools to generate information (i.e., nuanced terminology).
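The filter-bubble dynamic evoked above can also be expressed as a simple reinforcement loop. The toy simulation below is our illustration under stated assumptions: the two frames, the initial weights, and the engagement-update rule are invented, and no real platform’s recommender is being modeled.

```python
# A Polya-urn-style sketch of a filter bubble: recommending in proportion to
# past engagement lets a slight initial lean toward one frame grow over time.
import random

FRAMES = ["enlargement", "eastward expansion"]

def recommend(preference: dict, rng: random.Random) -> str:
    # Show a frame with probability proportional to accumulated engagement.
    return rng.choices(FRAMES, weights=[preference[f] for f in FRAMES])[0]

rng = random.Random(0)
# A slight initial lean toward the "eastward expansion" frame (assumed value).
preference = {"enlargement": 1.0, "eastward expansion": 1.2}

for _ in range(500):
    shown = recommend(preference, rng)
    preference[shown] += 1.0  # engaging with what is shown deepens the lean

total = sum(preference.values())
for frame in FRAMES:
    print(f"{frame}: {preference[frame] / total:.0%} of accumulated exposure")
```

Under this self-reinforcing update, repeated interactions tend to entrench whichever framing a user starts out slightly preferring, consistent with the long-term stereotyping effects discussed above.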
Two potential types of AI bias through terminology

Terminology includes words and expressions that refer to important concepts in a particular field. These words and expressions have not only denotative meanings but also “emotional implications and associations, which may lead to unintentional or intentional discrimination” (Atayde et al. 2021, p. 359). The inappropriate use of terminology in digital public diplomacy initiatives could confuse publics and affect their perceptions of public diplomacy actors and the states they represent. The following dialogue examples illustrate how the terminology used by ChatGPT constitutes AI bias in the context of international relations and digital public diplomacy advocacy formulation. In selecting questions, we intentionally chose common-sense issues related to public diplomacy and geopolitics. We held dialogues with ChatGPT in English and French, respectively, about NATO’s enlargement (Russia’s frame: eastward expansion) and the Crimean crisis, and about the differences between France 24 and Russia Today.
Fig. 1 Dialogue 1: NATO’s eastward expansion and the Crimean crisis (English)
Bias 1: provides a set of opinions that do not account for a plurality of voices

In the first dialogue (see Fig. 1), we intentionally asked ChatGPT questions using the Russian frame of NATO’s “eastward expansion.” ChatGPT’s response did not correct the Russian frame we used but alternated between “eastward expansion” and “enlargement” in the dialogue. ChatGPT’s descriptions of NATO’s development in Europe were somewhat ideological, echoing the framing promoted by Western countries and media. Given that the expression “eastward expansion” is a frame defended by Russia, its allies, and China, we expected ChatGPT’s data collection and responses related to this expression to question the legitimacy and legality of NATO’s development. However, when we asked the question in terms of “eastward expansion,” ChatGPT conceptualized it as a geopolitical collective defense mechanism and “a goodwill gesture towards the countries involved” and linked it to the collective defense of democratic values by related countries. Furthermore, ChatGPT connected
NATO’s enlargement to the historical context of the Cold War, calling the act a way to “provide security guarantees to countries that were formerly part of the Soviet bloc.” Such a formulation implies that those former “Soviet bloc” countries were facing a geopolitical and national security threat from certain powers (principally Russia). Moreover, although ChatGPT’s narrative cast a positive light on the unity of the countries involved, their democratic values, and regional security, it ignored the controversies and criticisms surrounding NATO’s enlargement or eastward expansion voiced by others (e.g., scholars and different parties/stakeholders from the countries involved and other neighboring countries). In other words, the opaque nature of ChatGPT’s information sources revealed a lack of plurality and a relatively strong orientation.

The same phenomenon emerged in our dialogue about the Crimean crisis. When we mentioned Russia’s justification of its war in Crimea, the so-called legitimate defense against NATO’s enlargement, ChatGPT’s responses implied a clash between the democratic values and ideologies of the West and Russia. These replies were closer to opinion and value judgments than a factual understanding of the events, including their sophisticated historical, geopolitical, economic, and social context. From the perspective of dialogism (Corroyer 2018), ChatGPT’s treatment of the Crimean referendum (i.e., “While some argue that it represented the will of the people in Crimea, others question its fairness and adherence to international standards”) seemed intent on confirming the illegality of the referendum, using rhetorical terms and lengthier content to form an oriented response. It did not examine how the referendum might have represented the will of the Crimean people.
Fig. 2 Dialogue 2: Differences between France 24 and Russia Today (French)
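The kind of terminology audit performed informally in this dialogue can be made reproducible with a simple frequency count over saved transcripts. The sketch below is a minimal illustration: the frame terms are drawn from the dialogue discussed above, but the response strings are invented placeholders standing in for actual ChatGPT output.

```python
# Count frame-bearing terms in model responses elicited with different
# prompts; the transcripts below are hypothetical stand-ins, not real output.
import re
from collections import Counter

FRAME_TERMS = ["eastward expansion", "enlargement", "collective defense",
               "security guarantees", "goodwill gesture"]

def count_frames(text: str) -> Counter:
    """Count case-insensitive occurrences of each frame term in a response."""
    lowered = text.lower()
    return Counter({t: len(re.findall(re.escape(t), lowered))
                    for t in FRAME_TERMS})

responses = {
    "prompt_with_russian_frame": ("NATO's eastward expansion was a goodwill "
                                  "gesture; enlargement provided security "
                                  "guarantees to former Soviet bloc states."),
    "prompt_with_nato_frame": ("NATO enlargement strengthened collective "
                               "defense and offered security guarantees."),
}

for label, text in responses.items():
    hits = {term: n for term, n in count_frames(text).items() if n}
    print(label, hits)
```

Comparing which frame terms dominate each response, and whether the model ever contrasts or qualifies them, offers a rough indicator of the orientation described above.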
Bias 2: provides views that are not subject to argumentation, justification, and reflection

ChatGPT’s answers to common-sense questions tended to be affirmative statements that lacked argumentation and rationalization, and while it sometimes quoted references, its accuracy was questionable. Moreover, some of its responses were ideological. For example, when we asked about the differences between Russia Today and France 24 (see Fig. 2), ChatGPT identified the former as a government-funded media outlet and asserted that special political interests often drive this type of media. ChatGPT also specifically recommended that we consult other media outlets (e.g., BBC, Associated Press, and France 24) to verify facts and ensure the reliability of the information. Although ChatGPT advised readers to check facts and compare information from different sources, it explicitly defined Russia Today as a government-sponsored media outlet and reinforced perceptions of this kind of media outlet. In contrast, when we mentioned France 24, a media outlet that ChatGPT suggested consulting, and asked about its credibility, ChatGPT replied in the affirmative. However, France 24 is an international news organization created under the Chirac administration and sponsored by the French government. It was created to serve French public diplomacy: to defend the position of the French language in the world against the dominance of English, and to defend and promote France’s viewpoint on the Iraq war, which differed from that of the U.S. government, while launching the “global battle of images” for France (BBC 2006, para. 7). ChatGPT’s assessment of Russia Today thus rests on conclusions that ignore the comparable political alignment of France 24 with the interests of the French government.

Discussion

The two cases outline the potential impact of AI bias on data collection and advocacy formulation in (digital) public diplomacy initiatives. AI bias is discursively constructed through rhetoric and narrative, presenting how users and algorithm designers perceive social reality. These linguistic phenomena then spread through Internet technology. Although generative AI (e.g., ChatGPT) is likely to stake a significant claim in digital communication, this technology raises several ethical concerns. These concerns primarily relate to bias inherent in the construction of generative AI datasets (i.e., the origin of generative AI knowledge composition) and the algorithms that govern AI output (i.e., how generative AI organizes related content for end users). Scholars who are critical of algorithms and automated data processing have expressed concern about their opacity and unpredictability. Furthermore, we found that ChatGPT responses channeled content dominated by Western media and mainstream opinion. They often carried Western ideals, values, and norms, lacking multiple voices and excluding various information sources. For instance, on the topic of Russia, ChatGPT did not cite Russian voices and perspectives. Although in response to Tiananmen, ChatGPT provided two expressions, “massacre” and “event,” as if to balance its view, neither came from the official definitions
of the Beijing government. The reason for this exclusion might be algorithmic differences or the predominance of content published in English. This phenomenon exemplifies how information inequality might exacerbate the digital divide. DiMaggio et al. (2004) stated that unequal access to information results not only from limited end-user access to the Internet but also from the constraints that service providers face in controlling information availability. In addition, the opacity of information references in generative AI prevents users from identifying sources to confirm authenticity and reliability. This situation brings to mind the manipulation of social media algorithms in the 2016 U.S. elections that altered voter behavior (Yang et al. 2019). Algorithmic technologies and biases can seriously interfere with human cognitive autonomy, and the process of acquiring knowledge about politics and society through generative AI technologies might be subject to algorithmic influence over attitudes toward, perceptions of, and expressions about geopolitical issues. If the opacity and unpredictability of generative AI go unexamined and unregulated, AI bias in international political communication could be harmful.

Alongside generative AI, the ethical issues and challenges that arise from AI in digital diplomacy deserve more scholarly attention. Digital diplomacy scholars need to explore the opportunities and challenges that generative AI brings to international communication and how this technology might increase human well-being and serve public interests instead of becoming a tool for communication competition and geopolitical games.
References

Akter, Shahriar, Grace McCarthy, Shahriar Sajib, Katina Michael, Yogesh K. Dwivedi, John D’Ambra, and K.N. Shen. 2021. Algorithmic Bias in Data-Driven Innovation in the Age of AI. International Journal of Information Management 60 (10): 102387. https://doi.org/10.1016/j.ijinfomgt.2021.102387.

Atayde, Agata M. P., Sacha C. Hauc, Lily G. Bessette, Heidi Danckers, and Richard Saitz. 2021. Changing the Narrative: A Call to End Stigmatizing Terminology Related to Substance Use Disorders. Addiction Research & Theory 29 (5): 359–362. https://doi.org/10.1080/16066359.2021.1875215.

BBC. 2006. “France Launches World TV Channel,” December 6, 2006. http://news.bbc.co.uk/1/hi/world/europe/6215170.stm. Accessed 6 Dec 2022.

Buslón, Nataly, Atia Cortés, Silvina Catuara-Solarz, Davide Cirillo, and Maria José Rementeria. 2023. Raising Awareness of Sex and Gender Bias in Artificial Intelligence and Health. Frontiers in Global Women’s Health 4 (9): 970312. https://doi.org/10.3389/fgwh.2023.970312.

Caliskan, Aylin. 2023. Artificial Intelligence, Bias, and Ethics. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 7007–7013. Macau, SAR China: International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2023/799.
Charteris-Black, J. 2004. Politicians and Rhetoric: The Persuasive Power of Metaphor, 1st ed. Basingstoke: Palgrave Macmillan.

China Radio International. 2007. “China Firmly Opposes the Groundless Accusations Made by the United States against China over the 1989 Political Turmoil.” Archive.Org. June 5, 2007. https://web.archive.org/web/20200522195134/http://news.cri.cn/gb/1321/2007/06/05/1569@1620992.htm.

Corroyer, Grégory. 2018. Critiques du dialogue: Discussion, traduction, participation. Paris: Presses Universitaires du Septentrion.

Cull, Nicholas J. 2010. Public Diplomacy: Seven Lessons for Its Future from Its Past. Place Branding and Public Diplomacy 6 (1): 11–17. https://doi.org/10.1057/pb.2010.4.

Cull, Nicholas J. 2019. Public Diplomacy: Foundations for Global Engagement in the Digital Age. Cambridge, UK: Polity Press.

Diakopoulos, Nicholas. 2015. Algorithmic Accountability. Digital Journalism 3 (3): 398–415. https://doi.org/10.1080/21670811.2014.976411.

DiMaggio, Paul, Eszter Hargittai, Coral Celeste, and Steven Shafer. 2004. Digital Inequality: From Unequal Access to Differentiated Use. In Social Inequality, ed. Kathryn M. Neckerman, 355–400.

Economic Daily. 2015. “Biography of Comrade Wei Jianxing.” Economic Daily. August 17, 2015. http://paper.ce.cn/jjrb/html/2015-08/17/content_253802.htm.

Golan, Guy J., Ilan Manor, and Phillip Arceneaux. 2019. Mediated Public Diplomacy Redefined: Foreign Stakeholder Engagement via Paid, Earned, Shared, and Owned Media. American Behavioral Scientist 63 (12): 1665–1683. https://doi.org/10.1177/0002764219835279.

Guenduez, Ali A., and Tobias Mettler. 2023. Strategically Constructed Narratives on Artificial Intelligence: What Stories Are Told in Governmental Artificial Intelligence Policies? Government Information Quarterly 40 (1): 101719. https://doi.org/10.1016/j.giq.2022.101719.

Hancock, Jeffrey T., Mor Naaman, and Karen Levy. 2020. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication 25 (1): 89–100. https://doi.org/10.1093/jcmc/zmz022.

Huang, Zhao Alexandre. 2022. A Historical-Discursive Analytical Method for Studying the Formulation of Public Diplomacy Institutions. Place Branding and Public Diplomacy 18: 204–215. https://doi.org/10.1057/s41254-021-00246-y.

Huang, Zhao Alexandre, and Rui Wang. 2023. An Intermestic Approach to China’s Public Diplomacy: A Case Study of Beijing’s COVID-19 Communication in the Early Stages. Journal of Communication Management 27 (2): 309–328. https://doi.org/10.1108/JCOM-04-2022-0042.

Johnston, Kim Amanda. 2018. Toward a Theory of Social Engagement. In The Handbook of Communication Engagement, ed. Kim Amanda Johnston and Maureen Taylor, 19–32. Medford: Wiley-Blackwell.

Kaplan, Andreas, and Michael Haenlein. 2019. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons 62 (1): 15–25. https://doi.org/10.1016/j.bushor.2018.08.004.

Manor, Ilan, and Zhao Alexandre Huang. 2022. Digitalization of Public Diplomacy: Concepts, Trends, and Challenges. Communication and the Public 7 (4): 167–175. https://doi.org/10.1177/20570473221138401.

Markovich, Amiram, Kalanit Efrat, Daphne R. Raban, and Anne L. Souchon. 2019. Competitive Intelligence Embeddedness: Drivers and Performance Consequences. European Management Journal 37 (6): 708–718. https://doi.org/10.1016/j.emj.2019.04.003.
Ntoutsi, Eirini, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, et al. 2020. Bias in Data-driven Artificial Intelligence Systems: An Introductory Survey. WIREs Data Mining and Knowledge Discovery 10 (3): e1356. https://doi.org/10.1002/widm.1356.

People’s Daily. 2001. “80 Major Events in the History of the Communist Party of China (72): The Political Storm of 1989.” Archive.Org. June 13, 2001. https://web.archive.org/web/20150209020541/http://www.people.com.cn/GB/shizheng/252/5301/5302/20010613/488133.html.

Russell, Stuart, and Peter Norvig. 2016. Artificial Intelligence: A Modern Approach, Global Edition, 3rd ed. Boston: Pearson.

Şimşek, Özgür. 2020. Bounded Rationality for Artificial Intelligence. In Routledge Handbook of Bounded Rationality, ed. Riccardo Viale, 338–348. London: Routledge.

The Central People’s Government of the People’s Republic of China. 2009. “Chronology of Events in the People’s Republic of China (1989).” The Central People’s Government of the People’s Republic of China. October 9, 2009. https://www.gov.cn/govweb/test/2009-10/09/content_1434332.htm.

The New York Times. 1989. “U.N. Panel Is Asked to Condemn China.” The New York Times, August 17, 1989, sec. World. https://www.nytimes.com/1989/08/17/world/un-panel-is-asked-to-condemn-china.html.

Vaassen, Bram. 2022. AI, Opacity, and Personal Autonomy. Philosophy & Technology 35 (4): 88. https://doi.org/10.1007/s13347-022-00577-5.

Vesnic-Alujevic, Lucia, Susana Nascimento, and Alexandre Pólvora. 2020. Societal and Ethical Impacts of Artificial Intelligence: Critical Notes on European Policy Frameworks. Telecommunications Policy 44 (6): 101961. https://doi.org/10.1016/j.telpol.2020.101961.
Wever, Marcel, Alexander Tornede, Felix Mohr, and Eyke Hüllermeier. 2021. AutoML for Multi-Label Classification: Overview and Empirical Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (9): 3037–3054. https://doi.org/10.1109/TPAMI.2021.3051276.

Yang, Kai-Cheng, Onur Varol, Clayton A. Davis, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer. 2019. Arming the Public with Artificial Intelligence to Counter Social Bots. Human Behavior and Emerging Technologies 1 (1): 48–61. https://doi.org/10.1002/hbe2.115.

Dr. Zhao Alexandre Huang is an Associate Professor in Information and Communication Sciences at the University of Paris Nanterre, where he works in the DICEN-IDF laboratory. As a CPD-SIF Southeast Asia Research Fellow and 2023 Ewha Global Fellow, Dr. Huang’s research focuses on the field of public diplomacy. Specifically, he studies institutional practices, political and public communication strategies, and the formation of strategic narratives in the practice of public diplomacy. His current research interests include China’s international propaganda and digital diplomacy and the digitalization of French public diplomacy.